After being given yet another 'Cook's tour', the OECD's neoliberal ad hoc panel left Japan with its typical neoliberal convictions further entrenched. Outside such polite circles, the talk is that the 'public' is now upset that, with the 'big bang' reforms of 2004, surprise surprise, tuition costs went up at the former national and former public universities, and that the reforms have been an unmitigated disaster even as the predicted demographic disaster has failed to materialize. But the neoliberals of the OECD will just say, "Ah, too little, too late." Meanwhile, with the LDP's fall from power, the Ministry of Education is in disarray (more than its usual disarray, that is), and the former national universities are headed into financial and fiscal decline.
The full OECD report is available at the link below. JHEO will try to analyze the report and write a review in the near future.
http://www.oecd.org/dataoecd/44/12/42280329.pdf
Excerpt one:
It is against this background that, on April 1st 2004, Japanese higher education underwent an unprecedented kind of ‘big bang’ reform. Though regarded with some hostility within the universities themselves, there was a widespread political and public sentiment that reform was overdue and that, in comparison with the higher education systems among Japan’s traditional peers in North America, Australasia and Europe, Japanese universities were falling behind. The reforms, at least in their intention, were fundamental and far-reaching. As a result, though a few years have elapsed since the reforms were introduced, their impact is still working its way through. Japanese tertiary education is still in transition. The desired benefits of the reforms are not yet secured and if they do not materialise, both political and public patience is likely to wear thin. There is a widespread demand that the tertiary education system become, via the modernisation agenda embedded in the reforms, more responsive, more agile, more globally competitive and accompanied by higher standards and higher quality all round.
Excerpt two:
In spite of the pace and scope of change in recent years, much remains to be done. The Ministry of Education, Culture, Sports, Science, and Technology has begun to change from an organisation accustomed to exercising detailed managerial and financial direction of higher education institutions into one that no longer does so. However, it has not yet fully worked out its new role within the tertiary system, nor has it fully equipped itself with the performance-based information and repertoire of incentives that it needs to monitor and shape the activities of newly autonomous institutions. And, for their part, some higher education institutions appear keen to operate as they long have done, holding fast to the Humboldtian vision of the university and to long-standing institutional practices - with respect to academic careers, to internal resource allocation, and institutional leadership.
As we have outlined in the report, we think university leaders and ministry officials have much to gain from embracing continued change; indeed, that it is the necessary condition for gaining wider public investment in the sector. And, we think that central government authorities and stakeholders outside of government have much to gain from enlisting their support. During the course of our visit we met with men and women - professors, administrators, and civil servants - who clearly grasped the new possibilities that deepened reform makes possible, and who are eager to press forward with change. Working together with patience, trust, and understanding they can ensure that Japan's system of tertiary education stands as a model to the entire OECD, and, indeed, the wider world.
18 December 2009
Plans for Manga Library Run into Trouble with New Government in Japan
It looks like Meiji University will host the new manga library, though.
1. http://www.google.com/hostednews/afp/article/ALeqM5gJAzSuC3AOKkfxHPbRwOEBevmQKg
Japanese university plans huge 'manga' library
(AFP)
excerpt:
>> TOKYO: In a move to promote serious study of Japanese manga, a university in Tokyo plans to open a library with two million comic books, animation drawings, video games and other cartoon industry artifacts.
Tentatively named the Tokyo International Manga Library, it would open by early 2015 on the campus of the private Meiji University, and be available to researchers and fans from Japan and abroad. <<
excerpt:
>> The former conservative government of Taro Aso, which was ousted in August elections, had earmarked 11.7 billion yen (128 million dollars) for a museum on Japanese cartoon art and pop culture to be built in Tokyo.
But the plan, part of wider stimulus measures, was axed by the new centre-left government, which criticised the construction as a "state-run manga cafe" that has nothing to do with boosting the economy. <<
2. http://www.exfn.com/manga-library-planned-for-japan
>> A Tokyo university is planning to open a library to promote serious study of Japanese manga comics.
The proposed Tokyo International Manga Library will house two million comic books, animation drawings, video games and other cartoon industry artefacts.
It is hoped the new library will open by early 2015 at Meiji University. <<
Setting the record straight on the language policies of JALT
Well, I've waited over ten years to get a serious response from Oda or Braine.
See:
Message 2: Response to Masaki Oda's chapter in Braine's new book
Date: Tue, 29 Jun 1999 16:49:03 +0900
From: Charles Jannuzi <[e-mail deleted]>
Subject: Response to Masaki Oda's chapter in Braine's new book
[On Monday, June 28, LINGUIST posted a review of George Braine's new book Non-Native Educators in English Language Teaching (Review by Mae Wlazlinski, LINGUIST Review, 10.999.) We received several messages with comments concerning the chapter by Masaki Oda and its topic of English centrism in Japan, specifically in JALT. The messages are included here and we open up discussion on the matter.] I am interested in this title--specifically in the Oda chapter. I have not read his chapter or the rest of the book yet, but I was involved unwillingly in forming Oda's scholarship on the languages of JALT.
There was a debate in the pages of 'The Language Teacher' (TLT), JALT's monthly magazine, that Oda instigated. The discussion actually started when Richard Marshall pointed out that, should JALT change to a bilingual (English and Japanese) official language policy, there might be concerns about JALT's international status in its publications and conferences (to some extent dominated by anglophone scholars who do not live and work in Japan).
Marshall put forward a coherent argument, at least in my opinion, calling for caution on a switch to an official two-language policy, especially in light of the fact that not many people are going to volunteer to translate and interpret for free, and JALT doesn't have the money to pay for such services. (JALT did eventually enact a two-language policy.) Oda responded to Marshall in the pages of TLT, accusing Marshall and the leadership of JALT of "linguistic imperialism" and "linguicism".
I responded in support of Marshall to this extent: I agreed that JALT should have a two-language policy, but argued that Marshall and the leadership of JALT were not linguicists or linguistic imperialists. I also pointed out that, since English speakers are a clear minority in Japan and there are other language minorities here, a two-language policy of Japanese and English would not eliminate the language bias problem or other forms of prejudice (many such problems stem from English's minority status in Japan, in fact). In other words, Japanese was potentially as much a language of discrimination as Oda perceived English to be.
Oda then responded in this manner: he attempted to paraphrase both Marshall's views and my views as one conflated set of views and called me, too, a linguicist and linguistic imperialist. What's more, he seemed upset that I would call him a linguicist and linguistic imperialist because I had pointed out how Japanese had been a language of colonial, imperialist aggression and oppression. In fact, I never directly accused Oda of being a linguicist or linguistic imperialist (the terms are far too problematic for me to fling them around like that). I was myself upset that I had to continue the debate just to defend myself from such misrepresentation and abuse in print. I think that, had the editor of TLT read the entire exchange upon receiving Oda's second response, that response would never have been published.
My concern now is that Oda has created some sort of fiction in the pages of Braine's book because he was upset and embarrassed over the exchange in TLT. I will try to find time to buy and read the book, and if I find that Oda has presented a one-sided, skewed version of the debate in JALT about language just to serve himself and some sort of personal vendetta, I will seek recourse in print and possibly through legal measures.
Labels: George Braine, JALT, linguicism, linguistic imperialism, Masaki Oda
17 December 2009
ELT in Japan online publication now launched
Just to sum up the actions taken: I've decided to use the blog format to publish a 'practitioner journal' for those who teach EFL in Japan (and E. Asia).
The publication is located here:
http://eltinjapan.blogspot.com/
Here is the first issue's top feature:
http://eltinjapan.blogspot.com/2009/12/elt-j-issue-1-feature-article-ten.html
Here is a summary of what is in issue 1 and a list that links to all the articles:
http://eltinjapan.blogspot.com/2009/12/elt-in-japan-issue-1-december-2009.html
Here is a preview of future articles in the issues that will appear in 2010:
http://eltinjapan.blogspot.com/2009/12/preview-of-future-issuess-of-elt-j.html
17 December 2009
'ELT in Japan', Issue 1 (December 2009)
These articles were first published at the related blog established prior to ELT-J, Japan Higher Education Outlook. They have been compiled to form the first issue of ELT-J.
The entire collection can be navigated from this page:
http://japanheo.blogspot.com/2009/12/tefl-forum-so-far.html
>> ELT-J Issue 1 Table of Contents <<
1. Proposes a more useful model/basic unit of phonology for EFL.
http://japanheo.blogspot.com/2009/12/facially-salient-articulatory-gesture.html
2. Looks at a literacy and phonology 'crutch' often used by Japanese EFL learners and relates it to standard concepts in ELT and EFL literacy.
http://japanheo.blogspot.com/2009/12/do-japanese-efl-students-need-katakana.html
3. Sums up ten major reasons why TEFL and EFL are so problematic in Japan and at Japanese universities.
http://japanheo.blogspot.com/2009/12/tefl-forum-ten-reasons-why-english.html
4. This is an article that is conceptually related to the article in item #2 on this list but comes at the issues from a different angle--that is, positive transfer vs. negative interference from the native literacy backgrounds of EFL students.
http://japanheo.blogspot.com/2009/12/tefl-forum-native-writing-systems.html
5. Looks at why TEFL/ELT/TESOL need a new approach to 'theory' and 'practice', where real theory emerges from real practice.
http://japanheo.blogspot.com/2009/09/breaking-down-theory-vs-practice.html
6. Questions the value of most academic research on ELT and FLL (e.g., 'SLA' research).
http://japanheo.blogspot.com/2009/06/why-is-research-in-elttefltesolalsla-so.html
7. An earlier version of item #3 on this list. Gives a briefer overview of the ten reasons and links back to the individual articles in which they were discussed in more detail.
http://japanheo.blogspot.com/2009/04/ten-reaons-why-english-education-in.html
8. Gives an overview of the many issues foreign nationals (e.g., 'native speakers of English') face teaching at the level of higher education in Japan, including TEFL at this level.
http://japanheo.blogspot.com/2008/03/teaching-as-foreign-national-at.html
Labels: AL, EFL, ELT, SLA, teaching English in Japan, TEFL, TEFL Forum, TESOL
------------------
Future issues will include articles on teaching vocabulary, pronunciation, and writing multiple-choice questions.
Posted by CEJ at 11:23
Labels: AL, EFL, ELT, ELT in Japan, SLA, teaching English in Japan, Teaching in Japan, TEFL, TEFL Forum, TEFL in Japan, TESOL
TEFL FORUM - Preview of future issues of ELT-J Online Magazine
These will also be announced in the TEFL Forum section of Japan Higher Education Outlook.
Preview of future issues of 'ELT-J Online Magazine':
1. Vocabulary activities for the conversation class.
2. Vocabulary activities for the writing class.
3. Teaching English /l/ vs. /r/ (applied phonology).
4. Introducing a different sort of audio-visual electronic dictionary for FL learning.
5. Variations of the multiple-choice vocabulary question for FL practice and assessment.
All these and more are under development for publication at ELT-J.
16 December 2009
Japan's Government Says "Institutional Bottlenecks" Hinder Science Plan
Perhaps this helps explain why there were no Japanese universities in the THES-QS top 20 in 2009. For example, the difficulties surrounding foreign personnel who teach and conduct research continue. Also, the report cites an innovation--fixed-term positions--as an improvement, but if anything such contractual posts have contributed to instability in research and to the unfair treatment of foreign personnel, women, and junior colleagues.
http://www.mext.go.jp/english/wp/1260270.htm
White Paper on Science and Technology 2008 (Provisional Translation)
http://www.mext.go.jp/component/english/__icsFiles/afieldfile/2009/04/23/1260307_1.pdf
excerpt:
Elimination of Institutional Bottleneck to Dissemination of S&T Outcome to Society
To create S&T-based innovation, it is necessary to ensure that research results achieved at universities and other research institutions be steadily disseminated to society. Active exchange of researchers, smooth implementation of research activities, industry-academia-government cooperation, etc. have great effects not only on activation of R&D but on the return of research results to society and are the keys for enhancing effects of human and material investment on S&T. In order to realize this, approaches for elimination of institutional bottleneck in various aspects, such as the research exchange system, fixed-term system for researchers, independent administrative institution system, national university corporation system, and the intellectual property system, were performed to obtain significant progress. However, it is often said that there still exist institutional bottlenecks: emigration and immigration management of foreign researchers; working environment of female researchers who are involved in childbirth and child rearing; treatment of retirement allowance associated with movement between the research institutions; and fund procurement environment of research institutions have been identified. Among those, it is very important that elimination of institutional bottlenecks relating to clinical research involving clinical trials is pointed out. In our country encountering an aging society with a declining birth rate, which is the fastest in the world, clinical research involving clinical trials is the R&D means for realizing innovation leading to health enhancement of our nation and activation of the research is considered to bring great national benefits. It is essential to ensure that Japanese nationals can have earlier access to the world’s most advanced medical technologies; that the Japanese medical industry can aggressively pursue R&D activities and sharpen its international competitive edge; and that the health of the nation will further improve by eliminating institutional bottlenecks obstructing research activities and promoting clinical research involving clinical trials.
Profile of Japan's top university--U. of Tokyo (Toudai)
Recently Toudai dropped out of the THES-QS top 20 ranking of universities worldwide. I thought this would be a good time to run an alternative version of an earlier piece on the University of Tokyo.
---------------------------
Is University of Tokyo Japan's only world-class university?
Charles Jannuzi, University of Fukui, Japan
It is unique and elite
When the university system of Japan is compared internationally, one institution is most often cited as Japan's best example of a 'world-class' university. This is, of course, the University of Tokyo ('Toukyou Daigaku', or 'Toudai' for short). Toudai is perhaps most famous for graduating and networking elite bureaucrats and politicians, including prime ministers; however, its supposed lock on leadership in top government has waned over the past two decades. For example, this century's most popular prime minister of Japan, Junichiro Koizumi, and many of his advisors were graduates of the elite private Keio University.
Historically speaking, Toudai has the unique distinction of belonging to two important groups of universities. First, it was established in 1877, shortly after the imperial restoration, as the 'national university' of Japan. A decade later it became the top institution of a group referred to collectively as the imperial universities ('teikoku daigaku'), run by the national government in Japan but extended to Korea and Taiwan as well. These eventually formed a system of nine institutions, each serving as the top institution of its respective Japanese region or colonial territory.
Second, University of Tokyo is the foremost (and only national-public) member of Japan's 'Ivy League' of six elite Tokyo institutions (the other five being the private universities of Waseda, Keio, Housei, Meiji, and the Christian Rikkyou). Like its five private elite counterparts, Toudai's traditions and its reputation go back to the Meiji Restoration, a tumultuous period of forced westernization and development. Interestingly, one tradition did not survive the transition to modernity: foreign academics used to comprise the majority of the teaching and research faculty at the national universities during the Meiji and Showa eras, whereas now, in the era of internationalization and globalization, they are but a small percentage.
Toudai joins a third group of institutions in the mass era of HE in Japan
In the post-second world war era, stemming from the Occupation's reform of Japanese university education in 1949 but also in response to the needs of the development state, Toudai has flourished at the pinnacle of an expanded, much less elite national university system, which had grown to a total of nearly 100 institutions. However, after a period of top-down administrative reforms and forced mergers starting in the 1990s, the 87 remaining national universities were given new corporate charters and a considerable degree of administrative independence in April 2004.
Big, diverse and diversified by Japanese university standards
The newly corporatised University of Tokyo consists of three campuses, all in the Kanto region (with other facilities scattered about Tokyo and other parts of Japan). It has a total enrolment of around 28,000 students (quite large for Japan) coming from top senior high schools (and cram schools) from all over the country. Toudai also plays host to 2,100 of Japan's population of 120,000 international students (the largest total in Japan, with Chinese nationals forming the single largest group). Toudai has a faculty of 2,800 professors, associate professors, and lecturers. Annually some 2,200 foreign researchers visit for short and long periods of exchange. Toudai is a co-educational, multi-disciplinary university with a comprehensive range of taught programs, post-graduate research, and professional schools (such as its legendary law school).
Perhaps this enormous diversity in faculties, disciplines, and programmes reveals the university as a 'jack of all trades, but master of none' when compared to more focused, dynamic institutions. Like its cross-town rival, Waseda, Toudai's most famous and revered alumni are perhaps its literary figures, such as Soseki Natsume, Yukio Mishima, Kobo Abe, and the Nobel laureates Yasunari Kawabata and Kenzaburo Oe. And, although the university can claim some top prizes in science and technology, the Toudai academic who most recently won a Nobel, Masatoshi Koshiba in 2002, did so for work on cosmic neutrino detection done in the 1980s. Kyoto University has more top prizes in science, including Nobels, and Tohoku University and the University of Tsukuba are both widely considered to be more on the cutting edge in many important areas of scientific research.
Waves of reform in university education during the 1990s brought about sweeping changes in curriculum, graduate education, doctoral level research, and national university administration. The better managed and financed of Toudai's rivals took aim at beating the institution in certain niches, such as in the establishment of western style professional schools in law, business and accounting. Moreover, with their ties to industry and manufacturing, institutions such as the Tokyo Institute of Technology and the Nagoya Institute of Technology have emerged as more effectively focused on science and technology with actual commercial applications.
The University of Tokyo has tried to meet such challenges through the establishment of new graduate schools of an interdisciplinary or innovative nature: Frontier Sciences, Interdisciplinary Information Studies, Information Science and Technology, and Public Policy. Because of the extent of direct subsidy from the national government (either in block grants or through competition), the university also hosts some top research institutes, such as its Institute of Medical Science, Earthquake Research Institute, Institute for Cosmic Ray Research, and Institute for Solid State Physics.
Where Toudai outdoes the West, it is largely unknown in the West
Perhaps where Toudai is most on the cutting edge of science and technology, its people and their accomplishments are also the most unheralded. One very good example of this is Prof. Ken Sakamura's TRON OS project launched in 1984. The successful development of TRON was initially hampered by trade concerns under the US's Super 301 Trade Law (which largely kept it off personal computers because of a fear of reprisals) but was held back also by a lack of cooperation amongst competing companies unused to working together on an 'open source' project.
In its most successful manifestations, TRON is an embedded operating system running such ubiquitous devices as mobile phones, fax machines, kitchen appliances, and car navigation systems. It is estimated that some form of TRON now runs over 3 billion such electronic appliances worldwide, making it the world's most popular (but still largely unknown) OS. Because of the speed advantages of TRON computing, even proponents of Linux and Windows computing are now working with the TRON project to produce portable, personal computing and communications devices and hybrid operating systems--the software that will run the hardware required in the coming age of ubiquitous, fully networked computing. TRON, either by itself or in combination with Linux, could also play a major role in a recently announced pan-Asian effort to create an open-source OS for personal computing and the internet.
It would seem that it is with computers that Toudai continues to excel. About the time the TRON OS was really taking off as the embedded OS for Japanese electronic appliances, another research group at the University of Tokyo started the MD Grape project in order to design computer chips for supercomputers doing calculations in astrophysics. Researchers at Riken, a super group of national research institutes and centers in Japan, have adapted Toudai's MD Grape chip for applications in life sciences and molecular dynamics. Meanwhile, research at the University of Tokyo continues in order to develop a more general purpose chip capable of an incredible one trillion plus calculations per second.
-------------------------
See also:
http://www.topuniversities.com/
http://www.topuniversities.com/university-rankings/world-university-rankings/2009/results
---------------------------
TEFL FORUM SO FAR
Here is a list (with links) of the TEFL-related articles that have appeared at JHEO so far.
They are listed from most recent to oldest. The TEFL Forum here at JHEO will next move on to articles on teaching pronunciation and vocabulary.
TEFL FORUM SUMMARY
1. Proposes a more useful model/basic unit of phonology for EFL.
http://japanheo.blogspot.com/2009/12/facially-salient-articulatory-gesture.html
2. Looks at a literacy and phonology 'crutch' often used by Japanese EFL learners and relates it to standard concepts in ELT and EFL literacy.
http://japanheo.blogspot.com/2009/12/do-japanese-efl-students-need-katakana.html
3. Sums up ten major reasons why TEFL and EFL are so problematic in Japan and at Japanese universities.
http://japanheo.blogspot.com/2009/12/tefl-forum-ten-reasons-why-english.html
4. This is an article that is conceptually related to the article in item #2 on this list but comes at the issues from a different angle--that is, positive transfer vs. negative interference from the native literacy backgrounds of EFL students.
http://japanheo.blogspot.com/2009/12/tefl-forum-native-writing-systems.html
5. Looks at why TEFL/ELT/TESOL need a new approach to 'theory' and 'practice', where real theory emerges from real practice.
http://japanheo.blogspot.com/2009/09/breaking-down-theory-vs-practice.html
6. Questions the value of most academic research on ELT and FLL (e.g., 'SLA' research).
http://japanheo.blogspot.com/2009/06/why-is-research-in-elttefltesolalsla-so.html
7. An earlier version of item #3 on this list. Gives a briefer overview of the ten reasons and links back to the individual articles in which they were discussed in more detail.
http://japanheo.blogspot.com/2009/04/ten-reaons-why-english-education-in.html
8. Gives an overview of the many issues foreign nationals (e.g., 'native speakers of English') face teaching at the level of higher education in Japan, including TEFL at this level.
http://japanheo.blogspot.com/2008/03/teaching-as-foreign-national-at.html
Labels: AL, EFL, ELT, SLA, teaching English in Japan, TEFL, TEFL Forum, TESOL
TEFL FORUM: The Facially Salient Articulatory Gesture as a Basic Unit for Applied Phonology in ELT
The Facially Salient Articulatory Gesture as a Basic Unit for Applied Phonology in ELT
Charles Jannuzi, University of Fukui, Japan
Introduction
This paper summarizes the analysis and interpretation of the results of two electromyographic procedures in experimental phonology. The results have been interpreted and analyzed using concepts and theory from linguistics, applied linguistics, and phonology, specifically articulatory phonology. The first electromyographic procedure, carried out on one native speaker of English, obtained data on the consonant sounds of English. The second was used to explore the large vowel system of English.
Based on the results of these experiments, we propose a new theory about the basic sub-lexical unit of speech production and perception. This paper posits a new, discrete, invariant, psychological unit of phonology that functions below the level of word meaning to organize language. This model is a variation of the articulatory gesture of articulatory phonology and phonetics, and it has implications and applications relevant to many areas of applied linguistics and language education, including native language arts, second and foreign language learning, and literacy. In order to contrast the new concept with the previously established concepts of the 'phoneme' and 'feature', we will call the new phonological prime the 'visual articulatory gesture' or, alternatively, the 'facially salient articulatory gesture'. The advantage of this new basic sub-lexical unit in phonology--and as a model for applied phonology in support of TEFL--is not merely that it meets the need in linguistics, applied linguistics and educational linguistics for an abstract model that makes better phonetic and psychological sense. Rather, we feel strongly that a model truer to linguistic and psychological reality will yield better concepts, principles and practices for the classroom and for materials.
The theory that emerges from our research helps to solve the problem of the lack of phonetic realism that plagues structuralist, behaviorist and formalist accounts of the phonology of a language in actual acquisition and then communicative use (production and perception). In part, this model of phonology is based on a view of language as a learning system that builds up to a learned, stable state of functional complexity (that is, the flow from language acquisition and learning to fluent use of a language to learn and communicate).
The 'learning to learn' stage involves necessary and sufficient inputs and feedback from visual, acoustic-phonetic and kinesthetic signals. We call the most basic, sub-lexical, phonological unit of this model (and indeed all language use) the 'articulatory gesture'. However, unlike previously established conceptualizations of the term 'articulatory gesture', which never really address what is meant by the term 'gesture', our basic sub-lexical unit involves 'faciality' or 'facial salience' in the visual and physiological components.
In this way we clarify why articulatory gestures are gestural in a linguistic sense and can help account for rapid, reduced, connected, co-articulated speech. Unlike the descriptively simplistic but non-explanatory abstractions of the phoneme or feature, articulatory gestures ARE NOT merely formalizations of repetitious, sequenced movements of articulators tracked at prominent points of articulation. Rather, the articulatory gesture as a unit of phonology helps model psychological control of both language production and perception. For a schematic overview of the articulatory gesture alongside its previously established analogues, see Figure One (link to graphic below).
Hyperlink to Figure One Graphic.
Legacy concept: the structuralist phoneme
This term is perhaps most often defined and thought of in linguistics and language teaching as the smallest sound unit that creates lexical contrasts in a language. For example, we might posit the existence of the /b/ phoneme in 'bat' or 'bin' if we contrast them with the rhyming words 'at' or 'in' and see that these words differ from the former by the absence or presence of one consonant phoneme. Or we might isolate a vowel contrast by placing 'bet' alongside 'bit', thereby helping us to distinguish between the vowels /e/ and /i/. There is something troubling, however, about needing words, or lexical-level meaning, to help us define or determine what sub-lexical and even sub-syllabic sound segments are. Moreover, we have to think of phonemes as idealized or psychological categories of sounds, not actual instantiations of sound categories, to the point where phonemes subsist as mentalist or social super-structuralist objects in some non-material realm, shorn entirely of their phonetic identities. Another aspect to consider is just where in words phonemes can occur--that is, be instantiated. We might think of the nasal-velar sound at the end of the word 'ring' as an example of a phoneme of English, but the distribution rules for that phoneme in English determine that such a sound is not possible at the beginning of a syllable or word.
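As a concrete illustration of the minimal-pair procedure described above, here is a brief Python sketch (not from the original article; the toy lexicon and its transcriptions are invented purely for demonstration) that mechanically extracts minimal pairs from a phonemically transcribed word list:

# Illustrative sketch: discovering minimal pairs in a toy lexicon.
# The transcriptions below are assumptions for demonstration only.
from itertools import combinations

# Hypothetical lexicon: word -> tuple of phoneme symbols
LEXICON = {
    "bat": ("b", "ae", "t"),
    "bet": ("b", "e", "t"),
    "bit": ("b", "i", "t"),
    "bin": ("b", "i", "n"),
}

def differs_by_one(p1, p2):
    # True if two equal-length transcriptions differ in exactly one segment.
    return len(p1) == len(p2) and sum(a != b for a, b in zip(p1, p2)) == 1

def minimal_pairs(lexicon):
    # Yield (word1, word2, (sound1, sound2)) for every minimal pair found.
    for (w1, p1), (w2, p2) in combinations(lexicon.items(), 2):
        if differs_by_one(p1, p2):
            contrast = next((a, b) for a, b in zip(p1, p2) if a != b)
            yield w1, w2, contrast

for w1, w2, (a, b) in minimal_pairs(LEXICON):
    print(f"{w1} / {w2}: /{a}/ vs /{b}/")  # e.g. bet / bit: /e/ vs /i/

Note that even this purely mechanical procedure presupposes already-segmented transcriptions, which is precisely the circularity objected to above: the 'discovery' of phonemes depends on word meanings and on a segmentation that real speech does not naturally provide.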
Unfortunately, overall, the 'phoneme' does not hold up to close linguistic scrutiny of how languages are actually spoken, conveyed through the air as sound, received as audible material, and then perceived or integrated into linguistic understanding and memory. Real speech does not naturally segment--we do not speak in discrete blips of Morse code. We can artificially segment spoken English, but the 'sounds' we encounter will far exceed the inventories of 44-48 phonemes that phonemic accounts typically give. The phoneme cannot be found in the mouth; it cannot be found in the air coming out of the mouth; it cannot be found in the air going to someone's ear; and it cannot be found in someone's ear. So then we are supposed to believe it is a socio-structuralist or psychologically real object, in which case we hardly need phonetic criteria to delimit it. And this is why phonetic analysis of phonemes always founders on phonological nonsense, or at least on phonetic oversimplifications, if not outright contradictions.
Take, for example, two of the most common types of sounds in English: indistinct, neutralized vowels of very low productive intensity (such as unstressed /i/ and schwa) and glottal consonants, which are articulated at the extreme ends of the vocal tract (the glottis and the front of the mouth). How should we phonemically interpret the schwa? Is it, phonemically speaking, the most common vowel sound of English and a category in its own right, or is it so common because it is an unstressed allophone of so many other vowels? Why should so many distinct vowels converge on the same sound for an unstressed allophone? Or what about geminate glottal consonants (such as the glottal /t/ we might find in the middle of the word 'cattail')? Phonemic accounts of the schwa or the glottal geminate might say that they are phonemes in their own right and that, when they alternate with other sounds, these are processes of morphophonemic alternation. However, what if we said that the schwa is just a phonetic variant of most of the vowels of English, and the glottal geminate a variant of the English consonant stops? After all, there is enough phonetic similarity to make the case.
Other difficulties of interpretation and explanation abound. How should we phonemically interpret vowels in languages with diphthongs and triphthongs? How should we actually phonemically interpret the 'ng' sound(s) at the end of 'ring' or 'sing'? Native intuition is that they are two sounds or sub-syllabic elements, such as two concurrent, distinct but overlapping features (which is why no one has a real problem with the digraph in the orthography). But you could put these words into minimal pairs and come up with all sorts of contrasts: 'ring' vs. 'rig', 'sing' vs. 'sin', etc. One phonemic account might even put the sound in the same category as /h/, because /h/ is distributionally the opposite of 'ng' and comes only at the beginning of a word or syllable--except that the failure to meet any criterion of phonetic similarity might be invoked. Or how should we treat the in-/im- prefix? Is the prefix of 'inert' and 'immobile' different in its forms because of morphophonemic variation, or could one argue that either the /n/ or the phonetically similar /m/ is actually an allophonic variation of one phoneme? The phonemic model for teaching and learning a foreign language's phonology predominates, and it is largely a formal model inherited from structuralism. Even if we supplement or supplant the idea of phonemic segments (segmentals) with suprasegmentals (e.g., intonation), the basic idea still centers the learning of pronunciation on the perception of arbitrary, social-systemic contrasts enabling an individual as language user or learner to understand spoken language.
However, it is impossible to locate the phoneme or contrastive segment in articulation, in the acoustics of physical space, or in the reception and immediate analysis of the speech signal. This delimits it, if it exists at all, as a largely inaccessible and overly abstract logical category removed from actual speech and from the psychological control of the vocal tract. Such opaque, black-box concepts do not transfer well to the classroom, where effective simplification is necessary for teaching and presentation to have an impact on language learners.
One might ask of the phoneme: if we do not say it and cannot naturally find it in the speech signal, why do we think it actually exists in language? It could be conjectured that the concept of the phoneme is actually a metalinguistic artifact of psychological perception--a super-category imposed on sounds and the vocal tract--stemming from linguistic insights about meaningful language use or even literacy in an alphabetic language. We could also argue that some concept of the phoneme is a convenient fiction which allows us to refer more consistently to key points and manners of articulation in written language than does standard English spelling, which is more geared toward preserving etymological relationships across inflected and derived forms.
Legacy Concept: the Contrastive Feature
Early on in structuralist approaches to phonology (and later in transformational ones as well), another idea was posited that supplemented in more detail the earlier concept of the contrastive phoneme: the contrastive feature. Phonemes, it was theorized, could be broken down further into distinctive features. For example, whether or not they are phonemically contrastive, phonetically speaking what typically separates a language's /t/ from its /d/ is the feature of voicing. Or, as another example, voicing and aspiration might distinguish an initial /b/ from an initial /p/. One difficulty of breaking phonemes down into features, however, is that the aspects of speech that have been called features are a confusing mix of psychological, articulatory-gestural, phonetic and acoustic phenomena.
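To make the feature idea concrete, here is a second illustrative Python sketch (again hypothetical; the binary feature values are deliberately simplified assumptions, e.g. treating initial /t/ and /p/ as uniformly aspirated) that models phonemes as feature bundles and reports which features distinguish any two of them:

# Illustrative sketch: phonemes as bundles of (simplified) features.
# Feature values are assumptions for demonstration, not a full analysis.
FEATURES = {
    "t": {"voiced": False, "aspirated": True,  "place": "alveolar", "manner": "stop"},
    "d": {"voiced": True,  "aspirated": False, "place": "alveolar", "manner": "stop"},
    "p": {"voiced": False, "aspirated": True,  "place": "bilabial", "manner": "stop"},
    "b": {"voiced": True,  "aspirated": False, "place": "bilabial", "manner": "stop"},
}

def distinguishing_features(a, b):
    # Return the features on which phonemes a and b disagree.
    fa, fb = FEATURES[a], FEATURES[b]
    return {k: (fa[k], fb[k]) for k in fa if fa[k] != fb[k]}

print(distinguishing_features("t", "d"))
# -> {'voiced': (False, True), 'aspirated': (True, False)}
print(distinguishing_features("t", "p"))
# -> {'place': ('alveolar', 'bilabial')}

Of course, as the next paragraph argues, real features do not behave like tidy per-segment attributes: acoustically they spread over whole syllables and even word boundaries, which is exactly what a static table like this cannot capture.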
Typical notions of features move back and forth between articulatory criteria (a point or manner of articulation in the vocal or respiratory tract) and acoustic ones (an effect found on an oscilloscope). Is something a feature only if a listener hears it, or could a feature be something that is physiologically experienced and subsequently anticipated by the speaker? A second problem is that, as described in much of the discourse, features are not truly sub-syllabic; at least phonetically and acoustically speaking, features demonstrably spread over whole syllables, words and even word boundaries. Features, then, if we actually break up speech in order to demonstrate their existence, are supposed to work more like the various notes of a chord, either struck almost simultaneously or plucked out in quick succession but sustained and stretched over an entire bar (in this case, syllables and syllable sequences) to create harmonies and dissonances.
Evolution of language as gestural in nature
The human ability to socially convey thoughts, intentions, emotions, beliefs, and culturally bound ways of living largely depends on the structured use of language. This cognitively controlled, structured system for communication, we contend, evolved first as a visual-gestural system of body language quite analogous to the sign languages of the deaf in use today. That is, we are talking about a gestural language that involves not only the hands and arms but also movements of the muscles of the face to produce a form of controlled speech that relies more on the visual conveyance of information than on the acoustic mode. The full development of human language as we now know it, however, overlapped with the emergence of considerable auditory and phonetic abilities crucial to the survival of the human species. These beneficial auditory and phonetic talents also took on communicative functions contributing to the survival and adaptation of the species.
Over time the visual-gestural system of language converged with these auditory and phonetic powers to produce what we know today as the human language faculty. It might make more sense to view the auditory and phonetic aspects of human language as dominant over the visual and gestural ones, and not all visual-gestural aspects of communication are linguistic in nature, though many can be specific to particular groups and cultures. Even so, it may well be that visual and gestural abilities remain more integral to the psychology of language control and acquisition. Gesture, for example, is two-sided: it provides a visual signal for someone at the receiving end, but the person producing the gesture also experiences it physiologically.
The ability to communicate with a human language depends essentially on a psychologically controlled, coordinated speech and auditory system for the planned production and meaningful perception of language. It should be pointed out that speech production itself depends on a convergence of more basic systems, such as those used to hear, breathe, eat, and make non-linguistic noise. Hearing, too, has more basic non-linguistic roles, enabling humans to distinguish and make a phonetically diverse set of noises for communication, such as signals and sound camouflage. Human language, however, involves more than speech production and auditory perception. It also involves visual and kinesthetic aspects, and structural complexity ranging from the phonological to the lexical to the syntactic. The visual and kinesthetic aspects of phonology, moreover, quite likely play an even larger role in language acquisition than they do in mature, fluent, native language use for everyday communication.
A new, more pedagogically useful phonological prime proposed
In lieu of the previously established concepts of the phoneme and the feature, we call the most basic, sub-lexical, phonological unit of speech production and processing the 'articulatory gesture'. However, unlike previous conceptualizations of this term (for example, Browman & Goldstein, 1992), our basic sub-lexical unit involves 'faciality' or 'facial salience' in order to explain how a unit of speech can function as a linguistic 'gesture'. It must be noted, though, that the level of language at which speech production and processing should be interpreted remains a theoretically undecided issue. Does the articulatory gesture map onto language at a sub-lexical level (such as the syllable or mora)? Or does it actually correspond, in an explanatory manner, to the spoken and psychological level of word meaning--that is, to the allomorph and the morpheme? If the latter holds, then morpho-phonology would assume primary importance in any research program. Clearly, more conceptual, theoretical and experimental work is required before this issue can begin to be resolved.
Using linguistic analysis and interpretation of the results of two experiments in electromyography, we propose a new theory of phonology concerning the basic unit of sub-lexical language. Modern phonology has long sought a basic, psychological, discrete, invariant unit of language subsisting beneath the word level in order to model, describe and explain language acquisition, processing, perception and expression. That is, in order to solve the age-old problem of 'how infinite use is made of finite means', phonological inquiry needs a basic, stable, sub-lexical unit that works across all aspects of a language and across all speakers of a language. Such a unit is not deduced to exist merely because speech can be segmented into consonants and vowels that form syllables; that could simply reflect a phonetic reality of speech as analyzed by linguists. Rather, a phonological prime subsisting at a sub-lexical level of language must also function as a unit in the mental language-planning stage that controls meaningful language use.
Some Implications for Applied Linguistics and Educational Linguistics
Such a psychologically, physiologically and phonetically realistic basic unit for phonology should yield better teaching and learning materials (including software) in applied, practical and clinical areas such as the following: (1) foreign language teaching and learning, (2) speech therapy, and (3) learning disabilities, such as reading and text processing disorders. The approach should also have important implications for the development of speech recognition for automated word processing, language translation and artificial intelligence.
Applied linguistics offers various approaches to studying, describing, analyzing and explaining the production, transmission and perception of a language's phonology. One crucial problem, however, is turning such accounts into information useful for second or foreign language pedagogy. Phonological concepts and terminology for teaching often seem overly complex and abstract--if not outright contradictory--to both teachers and students. One possible reason for this perceived difficulty is that many of the terms and models used to teach phonology simply are not useful for adolescents and adults learning a phonology. On the one hand, the meaning of terms in phonological discourse comes to seem opaque to students and even to the teachers attempting to explain and demonstrate them. On the other hand, the concepts are too simplistic and static to do justice to the phonological, phonetic and physiological complexity that a learner must deal with in mastering a second or foreign language's phonology.
The articulatory gesture and its implications for language learning
A third approach to FL pronunciation and phonology would be to appeal broadly but coherently to those aspects of speech that pertain to the phonetics and physiology as well as the psychology of speech. Rather than being determined through analysis of static, binary contrasts, the sub-syllabic units of speech are deduced to exist and are represented through dynamic descriptions of a complex of movements occurring in the vocal tract, mouth and facial muscles. This is called an articulatory-gestural approach, or articulatory phonology. According to this approach, the basic units of phonological contrast are gestures, which are also abstract characterizations of articulatory events, each with an intrinsic time or duration. Utterances are modeled as organized patterns (constellations) of gestures, in which gestural units may overlap in time. The phonological structures defined in this way provide a set of articulatorily based natural classes. Moreover, the patterns of overlapping organization can be used to specify important aspects of the phonological structure of particular languages, and to account, in a coherent and general way, for a variety of different types of phonological variation. Such variation includes allophonic variation and fluent speech alternations, as well as 'coarticulation' and speech errors. Finally, it is suggested that the gestural approach clarifies our understanding of phonological development, by positing that prelinguistic units of action are harnessed into (gestural) phonological structures through differentiation and coordination. (Browman & Goldstein, 1992, p. 155)
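The 'gestural score' idea in the passage above lends itself to a concrete illustration. The following toy sketch in Python (our own illustration, not Browman and Goldstein's formalism; all articulator names and timing values are hypothetical) shows how an utterance might be represented as a constellation of gestures, each with an intrinsic duration, that overlap in time rather than forming a strict sequence of segments:

# A toy sketch of a 'gestural score': each gesture is an articulatory event
# with an intrinsic duration, and an utterance is a constellation of
# gestures that may overlap in time. Names and timings are hypothetical.
from dataclasses import dataclass

@dataclass
class Gesture:
    articulator: str   # e.g., 'lips', 'velum', 'tongue body'
    target: str        # e.g., 'closure', 'wide', 'low back'
    onset_ms: int      # when the gesture begins
    duration_ms: int   # its intrinsic duration

    @property
    def offset_ms(self) -> int:
        return self.onset_ms + self.duration_ms

# A rough score for the word 'mob': the velum-lowering gesture for /m/ and
# the lip-closure gesture run concurrently rather than in strict sequence.
mob = [
    Gesture('lips', 'closure', 0, 120),           # /m/ lip closure
    Gesture('velum', 'wide', 0, 100),             # nasality for /m/
    Gesture('tongue body', 'low back', 60, 180),  # the vowel
    Gesture('lips', 'closure', 200, 90),          # final /b/
]

def overlaps(a: Gesture, b: Gesture) -> bool:
    """True if the two gestures are active at the same time."""
    return a.onset_ms < b.offset_ms and b.onset_ms < a.offset_ms

for i, g1 in enumerate(mob):
    for g2 in mob[i + 1:]:
        if overlaps(g1, g2):
            print(g1.articulator, g1.target, 'overlaps', g2.articulator, g2.target)

Even in so crude a model, the constellation captures what a segment-string notation cannot: the nasality of /m/ and the lip closure are co-present, not consecutive.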
Although not well known or understood in FLT and FLL, an articulatory-gestural approach to phonology (or articulatory phonology) may well hold out the most promise for reuniting pronunciation practice with communicative language teaching and learning. One problem with any theory that explains how language is spoken by appeal to what the tongue wants (ease of articulation) is that it may fail to take into account what the ears can easily hear: a vocal tract that has to repeat itself with added emphasis ultimately does more work, not less. Spoken language, as a system built on give-and-take communication, is pushed and pulled between the needs of the speaker and those of the listener (just as writing systems have to fit the needs of both those who write and those who read). Drifting toward speech that is indistinct, low in intensity and heavily overlapped (co-articulated segments, super-syllabic features, reduced vowels, glottal consonants) might make production faster for the speaker, but production only has to be optimized to the rate at which a listener can take it in, and that rate has physical limits. Output beyond what a listener can perceive contributes nothing to the efficiency of production or reception, since it would simply cause a breakdown in communication.
Different languages, dialects, and accents arrive at different sets/constellations of articulatory gestures (or articulatory routines) to get the job done. If there is considerable overlap of grammar and lexicon, then mutually intelligible forms of languages can exist, despite quite a bit of variation in how things are pronounced.
Any number of gestural routines could arrive at basically the same speed of output for optimum reception, a speed well below our maximum possible rate of output if our lazy tongues ruled our heads. But what would be the point of being able to speak so fast and so indistinctly that no one could understand you?
A facially prominent (visually salient) account of articulatory gestures
Following the example established by the electromyographic analysis and pedagogical recommendations of Koyama, Okamoto, Yoshizawa, and Kumamoto (1976), we propose to take prior accounts of the articulatory gesture and modify and simplify their focus for the purposes of L2 pedagogy and learning. The rationale for this is based, in part, on our understanding of both language evolution and natural language development in individuals. One possible way to account for the human ability to make meaning systematically is to see the human vocal capacity for language as a fortuitous adaptation of our respiratory, upper digestive and auditory tracts that extends our ability to gesture semiotically. The face is a transitional area, serving a role both in the vocal apparatus and in the purely visual-gestural system. Indeed, with the exterior of the face and mouth as an interface or transitional zone, it could be said that the vocal apparatus and the visual-gestural tools of the upper body form a seamless semiotic continuum.
One clear advantage of an articulatory-gestural account of phonology is that it gives a dynamic, physiological basis to our ability to use a language to communicate. Moreover, such an approach might also help us account more holistically for the ability to handle fast (i.e., normal), reduced, co-articulated connected speech in everyday spoken communication. Not only do we hear such speech, but our prior physiological experience of language use helps us to anticipate and fill in information missed from the audible portion of the stream of speech.
An articulatory-gestural account that focuses on the face and the mouth, most vitally, allows us to reconcile the natural, untutored, pre-literate language development of a native speaker with the course an L2 learner might be better off following. Consider how native language acquisition depends crucially on both auditory and visual inputs and on feedback from caregivers. Even if fluent, stable language ability in humans has shifted heavily toward the auditory part of the semiotic continuum, it seems most likely that visual input from the faces of immediate caregivers (in coordination with the stream of speech) provides necessary types of both input and feedback to infants acquiring a language. Note just how an infant must experience language and its development: the infant experiences making movements in its own vocal tract and face; s/he feels and hears the sounds thus produced directly through the medium of the head; s/he hears the sounds going through the air and back into the head by way of the ear; s/he hears the caregiver respond (often in exaggerated and simplified adult speech); and, most crucially, s/he sees in three dimensions the facial and upper-body movements of the caregiver. No idealized schematics of the interior of the vocal tract of either the infant or the caregiver are required, nor is a visual perspective on the inside of any human mouth necessary. For a schematic overview relating possible phonological units to types of interaction and/or modes of reception, see Figure Two (link to Figure Two below):
Hyperlink to Figure Two Graphic.
What is electromyography?
Electromyography is a means to measure and graphically record, in controlled settings, the electrical activity of muscles, including, of course, the muscles used in producing speech. Muscles generate electric current when contracting or when their controlling nerves have been stimulated. Electrodes, usually attached to an abraded area of the skin over the muscle, pick up the impulses. The output of the muscles can then be displayed as wave-like forms on an oscilloscope and recorded as an electromyogram (EMG). The audible signals stemming from the activity of the vocal tract can be recorded simultaneously, though it must be remembered that this audible stream of speech is an acoustic realization resulting from underlying psychological cognition, including the sub-lexical, sub-syllabic manipulation of phonological units into larger structures of language (even if speech control is experienced at a subconscious level).
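To make the measurement pipeline concrete, here is a minimal sketch of how a raw surface-EMG trace is commonly turned into the smooth 'envelope' inspected in speech studies: band-pass filtering, full-wave rectification, then low-pass smoothing. The sampling rate and filter cutoffs below are illustrative assumptions, not parameters from our own procedure:

# A minimal, illustrative EMG processing chain (the 'linear envelope' method).
import numpy as np
from scipy.signal import butter, filtfilt

FS = 2000  # assumed sampling rate in Hz

def emg_envelope(raw: np.ndarray) -> np.ndarray:
    # 1. Band-pass 20-450 Hz to isolate muscle activity from drift and noise.
    b, a = butter(4, [20 / (FS / 2), 450 / (FS / 2)], btype='band')
    filtered = filtfilt(b, a, raw)
    # 2. Full-wave rectification.
    rectified = np.abs(filtered)
    # 3. Low-pass at 6 Hz to produce the smooth envelope.
    b, a = butter(4, 6 / (FS / 2), btype='low')
    return filtfilt(b, a, rectified)

# Example with synthetic data standing in for a real recording:
raw = np.random.randn(FS * 2) * np.hanning(FS * 2)  # 2 s of fake EMG
envelope = emg_envelope(raw)
print(envelope.max())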
Language conceptualization and language planning, as cognitive processes, causally precede but also overlap with speech production. However, the relationship in actual speech performance is a complex one: self-monitoring of speech (both acoustic and articulatory), as well as visual and acoustic feedback from an interlocutor, can alter or reinforce planned speech, which then affects the subsequent articulatory performance of the speaker.
Electromyographic techniques could be used to measure activity throughout the internal parts of the vocal tract; however, such applications would prove impractical without very invasive--even surgical--placement of the electrodes. Moreover, once in place, the setup would interfere with normal speech. Following Koyama et al. (1976), we instead propose completely external, facial electromyographic techniques. This is because we are looking for the sort of common, salient visual and kinesthetic experiences, inputs and feedback that might naturally guide young language learners in their phonological development as an integral part of the greater process of language acquisition. In other words, we propose the use of electromyography as a means to better grasp, analyze and present what is most invariant and perhaps even holistically essential about phonological development, in such a way that these insights can be applied to L2 teaching and learning.
Data collection efforts and what the results reveal
Our data collection efforts are still somewhat preliminary and have involved only one subject (an American native speaker of English, one of the authors, Jannuzi). We have collected extensive data sets on both the vowels and the consonants of English, and we would next like to extend the work to a larger group of native speakers of English. In the future, moreover, we plan to correlate visual and audio materials systematically with electromyographic data so as to triangulate the physiological-kinesthetic elements of controlled language production and perception with the concurrent visual and acoustic phenomena. In presenting our conceptual and theoretical work here, however, we also have the experimental and pedagogical insights of Koyama et al. to draw on. They have already shown, using electromyographic data and photographs of the face, that there are specific but regular ways of using the facial muscles in pronouncing the English consonants. Moreover, they demonstrated how electromyographic data generated during speech can be used to isolate points of instruction so that teachers can better train Japanese EFL learners in pronunciation and phonological development.
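A hedged sketch of the triangulation we have in mind follows. It assumes hypothetical, simultaneously recorded audio-energy and EMG envelopes as NumPy arrays at known sampling rates, and estimates how far muscular activity leads the acoustic onset of speech:

# Illustrative only: estimate how much EMG activity precedes audible speech.
import numpy as np

def first_crossing(envelope: np.ndarray, fs: int, frac: float = 0.1) -> float:
    """Time in seconds at which a signal first exceeds frac * its peak."""
    threshold = frac * np.max(envelope)
    return int(np.argmax(envelope > threshold)) / fs

def emg_lead_time(audio_env: np.ndarray, audio_fs: int,
                  emg_env: np.ndarray, emg_fs: int) -> float:
    """Seconds by which muscular activity precedes audible speech
    (positive values mean the EMG envelope rises first)."""
    return first_crossing(audio_env, audio_fs) - first_crossing(emg_env, emg_fs)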
We have already been able to reach some tentative but interesting conclusions about the possible physiological and articulatory-gestural aspects of phonology. For example, a phonemic account of English might place /l/ alongside /r/ as an important contrast that an English speaker has to make. However, teachers must ask whether positing a single contrast is actually very useful for teaching students how to make the sounds during actual communication. Acoustically speaking, English /l/ and /r/ produced in some environments can sound quite similar and be hard to distinguish (perhaps because of their three-formant, voiced aspects, which make both /l/ and /r/ much like vowels). Phonetic or featural differentiation, where it identifies points of articulation, starts to be somewhat more useful: it is usually taught that an English /l/ is an alveolar lateral whereas /r/ is post-alveolar or retroflex. But both /l/ and /r/ in actual language display an enormous, confusing range of variation.
Can an articulatory-gestural account focused on the facial muscles involved in speech (namely, M. temporalis, M. masseter, M. levator labii superioris alaeque nasi, M. orbicularis oris, M. depressor labii inferioris, and M. digastricus venter anterior) help to differentiate syllables or words with /r/ sounds from those with /l/? Our initial conclusion is that, yes, it could. A very preliminary exploration of these two sounds focusing on the muscles of the face indicates that there are clear, visible patterns of differentiation to be found across /r/s and /l/s. Most significantly, a syllable or word that begins with an /r/ sound is articulatorily pre-positioned to a mouth shape somewhat like that of an English /w/ or the vowel /u/, no matter which vowel follows as the nucleus of the syllable. In terms of facial movement and anticipatory shaping of the mouth, the /l/ is far different: visual investigation reveals that the /l/ typically coarticulates with the following vowel; in articulatory-gestural terms, we could say that the vowel that is supposed to follow the /l/ is articulatorily anticipated before the /l/ is actually made. We plan to explore this sort of physiological patterning in much greater detail, with a focus on the English /l/ and /r/ and on English's rather large, difficult sets of affricates and fricatives, which are problematic for learners of various native-language backgrounds. Careful analysis of actual electromyographic data has also revealed other patterns that, while not necessarily contradicting phonemic or featural accounts, can be used to supplement and clarify them.
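If this /r/-/l/ asymmetry is real, it suggests a simple, testable measurement. The sketch below is entirely hypothetical in its data format (each token is assumed to be a dictionary holding per-muscle envelope arrays plus a voicing-onset index); it compares pre-voicing lip-rounding activity, which should be high for /r/-initial words regardless of the following vowel, and vowel-dependent for /l/-initial words:

# Illustrative comparison of anticipatory lip rounding for /r/ vs. /l/.
import numpy as np

def prevoicing_rounding(token: dict) -> float:
    """Mean M. orbicularis oris envelope amplitude before voicing begins."""
    env = token['orbicularis_oris']      # envelope array for the lip muscle
    onset = token['voicing_onset_idx']   # sample index where voicing starts
    return float(np.mean(env[:onset]))

def compare(r_tokens: list, l_tokens: list) -> None:
    r_scores = [prevoicing_rounding(t) for t in r_tokens]
    l_scores = [prevoicing_rounding(t) for t in l_tokens]
    print('/r/ mean pre-voicing rounding:', np.mean(r_scores))
    print('/l/ mean pre-voicing rounding:', np.mean(l_scores))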
In brief, here are some of the more startling aspects of language that electromyography has revealed:
1. Despite what traditional theory says, no phonemic or featural distinctions are singular differences. For example, a phonemic account of the English /l/ and /r/ sounds would say that the contrast rests on the difference of a single phoneme: the /r/ in the word 'ray' is very similar, acoustically speaking, to the /l/ in 'lay', but /r/ has the added 'feature' of retroflexion. Electromyographic analysis, however, reveals far more detail. English /r/ is more like English /w/, but it can be differentiated from /w/ in terms of muscular activity, timing, and a slight difference in the shape of the mouth. English /r/ also forms relationships with preceding vowels when it closes a syllable, while English /w/ acts only as the onset of a phonological syllable, never its coda. Meanwhile, English /l/, in terms of muscular activity, is much more like English /d/, except in the visually perceivable aspect of timing--that is, an /l/ sound lasts longer than a /d/, and this difference can be found in the electromyographic data as well as perceived visually on the face of the speaker (see the measurement sketch after this list). At the end of a word or syllable, English /l/ might also alter its gestural form through a relation with a preceding vowel sound.
2. Electromyographic data give tantalizing, psychologically significant hints about the language-planning stage of speech production, which falls between conceptualization and actual speech production. In fact, electromyographic data reveal direct evidence of the physiological control that both precedes and accompanies actual speech production. One counterintuitive finding concerns the intuitive notion of speech as a sequence of sounds: in terms of the muscular activity that precedes speech production, we cannot say that speech is, phonologically speaking, a simple sequence of sound segments. For example, a one-syllable word ending in a /p/ consonant may display extra muscular activity before even the first segment of the word has been produced as sound. If we contrast the words 'mop' and 'mob', we see that the word-final /p/ sound is signaled in terms of muscular activity even before the initial /m/ has been produced. In other words, in terms of muscular energy used even before the word is uttered, the initial /m/ of 'mop' displays a higher energy level than the initial /m/ of 'mob', a difference that can only plausibly be accounted for by the effect of having to plan for the pronunciation of a final /p/ instead of a final /b/ (again, see the sketch after this list).
3. As the example in number 2 above shows, the electromyographic data offer evidence of a physiological interface between language planning and actual speech production. However, it is not clear at what level of language we can say the articulatory gesture subsists. On the one hand, it would seem to be a logical and useful sub-lexical unit of phonology that can subsume more static and incomplete models, such as the phoneme or the feature. On the other hand, it might more closely match up with the unit of language known as the syllable. Or, more startlingly, it might be that, in connected speech, the articulatory gesture as a unit actually coincides with words and lexical phrases. Certainly the manifold differences that the electromyographic data reveal could be used to support the argument that one articulatory gesture equals one spoken syllable type or even one word.
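The first two observations above suggest straightforward measurements. The following sketch assumes, as before, hypothetical envelope arrays, a known sampling rate and an acoustic-onset sample index; neither function comes from the study itself, but each shows how the relevant quantity might be computed:

# Illustrative measurements for observations 1 and 2 above.
import numpy as np

def active_duration(envelope: np.ndarray, fs: int, frac: float = 0.2) -> float:
    """Observation 1: total time (s) the envelope exceeds frac * its peak.
    Expected to be reliably longer for /l/ than for /d/."""
    threshold = frac * np.max(envelope)
    return float(np.sum(envelope > threshold)) / fs

def preparatory_energy(envelope: np.ndarray, acoustic_onset_idx: int) -> float:
    """Observation 2: mean squared envelope amplitude before any audible sound.
    Expected to be higher for 'mop' than for 'mob' if final /p/ is pre-planned."""
    window = envelope[:acoustic_onset_idx]
    return float(np.mean(window ** 2))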
Phonological coding ability
From theoretical and experimental standpoints, we argue that a facially salient articulatory gesture is the best model of psycholinguistically controlled speech at a sub-lexical, sub-syllabic level. However, within a more comprehensive view of language and literacy, there may still be a place for the concepts of phonemes and features. Certainly there are more conceptual areas we must look at before we can begin to account for phonology in language acquisition, language learning, listening outside of face-to-face interaction, and literacy development. First, there are the cognitive, linguistic skills called phonological coding (or processing) ability (PCA), centered on:
- Phonological perception and interpretation of phonetic or phonetically graphic data,
- Analysis/decoding of acoustic (and/or visual, graphic) signals in oral communication (and/or written discourse),
- Re-coding/encoding of linguistic input for lexical access and word recognition,
- Re-coding/encoding of linguistic input for comprehension and meaning making, and
- Retention of language representations in short-term working memory (more specifically, phonological memory), and the linking of ALL of the preceding with long-term memory input, storage and retrieval--which is what makes phonological coding skills central to language learning, since phonology must be manipulated and stored as units such as features, phonemes, morae, syllable types, articulatory gestures/gestural routines and words (lexical units).
Metacognition: Awareness Skills in Language and Literacy Development
It is by now a fairly well disseminated idea that language awareness and short-term memory skills at the phonological level play an integral role in literacy development in languages with alphabetic orthographies (Elkonin, 1963; Liberman, 1973; Liberman, Shankweiler, Fischer & Carter, 1974; Williams, 1995; Nation & Hulme, 1997; Stahl, Duffy-Hester, & Stahl, 1998; from a cross-linguistic perspective using pure research techniques, see Koda, 1987, 1998; for an ESL perspective using applied research case studies, see Birch, 1998). These skills are thought to comprise a metacognitive type of analytic ability that overlays verbal language processing but remains separable from what have traditionally been called phonics skills, the latter of which emerge as part of learning to read in an alphabetic language. Thus, phonological awareness skills are thought to follow from the phonological processing, production and perception skills that develop through native language acquisition; they precede the development of phonics skills and beginning literacy and may play some causal role in reading development. The related concepts of phonological and/or phonemic awareness are not well established within foreign language education, coming as they do mostly from theorizing about and research on native literacy in languages that are written alphabetically.
Epistemologically speaking, phonological and phonemic awareness abilities would seem to subsist somewhere between what Skehan (1998) calls 'phonological coding ability' and what have traditionally been termed phonics skills in native language arts. Phonics skills only come into play when alphabetic or syllabic writing conventions are associated with and/or analyzed into some sort of phonological equivalent during the reading and writing of text. In the case of reading written English, single letters and letter combinations functioning as graphemes (units of writing corresponding to units of sound) would be made to stand for single sounds and sound combinations in some sort of psycholinguistic process during reading; these representations might then be related to the phonology of spoken English to facilitate lexical access, which would in turn lead to the integration of lexical meaning into syntax and discourse. Phonological awareness skills may serve as a kind of metacognitive bridge between oral and written language processing.
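The grapheme-to-phoneme step described here can be illustrated with a deliberately tiny, hypothetical mapping table; the toy sketch below is not a model of actual reading, and its failure cases anticipate the point about English orthography made below:

# A toy grapheme-to-phoneme decoder with a deliberately tiny mapping.
G2P = {
    'sh': 'ʃ', 'ch': 'tʃ', 'th': 'θ', 'ng': 'ŋ',  # digraphs tried first
    'a': 'æ', 'e': 'ɛ', 'i': 'ɪ', 'o': 'ɒ', 'u': 'ʌ',
    'b': 'b', 'd': 'd', 'g': 'g', 'l': 'l', 'm': 'm',
    'n': 'n', 'p': 'p', 'r': 'r', 's': 's', 't': 't',
}

def decode(word: str) -> str:
    """Greedy longest-match decoding of a written word into phone symbols."""
    phones, i = [], 0
    while i < len(word):
        two = word[i:i + 2]
        if two in G2P:                  # try a two-letter grapheme first
            phones.append(G2P[two]); i += 2
        elif word[i] in G2P:
            phones.append(G2P[word[i]]); i += 1
        else:
            phones.append('?'); i += 1  # no stable correspondence
    return '/' + ''.join(phones) + '/'

print(decode('sing'))   # '/sɪŋ/' -- a clean case
print(decode('might'))  # '/mɪg?t/' -- English spelling defeats the table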
It is often asserted that phonological awareness/metaphonological skills emerge before and may even causally underlie beginning literacy (hence the need to distinguish them from what has traditionally been called phonics). It might also be argued that this ability to manipulate an internalized language phono-analytically leads to the acquisition of phonics skills for decoding and manipulating alphabetic writing--especially if phonics skills are a key part of beginning literacy development and subsequent functional literacy. Phonological awareness skills are thought to be activated as a sub-component of the reading process because they help a reader (as language user) to decode and reconstruct information sampled from an alphabetically written text and relate it, at one specific level, to the reader's internalized phonology of the language being read. Such a step may be especially important in developmental literacy.
There is a different view, however, in which phonological awareness skills are seen as a fairly spontaneous development bridging native language acquisition of phonology and literacy development. This would undercut the hypothetical predictive, explanatory and instructional value of phonological awareness in reading instruction, since on this view such skills appear to be more a result of success at beginning literacy than a causative factor underlying it. Another vexed issue is the orthography of English itself: although written alphabetically, English violates the alphabetic principle (one symbol = one sound) so severely and in so many ways that the reality of phonological and phonic reading of it has to be drastically circumscribed, if not placed entirely in doubt. The levels at which the code of written English can be said to be stable, and to determine the language read, are the word level and above. For a highly condensed overview of the stages of phonological development in a literate society, see Figure Three (hyperlink to Figure Three below):
Hyperlink to Figure Three Graphic.
Conclusion
There are various approaches to accounting for the production, transmission and perception of a language's phonology. However linguistically interesting such accounts may be, how much useful information do they provide to teachers and students? One might conclude that many of the terms and models used to teach second or foreign language phonology simply are not useful for adolescents and adults learning an L2's phonology and, what is worse, may be confusing to the extent that they hold back learning. The problem is not just one of technical complexity: the meaning of terms in phonological discourse may be too opaque and unnatural for students and even teachers, while at the same time the concepts may be too simplistic and static to do justice to the phonological, phonetic and physiological complexity that an L2 learner must deal with in mastering a second or foreign language's phonology. We have described and explained facial electromyography in support of a simplified gestural model of phonology. The electromyographic techniques we propose not only give direct evidence in support of a gestural model; they also, we argue, hold considerable potential for the pedagogy of FL phonology in terms of teacher training and materials development (such as learning-feedback software). What remains to be done must follow two major courses. First, we plan to expand our electromyographic data gathering to include a larger group of English native speakers covering a variety of dialects and accents; electromyography will also be used to explore how to give useful, specific feedback to Japanese speakers learning English pronunciation. Second, we will pursue the development of improved teaching techniques and learning materials that take advantage of the improved models and concepts of phonology we have explained here.
References
Birch, B. (1998). Nurturing bottom-up reading strategies, too. TESOL Journal, 7(6), 18-23.
Browman, C. P., & Goldstein, L. (1992). Articulatory phonology: An overview. Phonetica, 49, 155-180.
Elkonin, D. B. (1963). The psychology of mastering the elements of reading. In B. Simon & J. Simon (Eds.), Educational psychology in the U.S.S.R. (pp. 165-179). New York: Routledge.
Harris, T. L., & Hodges, R. E. (Eds.). (1995). The literacy dictionary: The vocabulary of reading and writing. Newark, DE: International Reading Association.
Koda, K. (1987). Cognitive strategy transfer in second language reading. In J. Devine, P. L. Carrell, & D. E. Eskey (Eds.), Research in English as a second language (pp. 127-144). Washington, DC: TESOL.
Koda, K. (1998). The role of phonemic awareness in second language reading. Second Language Research, 14(2), 194-215.
Koyama, S., Okamoto, T., Yoshizawa, M., & Kumamoto, M. (1976). An electromyographic study on training to pronounce English consonants unfamiliar to the Japanese. Journal of Human Ergology, 5, 51-60.
Liberman, I. Y. (1973). Segmentation of the spoken word and reading acquisition. Bulletin of the Orton Society, 23, 65-77.
Liberman, I. Y., Shankweiler, D., Fischer, W. M., & Carter, B. (1974). Explicit syllable and phoneme segmentation in the young child. Journal of Experimental Child Psychology, 18, 201-212.
Nation, K., & Hulme, C. (1997). Phonemic segmentation, not onset-rime segmentation, predicts early reading and spelling skills. Reading Research Quarterly, 32, 154-167.
Skehan, P. (1998). A cognitive approach to language learning. Oxford: Oxford University Press.
Williams, J. (1995). Phonemic awareness. In T. L. Harris & R. E. Hodges (Eds.), The literacy dictionary (pp. 185-186). Newark, DE: International Reading Association.
Note: the graphics for this article are found at the following location:
http://picasaweb.google.com/jannuzi/PhonologicalModels#
They are also accessible by clicking on each graphic.
One might ask of the phoneme, if we do not say it and cannot naturally find it in the speech signal, why do we think it actually exists in language? It could be conjectured that the concept of the phoneme is actually an metalinguistic artifact of psychological perception--a super category imposed on sounds and the vocal tract--stemming from linguistic insights about meaningful language use or even literacy in an alphabetic language. We could also argue that some sort of concept of the phoneme is a convenient fiction which allows us to refer more consistently to key points and manners of articulation in written language than does standard English spelling, which is more geared toward preserving etymological relationships across inflected and derived forms.
Legacy Concept: the Contrastive Feature
Early on in structuralist approaches to phonology (and then later, transformational ones as well) another idea was posited that supplemented in more detail the earlier concept of the contrastive phoneme--that of the contrastive feature. Phonemes, it was theorized, could be broken down further into distinctive features. For example, whether or not they are phonemically contrastive, phonetically speaking, what typically sepa rates a language's /t/ from its /d/ is the feature of vocalization. Or, another example is how voice and a lack of aspiration might distinguish an initial /p/ from an initial /b/. One difficulty of breaking phonemes down into features, however, is that the aspects of speech that have been called features are a confusing mix of psychological, articulatory-gestural, phonetic and acoustic phenomena.
Typical notions of features move back and forth between articulatory criteria (a point or manner of articulation in the vocal tract or respiratory tract or an acoustic effect found on an oscilloscope). Is something a feature only if a listener hears it or could a feature be something that is physiologically experienced and subsequently anticipated by the speaker? A second problem is that, as described in much discourse, they are not truly sub-syllabic; at least phonetically and acoustically speaking, features demonstrably spread over whole syllables, words and even word boundaries. Features, then, if we actually break up speech in order to demonstrate their existence, are supposed to work more like the various notes of a chord either struck almost simultaneously or plucked out in quick succession but sustained and stretched over an entire bar (in this case syllables and syllable sequences) to create harmonies and dissonance.
Evolution of language as gestural in nature
The human ability socially to convey thoughts, intentions, emotions, beliefs, and culturally bound ways of living largely depends on the structured use of language. This cognitively controlled, structured system for communication, we contend, evolved first as a visual-gestural system of body language quite analogous to the sign language of the deaf in use today. That is, we are talking about a gestural language that involves not only the hands and arms, but also movements of the muscles of the face to produce a form of controlled speech that is more reliant on the visual conveyance of information than the acoustic mode. The full development of human language as we now know it, however, overlapped with the emergence of considerable auditory and phonetic abilities crucial to the survival of the human species. These beneficial auditory and phonetic talents also took on communicative functions contributing to the survival and adaptation of the species.
Over time the visual-gestural system of language converged with the auditory and phonetic powers to produce what we know today as the human language facility. It might make more sense to view the auditory and phonetic aspects of human language as dominant over the visual and gestural ones. Also, not all visual-gestural aspects of communication are linguistic in nature, though many can be specific to particular groups and cultures. However, it might well be the case that visual and gestural abilities are still more integral to the psychology of language control and acquisition. For example, the use of gesture is two-part. It provides a visual signal for someone at the receiving end, but the person producing the gesture also experiences it physiologically.
The ability to communicate with a human language depends essentially on a psychologically controlled, coordinated speech and auditory system for the planned production and meaningful perception of language. It should be pointed out that speech production itself depends on a convergence of more basic systems, such as the ones, which hear, breathe, eat, and make non-linguistic noise. And hearing has a more basic non-linguistic role enabling humans to distinguish and make a phonetically diverse set of noises for communication, such as signaling and sound camouflage. Human language, however, has more essential aspects to it than speaking and hearing perception. It also involves visual and kinesthetic aspects and structural complexity that ranges from the phonological to the lexical to the syntactical. The visual and kinesthetic aspects of phonology, however, quite likely play an even larger role in language acquisition than they do in mature, fluent, native language use for everyday communication.
A new, more pedagogically useful phonological prime proposed
In lieu of the previously established concepts of the phoneme and feature, we call the most basic, sub-lexical, phonological unit of speech production and processing the 'articulatory gesture'. However, unlike previous conceptualizations of this term (for example, Browman & Goldstein, 1992), our basic sub-lexical unit involves 'faciality' or 'facial salience' in order to explain how a unit of speech can function as a linguistic 'gesture'. It must be noted here, though, that which level of language should be used to interpret speech production and processing remains a theoretically undecided issue. Does the articulatory gesture map onto language at a sub-lexical level (such as the syllable or mora)? Or does the articulatory gesture actually correspond in an explanatory manner to the spoken and psychological level of word meaning--that is, the allomorph and morpheme? If the reality of the latter case holds, then morpho-phonology would assume primary importance in any research program. Clearly more conceptual, theoretical and experimental undertakings are required for this issue to begin to be resolved.
Using linguistic analysis and interpretation of the results of two experiments in electromyography, we propose a new theory of phonology concerning the basic unit of sub-lexical language. Modern phonology has long sought a basic, psychological, discrete, invariable unit of language subsisting beneath a word level in order to closely model, describe and explain language acquisition, processing, perception and expression. That is, in order to solve the age-old problem of 'how infinite use is made of finite means', phonological inquiry needs a basic, stable, sub-lexical unit that works across all aspects of a language and across all speakers of a language. Such a unit is not only deduced to exist because speech can be segmented into consonants and vowels that form syllables. This could simply reflect a phonetic reality of speech that has been analyzed by linguists. Rather, a phonological prime subsisting at a sub-lexical level of language must also function as a unit in the mental language planning stage that controls meaningful language use.
Some Implications for Applied Linguistics and Educational Linguistics
Such a psychologically, physiologically and phonetically realistic basic unit for phonology should yield better teaching and learning materials (including software) in applied, practical and clinical areas such the following: (1) foreign language teaching and learning, (2) speech therapy, and (3) learning disabilities, such as reading and text processing disorders. This approach should also have important implications for the development of speech recognition for automated word processing, language translation and artificial intelligence.
There are available in applied linguistics various approaches to studying, describing, analyzing and explaining the production, transmission and perception of a language's phonology. However, one crucial problem is turning it into useful information for second or foreign language pedagogy. Phonological concepts and terminology for teaching often seem overly complex and abstract--if not outright contradictory-to both teachers and students. One possible reason for this perceived difficulty is that, in fact, many of the terms and models used to teach phonology simply are not useful for adolescents and adults learning a phonology. On the one hand, the meaning of terms in phonological discourse comes to seem opaque to students and even the teachers attempting to explain and demonstrate them. On the other hand, the concepts are too simplistic and static to do justice to the phonological, phonetic and physiological complexity that a learner must deal with in mastering a second or foreign language's phonology.
The articulatory gesture and its implications for language learning
A third approach to FL pronunciation and phonology would be to appeal broadly but coherently to those aspects of speech that apply to the phonetics, physiology as well as psychology of speech. Rather than being determined through analysis of static, binary contrasts, the sub-syllabic units of speech are deduced to exist and represented through dynamic descriptions of a complex of movements occurring in the vocal tract, mouth and facial muscles. This is called an articulatory-gestural approach or articulatory phonology. According to this approach, the basic units of phonological contrast are gestures, which are also abstract characterizations of articulatory events, each with an intrinsic time or duration. Utterances are modeled as organized patterns (constellations) of features, in which gestural units may overlap in time. The phonological structures defined in this way provide a set of articulatorily based natural classes. Moreover, the patterns of overlapping organization can be used to specify important aspects of the phonological structure of particular languages, and to account, in a coherent way and general way, for a variety of different types of phonological variation. Such variation includes allophonic variation and fluent speech alternations, as well as 'coarticulation' and speech errors. Finally, it is suggested that the gestural approach clarifies our understanding of phonological development, by positing that prelinguistic units of action are harnessed into (gestural) phonological structures through differentiation and coordination. (Browman & Goldstein, 1992, p. 155)
Although not well known or understood in FLT and FLL, an articulatory-gestural approach to phonology (or articulatory phonology) may well hold out the most promise for reuniting pronunciation practice with communicative language teaching and learning. One problem with any theory that seeks to explain how language is spoken because of what the tongue wants (ease of articulation) is that it might not take into account what the ears easily hear. A language user's vocal tract that has to repeat itself with emphasis actually ultimately does more work. Spoken language as a system built on give-and-take communication is pushed and pulled between the needs of the speaker and the listener (just as writing systems have to fit the needs of those who write and those who read). Going toward language that is rather indistinct, lacks intensity and overlaps sounds (co-articulated segments, super-syllabic features, reduced vowels, glottal consonants) might make it faster for the speaker, but this only has to be optimized to the level of how fast a listener (as language user) can take it in (which has physical limits). A rate of output beyond the point of what a listener can perceive does not contribute to the efficiency of production or reception since it would cause a breakdown in communication.
Different languages, dialects, and accents arrive at different sets/constellations of articulatory gestures (or articulatory routines) to get the job done. If there is considerable overlap of grammar and lexicon, then mutually intelligible forms of languages can exist, despite quite a bit of variation in how things are pronounced.
Any number of ways could arrive at basically the same speed of output for optimum reception and would be well below our maximum speed of output if our lazy tongues ruled our heads. But what would be the point of being able to speak so fast and indistinctly that no one could understand you?
A facially prominent (visually salient) account of articulatory gestures
Following the example established with the electromyographic analysis and pedagogical recommendations of Koyama, Okamoto, Yoshizawa, and Kumamoto (1976), we propose to take prior accounts of the articulatory gesture and modify and simplify their focus for the purposes of L2 pedagogy and learning. The rationale for this is, in part, based on our understanding of both language evolution and natural language development in individuals. One possible way to account for the human ability to make meaning systematically is to see the human vocal ability with language as a fortuitous adaptation of our respiratory, upper digestive and auditory tracts that extends our ability to gesture semiotically. The face, however, is a transitional area that serves a role both in the vocal apparatus and in the purely visual-gestural system. Indeed, with the face's and mouth's exterior as an interface or transitional zone, it could be said that the vocal apparatus and the visual gestural tools of the upper body form a seamless semiotic continuum.
One clear advantage of an articulatory gestural account of phonology is that it gives a dynamic, physiological basis to our ability to use a language to communicate. Moreover, such an approach might also help us to account more holistically for the ability to handle fast (i.e., normal), reduced, co-articulated connected speech in everyday spoken communication. Not only do we hear such speech, but our prior physiological experience of language use helps us to anticipate and fill-in information missed from the audible portion of the stream of speech.
An articulatory gestural account that focuses on the face and the mouth most vitally allows us to reconcile the natural, untutored, pre-literate language development of a native speaker with the possible course an L2 learner would be better off following. Consider, native language acquisition depends crucially on both auditory and visual inputs and feedback from caregivers. Even if fluent, stable language ability in humans has shifted heavily toward the auditory part of the semiotic continuum, it seems most likely that visual input (in coordination with the stream of speech) from the faces of immediate caregivers provides necessary types of both input and feedback to infants acquiring a language. Note just how an infant must experience language and its development: the infant experiences making movements in its own vocal tract and face; s/he feels and hears the sounds thus produced directly through the medium of the head; s/he hears the sounds going through the air and back into the head by way of the ear; s/he hears the caregiver respond (often in exaggerated and simplified adult speech); s/he most crucially sees in three dimensions the facial and upper body movements of the caregiver. No idealized schematics of the interior of the vocal tract of either the infant or the caregiver are required, nor is a visual perspective on the inside of any human mouth necessary. For a schematic overview that relates possible phonological units with type of interaction and/or mode of reception, see Figure Two (link to Figure Two below):
Hyperlink to Figure Two Graphic.
What is electromyography?
Electromyography is a means to measure and graphically record, in controlled settings, the electrical activity of muscles, including, of course, the muscles used in producing speech. Muscles generate electric current when contracting or when the controlling nerves have been stimulated. Electrodes, usually attached to an abraded area of the skin over the muscle, pick up the impulses. The output of the muscles can then be displayed as wavelike forms on an oscilloscope and recorded as an electromyogram (EMG). The audible signals which stem from the activity of the vocal tract can also be recorded simultaneously, though it must be remembered that this audible stream of speech is an acoustic realization that results from underlying psychological cognition, including the sub-lexical, sub-syllabic manipulation of phonological units into larger structures of language (even if that control is experienced subconsciously).
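To make the recording procedure concrete, here is a minimal sketch, in Python, of how a surface EMG trace and a simultaneously recorded audio signal might be rectified, smoothed and displayed together. The file names, sampling rates and the 10 Hz envelope cutoff are hypothetical stand-ins, not a description of Koyama et al.'s apparatus.

```python
# A minimal sketch (hypothetical files and settings) of aligning a surface
# EMG channel with simultaneously recorded speech audio.
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import butter, filtfilt

fs_emg, emg = wavfile.read("orbicularis_oris.wav")   # hypothetical EMG channel
fs_aud, audio = wavfile.read("speech.wav")           # hypothetical audio channel

# Rectify the raw EMG and low-pass filter it to obtain a "linear envelope,"
# the wavelike form one would see on an oscilloscope.
rectified = np.abs(emg.astype(float))
b, a = butter(4, 10 / (fs_emg / 2), btype="low")     # assumed 10 Hz cutoff
envelope = filtfilt(b, a, rectified)

t_emg = np.arange(len(envelope)) / fs_emg
t_aud = np.arange(len(audio)) / fs_aud

# Plot audio and EMG envelope on a shared time axis for visual comparison.
fig, (ax_top, ax_bot) = plt.subplots(2, 1, sharex=True)
ax_top.plot(t_aud, audio)
ax_top.set_ylabel("audio")
ax_bot.plot(t_emg, envelope)
ax_bot.set_ylabel("EMG envelope")
ax_bot.set_xlabel("time (s)")
plt.show()
```

Rectification followed by low-pass filtering is a standard way to summarize muscle activity over time; any comparable smoothing scheme would serve the same illustrative purpose.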
Language conceptualization and language planning, as cognitive processes, causally precede, but also overlap with, speech production. However, the relationship in actual speech performance is a complex one: self-monitoring of speech (both acoustic and articulatory) as well as visual and acoustic feedback from an interlocutor can alter or reinforce planned speech, which then affects the subsequent articulatory performance of the speaker.
Electromyographic techniques could be used to measure activity all through the internal parts of the vocal tract; however, such applications would prove impractical without very invasive, even surgical, placement of the electrodes. Moreover, once in place, the setup would interfere with normal speech. After Koyama et al. (1976), we instead propose the application of completely external, facial electromyographic techniques. This is because we are looking for the sort of common, salient visual and kinesthetic experiences, inputs and feedback that might naturally guide young language learners in their phonological development as an integral part of the greater category called language acquisition. In other words, we propose the use of electromyography as a means to better grasp, analyze and present what is most invariant and perhaps even holistically essential about phonological development, in such a way that these insights can be applied to L2 teaching and learning.
Data collection efforts and what the results reveal
Our data collection efforts are still somewhat preliminary and have involved only one subject (an American native speaker of English, one of the authors, Jannuzi). We have collected extensive data sets on both the vowels and the consonants of English, and would next like to generalize this to a larger group of native speakers of English. Moreover, in the future we plan to correlate visual and audio materials systematically with electromyographic data so as to triangulate the physiological-kinesthetic elements of controlled language production and perception with the concurrent visual and acoustic phenomena. However, in presenting our conceptual and theoretical work here, we also have the experimental and pedagogical insights of Koyama et al. to draw on. They have already shown, using electromyographic data and photographs of the face, that there are specific but regular ways of using facial muscles in pronouncing the English consonants. Moreover, they demonstrated how electromyographic data generated during speech can be used to isolate points of instruction so that teachers can better train Japanese EFL learners in pronunciation and phonological development.
We have already been able to come to some tentative but interesting conclusions about the possible physiological and articulatory gestural aspects of phonology. For example, a phonemic account of English might place /l/ alongside /r/ as an important contrast that an English speaker has to make. However, teachers must ask whether describing this as a single contrast is actually very useful for teaching students how to make the sounds during actual communication. Acoustically speaking, English /l/s and /r/s produced in some environments can sound quite similar and be hard to distinguish (perhaps because of their three-formant, voiced aspects, which make both /l/ and /r/ much like vowels). Phonetic or featural differentiation, when it hits upon points of articulation, starts to be somewhat more useful. It is usually taught that an English /l/ is an alveolar lateral whereas the /r/ is post-alveolar or retroflex. But both the /l/ and the /r/ in actual language display an enormous, confusing range of variation.
Can an articulatory gestural account focused on the facial muscles involved in speech (namely, M. temporalis, M. masseter, M. levator labii superioris alaeque nasi, M. orbicularis oris, M. depressor labii inferioris, M. digastricus venter anterior) help to differentiate syllables or words with /r/ sounds from those with /l/? Our initial conclusion is, yes it could. A very preliminary exploration of these two sounds focusing on the muscles of the face indicates that there are clear, visible differentiation patterns to be found across /r/s and /l/s. Most significantly, a syllable or word that begins with an /r/ sound is articulatorily pre-positioned to a mouth shape somewhat like an English /w/ or the vowel /u/, no matter which vowel forms the nucleus of the syllable. In terms of facial movement and anticipatory shaping of the mouth, the /l/ is far different. Visual investigation reveals that typically the /l/ coarticulates with the following vowel; in articulatory-gestural terms, we could say that the vowel that is supposed to follow the /l/ is articulatorily anticipated before the /l/ is actually made. We plan to explore this sort of physiological patterning in much greater detail, focusing on the English /l/, /r/ and English's rather large, difficult sets of affricates and fricatives, which are problematic for learners of various native language backgrounds. Also, careful analysis of actual electromyographic data has revealed other patterns that, while not necessarily contradicting phonemic or featural accounts, can be used to supplement and clarify them.
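As a rough illustration of how such anticipatory shaping might be quantified, the following sketch compares mean EMG envelope amplitude in a short window before the acoustic onset of an /r/-initial versus an /l/-initial word. The envelopes, sampling rate, onset time and window length are all hypothetical, fabricated only to show the measurement logic, not drawn from our data.

```python
# A minimal sketch (synthetic data) of quantifying anticipatory lip-rounding
# from a smoothed EMG envelope of M. orbicularis oris.
import numpy as np

def pre_onset_activity(envelope: np.ndarray, fs: int,
                       acoustic_onset_s: float, window_s: float = 0.15) -> float:
    """Mean EMG envelope amplitude in the window just before acoustic onset."""
    end = int(acoustic_onset_s * fs)
    start = max(0, end - int(window_s * fs))
    return float(envelope[start:end].mean())

# Hypothetical envelopes for tokens of 'ray' and 'lay', sampled at 1000 Hz,
# with the acoustic onset of each word at 0.5 s into the recording.
fs = 1000
t = np.arange(0, 1.0, 1 / fs)
ray_env = np.exp(-((t - 0.45) ** 2) / 0.005)   # activity rises before onset
lay_env = np.exp(-((t - 0.60) ** 2) / 0.005)   # activity peaks after onset

print("ray:", pre_onset_activity(ray_env, fs, 0.5))
print("lay:", pre_onset_activity(lay_env, fs, 0.5))
# If /r/ is pre-positioned like /w/ or /u/, the 'ray' value should be larger.
```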
In brief, here are some of the more startling aspects of language that electromyography has revealed:
1. Despite what traditional theory says, no phonemic or featural distinctions are singular differences. For example, a phonemic account of the English /l/ and /r/ sounds would say that the contrast rests on the difference of a single phoneme. That is, the /r/ in the word 'ray' is, acoustically speaking, very similar to the /l/ in 'lay'; however, /r/ has the added 'feature' of retroflexion. Electromyographic analysis reveals far more detail. English /r/ is more like English /w/, but can be differentiated from /w/ in terms of muscular activity, timing, and a slight difference in the shape of the mouth. English /r/ also forms relationships with preceding vowels when it closes a syllable, while English /w/ only acts as the onset of a phonological syllable, never its coda. Similarly, English /l/, in terms of muscular activity, is much more like English /d/, except in the visually perceivable aspect of timing--that is, an /l/ sound lasts longer than a /d/, and this difference can be found in the electromyographic data as well as perceived visually on the face of the speaker. At the end of a word or syllable, English /l/ might also alter its gestural form through a relation with a preceding vowel sound.
2. Electromyographic data give tantalizing, psychologically significant hints about the language planning stage of speech production, which falls between conceptualization and actual speech production. In fact, electromyographic data reveal direct evidence of the physiological control that both precedes and accompanies actual speech production. One counterintuitive finding concerns the intuitive notion of speech being a sequence of sounds. In terms of the muscular activity that precedes speech production, we cannot say that speech is, phonologically speaking, a simple sequence of sound segments. For example, a one-syllable word ending in a /p/ consonant can display heightened muscular activity before even the first segment of the word has been produced as sound. If we contrast the words 'mop' and 'mob', we see that the word-final /p/ sound is signaled in terms of muscular activity even before the initial /m/ has been produced. In other words, in terms of muscular energy used even before the word is uttered, the initial /m/ of 'mop' displays a higher energy level than the initial /m/ of 'mob', a difference that can only be causally accounted for by the effect of having to plan for a final /p/ instead of a final /b/ (see the sketch following this list).
3. As the example in number 2 above shows, the electromyographic data offer indirect evidence of a physiological interface between language planning and actual speech production. However, it is not clear at what level of language we can say the articulatory gesture subsists. On the one hand, it would seem to be a logical and useful sub-lexical unit of phonology that can subsume more static and incomplete models, such as the phoneme or feature. On the other hand, it might more closely match up with the unit of language known as the syllable. Or, more startlingly, it might be that, in connected speech, the articulatory gesture as a unit actually coincides with words and lexical phrases. Certainly the manifold differences that the electromyographic data reveal could be used to support the argument that one articulatory gesture equals one spoken syllable type or even one word.
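Returning to the 'mop'/'mob' contrast in point 2 above, here is a minimal sketch of the kind of pre-onset energy comparison described there, using root-mean-square (RMS) amplitude over the interval before acoustic onset. The signals are synthetic and the numbers hypothetical; only the measurement logic reflects the discussion above.

```python
# A minimal sketch (synthetic signals) of comparing pre-onset muscular energy
# for 'mop' versus 'mob' via RMS amplitude of raw EMG.
import numpy as np

def rms(x: np.ndarray) -> float:
    """Root-mean-square amplitude of a signal segment."""
    return float(np.sqrt(np.mean(np.square(x))))

fs = 1000                       # hypothetical EMG sampling rate (Hz)
onset = int(0.5 * fs)           # assumed acoustic onset of /m/ at 0.5 s

rng = np.random.default_rng(0)
# Synthetic raw EMG for the two tokens; 'mop' is given the stronger
# pre-onset activity that the planning account predicts.
emg_mop = np.concatenate([0.8 * rng.standard_normal(onset),
                          rng.standard_normal(onset)])
emg_mob = np.concatenate([0.3 * rng.standard_normal(onset),
                          rng.standard_normal(onset)])

print("pre-onset RMS, 'mop':", rms(emg_mop[:onset]))
print("pre-onset RMS, 'mob':", rms(emg_mob[:onset]))
# Under the planning account, the 'mop' value should be reliably higher.
```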
Phonological coding ability
From theoretical and experimental standpoints, we argue that a facially salient articulatory gesture is the best model of psycholinguistically controlled speech at a sub-lexical, sub-syllabic level. However, within a more comprehensive view of language and literacy, there still might be a place for the concepts of phonemes and features. Certainly there are still more conceptual areas we must look at before we can begin to account for phonology in language acquisition, language learning, listening outside of face-to-face interaction, and literacy development. First, there are the cognitive, linguistic skills called phonological coding (or processing) ability (PCA), centered on:
-Phonological perception and interpretation of phonetic or phonetically coded graphic data,
-Analysis/decoding of acoustic (and/or visual, graphic) signals in oral communication (and/or written discourse),
-Re-coding/encoding of linguistic input for lexical access and word recognition,
-Re-coding/encoding of linguistic input for comprehension and meaning making,
-Retention of language representations in short-term working memory (more specifically, phonological memory), and the linking of ALL these preceding points with long-term memory input, storage and retrieval--which is what makes phonological coding skills central to language learning, since phonology must be manipulated and stored as units such as features, phonemes, mora, syllable types, articulatory gestures/gestural routines and words (lexical units).
Metacognition: Awareness Skills in Language and Literacy Development
It is now a fairly well disseminated idea that language awareness and short-term memory skills at a phonological level play an integral role in literacy development in languages with alphabetic orthographies (Elkonin, 1963; Liberman, 1973; Liberman, Shankweiler, Fischer & Carter, 1974; Williams, 1995; Nation & Hulme, 1997; Stahl, Duffey-Hester, & Stahl, 1998; from a cross-linguistic perspective using pure research techniques, see Koda, 1987, 1998; for an ESL perspective using applied research case studies, see Birch, 1998). These skills are thought to comprise a metacognitive type of analytic ability which over-layers verbal language processing but remains separable from what have traditionally been called phonics skills, the latter of which emerge as part of learning to read an alphabetically written language. Thus, it is thought that phonological awareness skills follow from the phonological processing, production and perception skills that develop as a result of native language acquisition. However, they precede the development of phonics skills and beginning literacy and may play some sort of causal role in reading development. The related concepts of phonological and/or phonemic awareness are not well established within foreign language education, coming as they do mostly from theorizing about and research on native literacy in languages that are written alphabetically.
Epistemologically speaking, phonological and phonemic awareness abilities would seem to subsist somewhere between what Skehan (1998) calls 'phonological coding ability' and what have been traditionally termed phonics skills in native language arts. Phonics skills only come into play when alphabetic or syllabic writing conventions are associated with and/or analyzed into some sort of phonological equivalent during the reading and writing of text. In the case of reading written English, single letters and letter combinations functioning as graphemes (units of writing corresponding to units of sound) would be made to stand for single sounds and sound combinations in some sort of psycholinguistic process during reading; these representations might then be related to the phonology of spoken English to facilitate lexical access, which would then lead to the integration of lexical meaning into syntax and discourse. Phonological awareness skills may serve as some sort of metacognitive bridge between oral and written language processing.
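As a toy illustration of the grapheme-to-phoneme step just described, the sketch below greedily maps letter strings onto sounds using a deliberately tiny, hypothetical inventory. Real English orthography, as discussed next, is far less tractable than this sketch suggests.

```python
# A toy, hypothetical illustration of grapheme-to-phoneme decoding; the
# inventory below is invented for the example and covers only two words.
GRAPHEME_TO_PHONEME = {
    "sh": "ʃ", "ee": "iː", "p": "p", "l": "l", "ay": "eɪ",
}

def decode(word: str) -> list[str]:
    """Greedy longest-match decoding of a written word into phonemes."""
    phonemes, i = [], 0
    while i < len(word):
        for size in (2, 1):                      # try digraphs before letters
            chunk = word[i:i + size]
            if chunk in GRAPHEME_TO_PHONEME:
                phonemes.append(GRAPHEME_TO_PHONEME[chunk])
                i += size
                break
        else:
            raise ValueError(f"no grapheme match at {word[i:]!r}")
    return phonemes

print(decode("sheep"))   # ['ʃ', 'iː', 'p']
print(decode("play"))    # ['p', 'l', 'eɪ']
```

The decoded phoneme string could then, in principle, be matched against the reader's internalized phonology for lexical access, which is the bridge role described above.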
It is often asserted that phonological awareness/metaphonological skills emerge before and may even causally underlie beginning literacy (hence the need to distinguish them from what has traditionally been called phonics). It might also be argued that this ability to manipulate an internalized language phono-analytically leads to the acquisition of phonics skills at decoding and manipulating alphabetic writing--especially if phonics skills are a key part of beginning literacy development and subsequent functional literacy. Phonological awareness skills are thought to be activated as a sub-component of the reading process because they help a reader (as language user) to decode and reconstruct information sampled from an alphabetically written text and relate it at one specific level to the reader's internalized phonology of the language being read. Such a step may be especially important in developmental literacy.
There is a different view, however, in which phonological awareness skills are seen as a fairly spontaneous development bridging native language acquisition of phonology with literacy development. This would undercut the hypothetical predictive, explanatory and instructional value of phonological awareness in reading instruction, since it would make such skills appear to be more a result of success at beginning literacy than a causative factor underlying it. Another vexed issue is the orthography of English itself: although written alphabetically, English violates the alphabetic principle (one symbol = one sound) so severely and in so many ways that the reality of phonological and phonic reading of it has to be drastically circumscribed, if not placed entirely in doubt. The levels at which the code of written English can be said to be stable, and to determine the language read, are the word level and above. For a highly adumbrated overview of the stages of phonological development in a literate society, see Figure Three (hyperlink to Figure Three below):
Hyperlink to Figure Three Graphic.
Conclusion
There are various approaches to accounting for the production, transmission and perception of a language's phonology. However linguistically interesting such accounts and approaches are, how much useful information do they provide to teachers and students? One might conclude that many of the terms and models used to teach second or foreign language phonology simply are not useful for adolescents and adults learning an L2's phonology, and, what is worse, they might be confusing to the extent that they hold back learning. The problem might not be just one of technical complexity. The meaning of terms in phonological discourse may be too opaque and unnatural for students and even teachers. And however technical they are, the concepts may be too simplistic and static to do justice to the phonological, phonetic and physiological complexity that an L2 learner must deal with in mastering second or foreign language phonology. We have described and explained facial electromyography in support of a simplified gestural model of phonology. The electromyographic techniques we propose not only give direct evidence in support of a gestural model; we argue they also hold considerable potential for the pedagogy of FL phonology in terms of teacher training and materials development (such as learning feedback software). What remains to be done must follow two major courses. First, we plan to expand our electromyographic data gathering to include a larger group of English native speakers, covering a variety of dialects and accents. Electromyography will also be used to explore how to give useful and specific feedback to Japanese speakers learning English pronunciation. The second major track we will pursue is the development of improved teaching techniques and learning materials that take advantage of the improved models and concepts of phonology we have explained here.
References
Birch, B. (1998). Nurturing bottom-up reading strategies, too. TESOL Journal 7(6), 18-23.
Browman, C. P., & Goldstein, L. (1992). Articulatory phonology: An overview. Phonetica, 49, 155-180.
Elkonin, D. B. (1963). The psychology of mastering the elements of reading. In B. Simon & J. Simon (Eds.), Educational psychology in the U.S.S.R. (pp. 165-179). New York: Routledge.
Harris, T. L., & Hodges, R. E. (Eds.) (1995). The literacy dictionary: The vocabulary of reading and writing. Newark, DE: International Reading Association.
Koda, K. (1987). Cognitive strategy transfer in second language reading. In J. Devine, P. L. Carrell, & D. E. Eskey (Eds.), Research in English as a Second Language (pp. 127-144). Washington, D.C.: TESOL.
Koda, K. (1998). The role of phonemic awareness in second language reading. Second Language Research, 14(2), 194-215.
Koyama, S., Okamoto, T., Yoshizawa, M., & Kumamoto, M. (1976). An electromyographic study on training to pronounce English consonants unfamiliar to the Japanese. Journal of Human Ergology, 5, 51-60.
Liberman, I. Y. (1973). Segmentation of the spoken word and reading acquisition. Bulletin of the Orton Society, 23, 65-77.
Liberman, I.Y., Shankweiler, D., Fischer, W.M., & Carter, B. (1974). Explicit syllable and phoneme segmentation in the young child. Journal of Experimental Child Psychology 18, 201-212.
Nation, K. & Hulme, C. (1997). Phonemic segmentation, not onset-rime segmentation, predicts early reading and spelling skills. Reading Research Quarterly 32, 154-167.
Skehan, P. (1998). A cognitive approach to language learning. Oxford: Oxford University Press.
Williams, J. (1995). Phonemic awareness. In T. L. Harris & R. E. Hodges (Eds.), The literacy dictionary (pp. 185-186). Newark, DE: International Reading Association.
Note: the graphics for this article are found at the following location:
http://picasaweb.google.com/jannuzi/PhonologicalModels#