Saturday, July 30, 2011

Regarding s-aspiration

 Hi Prof. Ohala,
Here are my comments/questions from the last class, thanks!
Comments/Questions #7 - August 1:
I was just wondering if you could clarify a point from the last class.
I had asked about language contact induced sound change (as in the case of the ‘s’
to /h/ or nothing in Latin American Spanish).  You mentioned that you thought it
was a process that had already begun and that was further solidified by
contact, but I am not sure if this is what you meant.  I was under the impression that
African language structure (both phonological and morpho-syntactic) was the
reason that these varieties of Spanish as well as Brazilian Portuguese have
this alternation, in addition to several others.  Do you think that it could be the
contact with these African languages that caused the change?  Or must it have been
something that was already in motion?  I tend to believe that it was language
contact that induced this specific change, but I am interested in your
thoughts on the matter.

Jill Thorson

Jill,

I have no opinion (or background of facts) to be able to say whether the
contact situation of NW Spanish and Port. being assimilated by African
slaves had anything to do with the 's-aspiration' sound change.  The same
phenomenon is not unknown in other languages that did not have that
particular contact situation as a possible cause (& that's why I cited
Greek and Latin cases, too).  My point -- which was Widdison's point -- is
that there is (or could be) a phonetic causal factor, too.  One should
check the Widdison paper to see if he addresses these issues:  (Google
'Widdison' and 's-aspiration').
 
JJO 
 
Here are links to two of Widdison's papers on this topic:
https://docs.google.com/viewer?a=v&pid=explorer&chrome=true&srcid=0B09lW9XhoFydMGFkOTRiMGEtNmE3OC00MGYwLTg0NzctMmQ3MDc1OTM0ZGRh&hl=en_US
https://docs.google.com/viewer?a=v&pid=explorer&chrome=true&srcid=0B09lW9XhoFydM2EzN2MxNmMtMzMyZi00NzUyLWJmMTAtZGZkMzI3ZDBjM2M3&hl=en_US 

Friday, July 29, 2011

YouTube videos illustrating the relation between standing waves and resonance

This doesn't substitute for a complete exposition on resonance and its origin in standing waves, but this video does show how a practical demonstration -- readily available on YouTube -- can illustrate the concepts once they have been introduced in class.

http://www.youtube.com/watch?v=_S7-PDF6Vzc&feature=related
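For anyone who wants a back-of-the-envelope companion to the video: the standing-wave picture predicts the resonances of a uniform tube closed at one end (the glottis) and open at the other (the lips) at f(n) = (2n - 1) * c / (4 * L).  Here is a minimal Praat-script sketch of that arithmetic; the 17.5 cm tube length and 35000 cm/s speed of sound are illustrative assumptions, not measurements from the video.

# A minimal sketch (illustrative values only): quarter-wavelength resonances
# of a uniform tube closed at one end and open at the other.
c = 35000
tubeLength = 17.5
writeInfoLine: "Resonances of a ", tubeLength, " cm closed-open tube:"
for n from 1 to 3
    f = (2 * n - 1) * c / (4 * tubeLength)
    appendInfoLine: "F", n, " = ", f, " Hz"
endfor

With these values the script prints 500, 1500, and 2500 Hz, roughly the formants of a neutral, schwa-like vocal tract.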

slides from week of 25 July

https://docs.google.com/leaf?id=0B09lW9XhoFydNWZiZTg0MmYtMThmMC00Y2I4LWJjNDItYjA5Y2Y2NzdmMWFh&hl=en_US

https://docs.google.com/leaf?id=0B09lW9XhoFydZWZkYzFmYTItOWZkMS00MDM5LTk0MmUtNjA4YjFiN2Q4ZGE0&hl=en_US

https://docs.google.com/leaf?id=0B09lW9XhoFydOWM4NjIxZDAtNWNiZS00YjA4LWE3ZDEtYjU5OGVkNGUwMTI5&hl=en_US

Wednesday, July 27, 2011

Clarification re 'velar softening'

> Hi Professor Ohala,
> In lecture on Tuesday, you demonstrated how a velar stop [k]
> sounded very similar to a [t] when the mid-frequency peak was
> filtered out.  Is this what happens in speech?
> Does that mid-frequency get filtered out to the listener?  Is
> there something that makes it particularly susceptible to
> that, since it doesn't seem to be an uncommon change?
> Do you know if this is similar to what happened in English?  I
> was told there was some sort of change, and words like 'chin'
> in English are 'Kinn' in German and that that was a regular
> sound change in an earlier stage.  Did the [k] first become a
> [t] and then an affricate?  Or how does the alveolar affricate
> come about or is theorized to come about?
> Thank you!
> Stephanie

Stephanie,

Some good questions.  Thank you for giving me the opportunity to clarify.

It is not so much that in natural speech there is some process which
filters out the mid-frequency peak but rather that through inattention,
perhaps masking noise, etc., this spectral peak can be missed.  (And keep
in mind the mid-frequency spectral peak is primarily found ONLY when the
velar stop is released into a high front vowel; it is not the same with
other vowels.)  Crucial to my story is that this spectral peak is pivotal
to the differentiation of /ki/ from /ti/ and that if it is overlooked the
percept is that of /ti/.  You are right that the development of /ti/ to
/tshi/ (the affricated version) is a separate, subsequent, change. (I
explained earlier how aerodynamic factors can lead to turbulence when air
is forced at high velocity through a narrow constriction such as is
created in the transition between /t/ and /i/.)   And we find some cases
of this change that just involve change of place and do not involve
affrication.

JJO

Monday, July 25, 2011

Assignment 2


Phonetic Alchemy:  turning a velar stop into an apical stop
Here is a wav file of me saying ‘ski’.  If you want, you can also use a file of any other native speaker of Am. English saying ‘ski’ (make sure you retain frequencies up to c. 10 kHz).

https://docs.google.com/leaf?id=0B09lW9XhoFydNWU3ZDI0ZTktODZiNS00MmU3LWE2OGItMTk3MTUxZDlmZDJm&hl=en_US

1.  Remove all of the /s/ up to the burst.  (What does it sound like?  Are you surprised?  It should be a voiceless unaspirated velar stop.)
2.  Make a separate file of the burst up to the point where the vowel starts (evident as the start of periodicity).
3.  Apply a stop-band filter to the burst (filter from 1500 to 4500 Hz; for a female voice you may have to use slightly higher limits; the idea is to filter out the mid-frequency peak in the burst spectrum).
4.  Substitute this filtered burst for the original burst.  What does it sound like?  (One listening strategy is to listen to it repeatedly and then see if you can convince yourself it is /ti/ or /ki/.)
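(If you would rather script steps 1-4 than carry them out by hand in the Praat editor, here is a minimal sketch.  The file name and the two time points are placeholders, not measurements: open the file first, measure the burst onset and the vowel onset yourself, substitute your own values, and adjust the filter limits as noted in step 3.)

# Minimal Praat-script sketch of steps 1-4; times below are placeholders.
sound = Read from file: "ski.wav"
tBurst = 0.350
tVowel = 0.420
selectObject: sound
tEnd = Get total duration
# step 1: remove the /s/, keeping everything from the burst onward
noS = Extract part: tBurst, tEnd, "rectangular", 1, "no"
# step 2: a separate file of just the burst, up to the vowel onset
selectObject: sound
burst = Extract part: tBurst, tVowel, "rectangular", 1, "no"
# step 3: stop-band filter the burst (raise the limits for a female voice)
burstFiltered = Filter (stop Hann band): 1500, 4500, 100
# step 4: substitute the filtered burst for the original burst and listen
selectObject: sound
vowel = Extract part: tVowel, tEnd, "rectangular", 1, "no"
selectObject: burstFiltered
plusObject: vowel
result = Concatenate
Play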

Report your results by 2 August.

slides from 21 July

phonotactics   
https://docs.google.com/leaf?id=0B09lW9XhoFydMzc4ZjJhMTQtNzM0YS00ODliLTllZDQtYmRmNDljYzgxOGZj&hl=en_US

labial-velars + spontaneous nasalization
https://docs.google.com/leaf?id=0B09lW9XhoFydMWE1OTE1MjQtODVkNS00NjNkLTgwZTAtNDA2MTUzMjMyM2I4&hl=en_US

Sunday, July 24, 2011

Caisse's 1988 dissertation

For those who requested a link to the 1988 dissertation of Michelle Caisse (who presented evidence that phonetic influences on vowel duration are additive whereas phonological effects are multiplicative), here it is:

https://docs.google.com/viewer?a=v&pid=explorer&chrome=true&srcid=0B09lW9XhoFydYjBjNzliMzUtNTVkNi00YTE5LWEyZDktYWUzODExNjc2YzJh&hl=en_US
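To make the additive vs. multiplicative distinction concrete (with made-up numbers, not Caisse's data): an additive effect contributes the same number of milliseconds to a short and a long vowel, whereas a multiplicative effect contributes proportionally more to the long one.  A throw-away Praat-script illustration:

# Made-up numbers, purely to illustrate additive vs. multiplicative effects.
shortVowel = 80
longVowel = 160
additiveIncrement = 25
multiplicativeFactor = 1.3
writeInfoLine: "additive: ", shortVowel + additiveIncrement, " and ", longVowel + additiveIncrement, " ms"
appendInfoLine: "multiplicative: ", shortVowel * multiplicativeFactor, " and ", longVowel * multiplicativeFactor, " ms"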

Friday, July 22, 2011

sword, swollen, etc

 

Prof Ohala,

In today's lecture you posited that some sound change is motivated by
phonotactic constraints and that sounds like [wu] and [ji] are rare
cross-linguistically because of a lack of modulation between the two
segments. One example you gave was that of 'sword', which bears
evidence of a [w] sound that is no longer present in modern
pronunciations. However, what are we to think about the existence of
such a sequence as [swo] to begin with? If language change moves away
from such a string of sounds, why would they be present at any stage
of a language's history?


Rebekka Puderbaugh
 
 
 
Dear Rebekka,

An interesting and valid question.  I could, of course, answer that the
acoustic-auditory pressure to avoid -wo- sequences is statistical, not
absolute, but the history of 'sword' is interesting and gives some hint as
to how the -wo- sequence came about at least in this case.  Etymologically
it is from Old Engl. sweord, which is from Proto-Gmc. swerdan.  So the -o-
no doubt is the survivor of a monophthongization of an original diphthong
where the rounded element did not abut the -w-.

I also checked the origin of other swo- and swu- words (there are very few
in English):  'swollen' is the past participle of 'swell' and might have been formed
on analogy of the ablaut pattern evident in 'tell-told', 'sell-sold',
'melt-molten' and the like.  'Swoop' has an uncertain history; but its
possible origins can be traced to words that did not have the -wu-
sequence, but rather 'swa-' or 'soo-' [su].
 
[Of course, this leaves wound, womb, ... to wonder and worry about.  But as mentioned 
at the start of my reply, I can play the 'statistical' card.]

JJO

Saturday, July 16, 2011

Speaker vs Listener-based Sound Change

Speaker-caused vs. Listener-caused Sound Change
Camille X has presented me with some examples of sound change which require a more careful ‘parsing’ of the word ‘cause’.  (Apparently we have stumbled into a domain with deep philosophical and metaphysical implications:  http://plato.stanford.edu/entries/causation-metaphysics/  ).  But let me try to simplify this by invoking the legal concepts of ‘murder’ as opposed to ‘(involuntary) manslaughter’.  In both cases the result is a dead person and someone who *caused* that person to be dead.  A person is guilty of murder if they intentionally caused the death; the charge of involuntary manslaughter applies if that person caused the death but without intention (e.g., through negligence).  The legal consequences, of course, are very different.  A similar distinction applies when talking about sound change (though, thankfully, there are no legal consequences).
Even though there are sound changes such as stop emergence (e.g., Thompson from Thom + son) that are ‘caused’ by speakers (due to premature velic closure and glottal abduction during the latter portion of the nasal), they are responsible only for ‘involuntarily’ creating conditions which might lead to sound change.  But in our domain (as opposed to the legal domain), the listener also can be said to have ‘caused’ the sound change because she failed to engage the usual ‘normalization’ or ‘correction’ processes which allow for the discounting of the emergent stop in the stated environment.  So who is to blame?  In my view, neither party is to blame.  This is why I have referred to such sound changes as due to ‘innocent misapprehensions’.  Neither the speaker nor the listener intended for a new pronunciation norm to be created; it emerged from explicable and blameless actions of both.
As in homicide cases, this example can be parsed in more detail.  If we are partitioning ‘cause’ between the speaker and listener we might also assign less of the ‘total cause’ to the speaker because she might plead that she was counting on the listener to be able to use the expected normalization or corrective measures to recover her (the speaker’s) intended pronunciation.  And ultimately in this case where one might have a chain of causal events, the listener was the last possible ‘filter’ that could have prevented the sound change.  This is why I have labeled such sound changes as ‘listener based’.
In the same class would be cases of stop affrication before high, close vowels, as well as cases where the speaker tries to overcome the Aerodynamic Voicing Constraint by implementing implosion, retroflexion of apicals, pre-nasalization, or ATR (Advanced Tongue Root).
In a different class would be sound changes where the speaker can be considered to have made her articulations in a way that adhered to the canonical pronunciation but, due to physical, physiological, or acoustic constraints, the output signal was ambiguous.  This happens in some cases of sound change involving labial-velars (doubly articulated consonants like /w, kp, gb, ŋm/) and in some cases of palatalized or labialized consonants [to be treated in later lectures].
In all these cases, though, it is the listener’s blameless error that introduces a new pronunciation norm, i.e., what I have referred to as a ‘mini-sound change’.  Why is this important?  Two reasons:
1)  It undermines claims that speakers introduce a new pronunciation error due to either laziness (ease of articulation, also termed ‘lenition’) or hyperarticulation (to make things clearer for the listener, also termed ‘fortition’).  These are teleological explanations, i.e., they attribute to the speaker some goal, some purpose.  If speakers had such freedom to change pronunciation norms then sound change would occur more rapidly and more often than is the case, and it would suggest that they have no compelling interest in communicating with others in their speech community.
2)  Attributing sound change to lenition or fortition is done without much (any?) empirical evidence.  They are like game pieces, played at will.
Why am I pushing the idea of listener-based sound change?    
1)  A wider range of sound changes may be explained by this theory, and furthermore, empirical evidence for this account may be (and has been) obtained through controlled experimental work.
2)  It characterizes sound change as non-teleological and thus re-affirms the fundamental principle that we speak in order to communicate (and this requires adhering to the pronunciation norms of the speech community).  Sound change, therefore, is rare.

Thursday, July 14, 2011

new posts 14 July 2011

Here are the links for:

Is phonetics complicated?
https://docs.google.com/leaf?id=0B09lW9XhoFydM2FiZWQ4MGYtMTdmMC00NmJiLWE3ZDAtM2MyN2U2M2FmYjM3&hl=en_US/

Lecture 2 (spanning last Monday's and today's (Thursday) presentations)
https://docs.google.com/leaf?id=0B09lW9XhoFydMDkxOWVjYmQtYjJkNS00ZTgwLTlmNTYtOTE0MzFlNjE5MWI3&hl=en_US

My paper on 'Clear Speech', testing the 'contrastiveness hypothesis':

https://docs.google.com/viewer?a=v&pid=explorer&chrome=true&srcid=0B09lW9XhoFydYmYyOWVhNTItNzYxMi00OTNkLTlkZDgtOTI3N2ZkOTFjNDI4&hl=en_US

Tuesday, July 12, 2011

Assignment 1


Assignment  1:  Consonantal Place Assimilation
In the paper “The phonetics and phonology of aspects of assimilation” (one of the readings for this course), it is noted that assimilation of place of articulation in medial –CC- clusters is very common.  A few examples from Italian and other languages are given:
                Late Latin          Italian
                noktu               notte           “night”
                oktu                otto            “eight”
and
                primu tempus        Fr. printemps   “Spring”
Traditionally these were explained as due to the speaker, being lazy, simplifying the complex heterorganic clusters by making both C’s at the same place.
Part 1.
I want you to make your own evaluation of this hypothesis by using Praat to make some heterorganic medial clusters and hearing what they sound like to you – and what they sound like to at least ONE other listener who is not in this course and who has not read the cited paper.  You can use your own speech samples if you wish or you can download a .wav file with the VCV utterances [apa ata aka ipi iti iki upu utu uku] (here’s the link:)
https://docs.google.com/leaf?id=0B09lW9XhoFydOGNhMWRiYWUtZGI3ZS00ZjNhLWI0OTEtYTU0ODY1NTJmYTk2&hl=en_US
(The recording is not very good; it is noisy and has annoying clicks at the beginning and end – these are a consequence of my having to increase the amplitude above what my ‘cheapo’ microphone provided.  Sorry.)
What I want you to do is relatively simple:
Make at least 4 different heterorganic medial clusters by combining the VC- from one of the VCV utterances with the –CV of another (where the places of articulation of the two C’s are different).  For the ‘default’ -VC1C2V-, the duration of the medial silence should be the average of the durations of the two original medial stops.  If you splice the VC- and –CV exactly in the middle of the stop silence this is what you will have.  Ultimately you will be presenting your Praat-created heterorganic medial clusters to some listener in addition to yourself: this listener should be naive (as to what you’ve done to make the samples).  But before you do that I’d like you to do at least ONE innovative variation on this experiment: your choice, but here are some suggestions:
1.  Does it make any difference (as to what you and the other listener report hearing) if the duration of the medial silence is varied?  (But the medial silence should not be shorter than 60 msec and not be longer than the combined duration of the original C durations from which the samples were taken).
2.  If you decide to do this experiment with your own speech samples: does it make any difference (as to how they are perceived) if voiced stops are used instead of voiceless stops?
3.  As we all know, the place of articulation of a prevocalic stop is cued by the spectral properties of the burst AND the transition.  Does it make any difference to the percept if one cuts off the burst and leaves the transition?  (Since the speech sample I provide is Am. English, the stops’ releases have a burst and a period of aspiration following; if you opt to do this elaboration, you’ll have to be judicious in separating the burst from the aspiration since part of the transition is manifested during the aspiration.)
4.  Something else that I haven’t thought of.
I also leave up to your best judgment whether you present your samples to the other listener in some order or randomly. 
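(For those who prefer to script the basic splice rather than cut and paste in the Praat editor, here is a minimal sketch of one ‘default’ -VC1C2V- item.  The file name and all time points are placeholders; measure the vowel onsets/offsets and the closure midpoints of your own tokens and substitute them.)

# Sketch: build one heterorganic -VC1C2V- by joining the VC- half of one
# token to the -CV half of another, cutting each at the middle of its
# stop closure.  All times are placeholders; measure your own.
vcv = Read from file: "vcv.wav"
tV1start = 0.100
tClosure1mid = 0.350
vcPart = Extract part: tV1start, tClosure1mid, "rectangular", 1, "no"
selectObject: vcv
tClosure2mid = 1.250
tV2end = 1.600
cvPart = Extract part: tClosure2mid, tV2end, "rectangular", 1, "no"
# cutting at the closure midpoints makes the medial silence the average of
# the two original closures (the 'default' case); for variation 1, insert
# a silent Sound of the desired duration between the two parts instead
selectObject: vcPart
plusObject: cvPart
cluster = Concatenate
Play
Save as WAV file: "vc1c2v.wav"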
Part 2.
Here is a link to a .wav file with the utterances [ampa   anta  aŋka]. 
https://docs.google.com/leaf?id=0B09lW9XhoFydOGNhMWRiYWUtZGI3ZS00ZjNhLWI0OTEtYTU0ODY1NTJmYTk2&hl=en_US
In this case please use this speech sample for your manipulations, not your own.  From these you can create a total of 6 heterorganic –NC- clusters, i.e., labial + alveolar, labial + velar, alveolar + labial, alveolar + velar, velar + labial, velar + alveolar.  In this case keep the duration of the voiceless stop at 50 msec.  How do you and your other listener(s) identify these creations?  The crucial question, of course, is whether they hear homorganic or heterorganic –NC- clusters.
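(Part 2 can be scripted the same way as Part 1; the only difference is that the stop closure is a fixed 50 msec of silence inserted between the -VN portion of one token and the burst-plus-vowel of another.  Again, the file name, the times, and the 44100 Hz sampling rate below are placeholders; match them to the actual file.)

# Sketch: -VN from one token + 50 ms silence + stop burst and vowel from
# another token.  File name, times, and sampling rate are placeholders.
ncv = Read from file: "nasal_vcv.wav"
tV1start = 0.100
tNasalEnd = 0.300
vnPart = Extract part: tV1start, tNasalEnd, "rectangular", 1, "no"
selectObject: ncv
tBurst2 = 1.400
tV2end = 1.700
cvPart = Extract part: tBurst2, tV2end, "rectangular", 1, "no"
# 50 ms of silence for the voiceless stop closure
silence = Create Sound from formula: "closure", 1, 0, 0.05, 44100, "0"
selectObject: vnPart
plusObject: silence
plusObject: cvPart
cluster = Concatenate
Play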
There will not be enough data for a statistical analysis, but I want you to report what you did and what the results were (what you heard and what your minimum of one subject heard) and to interpret them with respect to the ‘lazy speaker’ hypothesis.  Due date:  25 July.

Sunday, July 10, 2011

on u > v; history of discovery of language families

Hsin-Chang Chen writes:

Since my master's studies in Taiwan, inspired by my advisor, I've been
doing some research on finding microscopic sound change implications
(implicational scaling) across hundreds of Chinese dialects from IPA
transcriptions published by different authors.  One of my findings about
Mandarin dialects, which I don't know how to explain phonetically yet, is:
the rhyme [u] conditionally becomes syllabic [v] in the following order:
first u > v (without a consonant initial), then fu > fv, then ku > kv
and khu > khv (unaspirated and aspirated k), then xu > xv (or hu > hv), and finally u becomes
v after all other consonant initials (no distinctions are attested).  There
are no exceptions.  The dialects (at least seven in number, depending on how
you count them) found with u > v are found hundreds or thousands of miles
apart with dialects that don't show u > v in the areas separating them.  I
wonder if this sound change implication can be phonetically universal and
how we can account for it phonetically.

Ohala replies: 

That it is a high vowel that is subject to this change can be explained by aerodynamic principles.  This is covered briefly in the first of the course readings “The origin of sound patterns in vocal tract constraints.”  See pp. 204ff.  There is another paper that goes into this pattern that is not on my home page, but here is a link:
An interesting aspect of the sound pattern you found is that [u], which has two constrictions, one at the tongue dorsum and another at the lips, manifests only the labial constriction, not the dorsal one, when it changes to a voiced approximant.  This pattern can be explained by acoustic principles.  See the paper on my home page, “The story of [w]” (1977); here’s the link:
As for the order of the implicational hierarchy, more research would have to be done.  My suspicion is that acoustic factors might be involved.  This would best be done on a language/dialect that had similar sounds and sound sequences but which has not exhibited these sound changes.  In such a study one looks for the “seeds” of sound change.

7/7 class:
You didn't mention the discovery of the Finno-Ugric language family.  I
remember reading somewhere (probably in one of the books on that language
family) that the relationship of Sami, Finnish/Estonian and Hungarian was
found earlier than the discovery of the IE language family.  But it could be
false.  A quick search on Wikipedia didn't turn up much.  Also, it would
be awesome if you could mention in passing when the other major language
families of the world were discovered.

Ohala replies:
There are a dozen or so major language families and, quite frankly, I do not control the literature on the history of their discovery, which includes convincing empirical (statistical) evidence.  As in the case of Indo-European, such discovery is a gradual process involving many individuals contributing their evidence at different times.  You are correct that some of the early discoveries of the Finno-Ugric family came in the late 18th c.  Although there had long been a recognition of the relation of Hungarian and Finnish (and other languages in the Baltic region, e.g., Estonian), a rather dramatic breakthrough came in 1770 when János Sajnovics published his Demonstratio… [I have an orig. ed. in my collection] showing that Sami, a language spoken in the north of Norway, was also related to Hungarian.  His work was built on and elaborated by Johan Ihre (whom I mentioned in class) in 1772 and especially by Samuel Gyarmathi in 1799.

Saturday, July 9, 2011

slides from 7 july 2011

Here's the link for a pdf version of last Thursday's slides:

https://docs.google.com/viewer?a=v&pid=explorer&chrome=true&srcid=0B09lW9XhoFydMjdjMTYwYTYtMzY1MC00NDBlLTk5MTYtNmUwYWJiZGFmM2Nm&hl=en_US

Thursday, July 7, 2011

LINKS to stuff mentioned in class 7 July 2011


I mentioned in class a paper by Bill Poser on Thomas Young’s proposal that Basque and Egyptian are historically related.  Here is the URL for that paper:  http://www.billposer.org/Papers/young.pdf
And, if you are serious about wanting to know the history of ideas and accomplishments in diachronic phonology, you owe it to yourself to at least look at the Wikipedia entries (what a wonderful resource!) for:
ten Kate: http://nl.wikipedia.org/wiki/Lambert_ten_Kate   This is in Dutch, but you can avail yourself of Google’s translation if you wish.
as well as this account by Gerrit H. Jongeneelen of ten Kate’s linguistic contributions:  http://home.wanadoo.nl/vvdghj/KV/ch20s06.html

de Brosses:  http://en.wikipedia.org/wiki/Charles_de_Brosses

James Burnett, Lord Monboddo:  http://en.wikipedia.org/wiki/James_Burnett,_Lord_Monboddo 

Hervas: http://en.wikipedia.org/wiki/Lorenzo_Herv%C3%A1s_y_Panduro

Ihre:  http://en.wikipedia.org/wiki/Johan_Ihre

Thomas Young:  http://en.wikipedia.org/wiki/Thomas_Young_%28scientist%29

This Wikipedia entry also mentions his proposal of a universal phonetic alphabet, which I neglected to mention in my lectures.  But I have an original edition of his Göttingen dissertation (1796; "De corporis hvmani viribvs conservatricibvs. Dissertatio.") where this alphabet is presented.  You’ll have to visit me in Berkeley to see it.

More on the kymograph: this entry (translated from a German Wikipedia entry) correctly attributes its invention to Thomas Young (the main English Wikipedia entry does not).

More on Étienne-Jules Marey, who pioneered in the physiology of systems that moved:

There are also several short films re-creating Marey’s studies of animal motion on YouTube.

I didn’t mention it in class but he was also the inventor of ‘Marey’s capsule’, a device for transducing movements or air pressure into movements of the stylus of a kymograph.  Marey’s capsule figures prominently in late 19th c. and early 20th c. phonetic studies.