How would you diagram this English sentence?

Discussion of natural languages, or language in general.
Rory
Lebom
Posts: 226
Joined: Sun Jun 15, 2003 4:37 pm
Location: Scotland

Re: How would you diagram this English sentence?

Post by Rory »

If these ideas are interesting to you, I suggest you read up on Construction Grammar, and possibly also Evolutionary Phonology. They very much abandon the formal apparatus of the past in accounting for synchronic behavior - it just is, because it evolved that way.
The man of science is perceiving and endowed with vision whereas he who is ignorant and neglectful of this development is blind. The investigating mind is attentive, alive; the mind callous and indifferent is deaf and dead. - 'Abdu'l-Bahá

merijn
Lebom
Posts: 207
Joined: Thu Dec 21, 2006 10:36 pm
Location: Utrecht Overvecht

Re: How would you diagram this English sentence?

Post by merijn »

"Elegant" is how Ockham's razor translates into linguistics. If you have completely separate rules for separate constructions it gets really hard to falsify the theory, because if you have an exception you can just make up a new set of rules, whereas if you strive to handle lots of different cases in one set of rules, an exception is really counter-evidence.

My arguments all assume that you allow for movement (and hence an underlying order), but I believe a system without movement would still call SOV the default order rather than SVO. I have never been very interested in word order or in Germanic languages, though, so I don't know exactly how the word order of German and Dutch is dealt with in other frameworks. But within my framework, the evidence that the underlying word order of Dutch and German is SOV is overwhelming. To give you another piece of evidence: Dutch has a verb hoeven "need to" that is a negative polarity item, that is, it must fall under the scope of negation (or, to be more precise, under the scope of a quantifier that is downward entailing). Scope is often thought to be closely related to c-command. In the sentence Ik hoef hem niet te zien "I don't need to see him", the most natural way to bracket the sentence (ignoring movement for now) is [ik [hoef [hem [niet [te zien]]]]], and here the negation niet doesn't c-command hoef. This is consistent with the idea that hoef is base-generated in a lower position.
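That c-command test can be made concrete with a toy script. This is my own illustration, not merijn's formalism: constituents are nested pairs, and in a binary tree a node c-commands exactly the material inside its sister.

```python
# Toy illustration (mine, not from the post): the bracketing
# [ik [hoef [hem [niet [te zien]]]]] as nested pairs, plus a c-command
# check: a node c-commands exactly what lies inside its sister node.

sentence = ("ik", ("hoef", ("hem", ("niet", "te zien"))))

def find_path(tree, target, path=()):
    """Path of left/right (0/1) choices from the root down to `target`."""
    if tree == target:
        return path
    if isinstance(tree, tuple):
        for i, child in enumerate(tree):
            p = find_path(child, target, path + (i,))
            if p is not None:
                return p
    return None

def c_commands(tree, a, b):
    """True if `a` c-commands `b` (assumes neither is the root node)."""
    pa, pb = find_path(tree, a), find_path(tree, b)
    sister = pa[:-1] + (1 - pa[-1],)   # flip the last branch choice
    return pb[: len(sister)] == sister  # is b inside a's sister subtree?

print(c_commands(sentence, "niet", "hoef"))   # False: niet doesn't c-command hoef
print(c_commands(sentence, "hoef", "niet"))   # True: hoef does c-command niet
```

On this surface bracketing the NPI hoef escapes the scope of niet, which is the puzzle the movement analysis is meant to solve.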

In this particular case I don't think that the apparent movement is the result of a diachronic process, since IIRC the V2 order is older than the strict OV order. I don't know what the default order was in earlier Dutch and German, but VO order with non-finite verbs was quite common in Old Dutch. The same applies to the article I am working on now, where the order with less movement is probably an innovation (and it is quite easy to see how it may derive diachronically from a construction that other Bantu languages have but Zulu doesn't), whereas the construction with the movement goes back to Proto-Bantu. Movement is needed to explain why words end up in positions other than where you would expect them based on the semantics. In "who do you think I saw yesterday", who is the patient of "saw" but it is pronounced at the beginning of the sentence. Any framework needs to deal with this. How does Tomasello's framework deal with it?
People forget that a diachronic explanation cannot replace a synchronic explanation, at least not language-internally, just as a synchronic explanation cannot replace a diachronic explanation. I do think that diachronic considerations can explain certain typological tendencies (why languages with postpositions are often SOV, for instance). But for a specific language you still need to explain why the word order ends up the way it does synchronically, how a pronounced sentence can be paired with a certain meaning.

I have nothing against construction grammars. I think the idea they are based on (that constructions are stored like words in our mind, with their own meaning) is very interesting, and I think it could very well be true. But I rarely come across them, with the exception of one article in Construction Grammar (that is one particular type of construction grammar, I know it's confusing). Some other frameworks, such as LFG, HPSG and Dynamic Syntax, are much more common. The articles that I have read (and those are ones I specifically searched for) seem to be more focused on the theoretical and semantic grounding of the analysis than on the analysis itself. I have the impression that certain stripes of construction grammar are used more by semanticists working in cognitive linguistics to handle the syntactic side of their analyses than by syntacticians themselves. I don't think the tools are ready yet to analyze the things I want to analyze. I also think that if construction grammar wants to take over the linguistic world, it would be foolish to ignore the work generative grammarians have done over the years, for instance on word order in Dutch and German (one of the most discussed subjects in Generative Grammar), and it should somehow incorporate their insights into its framework.
I am far from a specialist on child language, and it could very well be that Tomasello is right. At the moment I believe that there is something like UG, but that it is very small, and may consist only of the idea that sentences are built from smaller units; in a construction grammar this could be stated as the idea that a construction may consist of smaller constructions. Other universal features (such as the three domains I mentioned in the post above) could be the result of general cognitive or logical structures.

PS I have been told that in the two-word phase English children say "eat cookie", but Dutch children say "koekje eten", which is "cookie eat". Make of that what you will.

jal
Sumerul
Posts: 2633
Joined: Tue Feb 06, 2007 12:03 am
Location: Netherlands

Re: How would you diagram this English sentence?

Post by jal »

Dutch (I'm not sure about German) of course also has VSO: "Gisteren zag ik de hond". You can call that V2, but that's just an umbrella for SVO/VSO, and it doesn't cover the common SOV...


JAL

zompist
Boardlord
Posts: 3368
Joined: Thu Sep 12, 2002 8:26 pm
Location: In the den

Re: How would you diagram this English sentence?

Post by zompist »

merijn wrote:"Elegant" is how Ockham's razor translates into linguistics. If you have completely separate rules for separate constructions it gets really hard to falsify the theory, because if you have an exception you can just make up a new set of rules, whereas if you strive to handle lots of different cases in one set of rules, an exception is really counter-evidence.
The problem with syntax, for thirty years, has been that linguists stop arguing about syntax and instead start lecturing each other about philosophy of science. The thing is, Tomasello's Constructing a Language is precisely an attack on Chomskyan linguistics as being uninterested in evidence (there is no evidence from child acquisition that children flip parameter switches rather than learning constructions) and entirely unfalsifiable (whenever the data are difficult you can just make up a new parameter).

I recommend reading the book before attempting to debate it. But at the least, please understand that it is evidence-based. He looks at the progression of child language, something Chomsky never does, and he finds that it just does not support Chomskyan declarations.
In this particular case I don't think that the apparent movement is the result of a diachronic process, since IIRC the V2 order is older than the strict OV order.
History is just one possible explanation for syntactic patterns, as I said. E.g. there's also analogy, borrowing, etc.
In "who do you think I saw yesterday" who is the patient of "saw" but it is pronounced at the beginning of the sentence. Any framework needs to deal with this. How does Tomasello's framework deal with it?
Very easy: (English-speaking) children learn to say questions that way because that's what they hear! "Where is Daddy? What is the bear doing? Who is he talking to?"

Mind you, I don't expect a few postings or even a whole book to change your mind, necessarily. In syntax it's understandable to stick with the framework you know. I've read a lot more generative than cognitive linguistics, so even though I agree with the latter more, it's easier to produce analyses with traditional tree structures. At the same time, recognize that other approaches exist and sometimes people just don't follow the same heuristics. I was really reacting against the "simplicity" criterion, which at the very least ignores the complexity of positing move rules in the first place. And which begs the question of whether the brain really cares about minimizing rules in that way.

As a software developer, I've always thought that linguists are over-enamored of algorithms. Chomsky started with very simple language generators, of the sort that computers are good at implementing. But the brain is not a computer! It does not have a single processor, it doesn't separate program and data, it just doesn't reward the sort of procedural algorithms we develop for computers. What the brain is really good at is distributed processing and pattern recognition. Learning constructions separately, as Tomasello talks about, fits the brain very well.

I remember some article where a linguist proposed some clever ways of reducing the burden on the mental lexicon, and it seemed quite wrong-headed to me. That's "simple" if you're writing a computer program: refactor any shared information to reduce the program size. It's not simple if you have processors to burn. Lexical lookup in the brain might be a matter of sending a signal to neurons representing every single word and seeing which one fires. And there could be any amount of data attached to a lexical entry. And for that matter, refactoring representations as new knowledge comes in might be more difficult, not easier, in such a structure. In a distributed structure, it's always easy to add more rules and more exceptions, but very hard to redo the earliest bits.
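That lookup idea can be sketched in a few lines. This is my own toy illustration with invented entries, not zompist's or anyone's actual model: every entry is a self-contained "word unit" carrying its full data, so adding an exception is just one more entry, with nothing shared to refactor.

```python
# Toy sketch of distributed lexical lookup (my illustration, made-up data):
# each entry stores everything about its word, even information a
# "refactored" lexicon would derive by rule (e.g. regular plurals).

lexicon = [
    {"form": "dog", "pos": "noun", "plural": "dogs"},  # regular, stored anyway
    {"form": "ox",  "pos": "noun", "plural": "oxen"},  # exception: just one more entry
    {"form": "run", "pos": "verb", "past": "ran"},
]

def lookup(signal):
    # Conceptually every unit checks the signal in parallel ("which neuron
    # fires?"); here we simulate that with a scan over independent entries.
    fired = [entry for entry in lexicon if entry["form"] == signal]
    return fired[0] if fired else None

print(lookup("ox")["plural"])   # oxen
```

The point of the sketch is the cost model: growth is cheap (append an entry), while reorganizing what the existing entries share would mean touching all of them.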

Radius Solis
Smeric
Posts: 1248
Joined: Tue Mar 30, 2004 5:40 pm
Location: Si'ahl

Re: How would you diagram this English sentence?

Post by Radius Solis »

For a further line of argument:
zompist wrote:In a distributed structure, it's always easy to add more rules and more exceptions, but very hard to redo the earliest bits.
This is true of all kinds of emergent structures, including evolution (e.g. bigger skulls are easy, but our basic body design has changed little in 400 million years) and culture (table manners go all over the place but the practice of families eating together has continued since time immemorial) and even the designs of cities and computers and many other things that were human-created (surely we've all seen that old lore about railroad gauge being descended from Roman chariots). For this reason, I think any examination of how our brains process language should not disregard any evidence that might be available from how animal brains handle their various calls and cries - our language capacity is presumably evolved from a comparable system, after all.

And therefore it seems reasonable to me, when presented with a proposal for what parts of our language are passed on genetically, to ask whether any seeds of such a thing can be found in how other apes call to each other. So, for instance, by that light the proposal that there is some kind of instinct driving us to communicate vocally is immediately supported - whereas, thus far, the general consensus among linguists seems to be that apes show no sign of having any syntax at all. That may be arguable, but if it is true, then how much parameter-based syntax can evolution really have managed to cram into our brains in the four-ish million years since we separated from the chimps? Not to mention: why would it bother? What evolutionary advantage could a minimal proto-UG system have offered such that it was able to outcompete the more obvious solution of repeating and building on whatever patterns you hear?

jal
Sumerul
Posts: 2633
Joined: Tue Feb 06, 2007 12:03 am
Location: Netherlands

Re: How would you diagram this English sentence?

Post by jal »

Quite often, one can describe what we see in nature with models, even if those models are not the basis of what we see. So if a certain model, no matter how complex, gives the same output as the naturally derived outcome, the model can be said to "fit" or "accurately describe" the natural situation. Also, the fact that the brain learns by adding pieces to the puzzle and cannot "comprehend" the whole picture until it has all the pieces does not refute the finished puzzle. Translated to language: the fact that children do not learn language closely along the lines of a supposed model does not refute that model, as long as the final outcome (adult language) fits.

Note that I know next to nothing about formal syntactic theories, but observing this discussion, I thought the above needed mentioning.


JAL

merijn
Lebom
Posts: 207
Joined: Thu Dec 21, 2006 10:36 pm
Location: Utrecht Overvecht

Re: How would you diagram this English sentence?

Post by merijn »

zompist wrote:
merijn wrote:"Elegant" is how Ockham's razor translates into linguistics. If you have completely separate rules for separate constructions it gets really hard to falsify the theory, because if you have an exception you can just make up a new set of rules, whereas if you strive to handle lots of different cases in one set of rules, an exception is really counter-evidence.
The problem with syntax, for thirty years, has been that linguists stop arguing about syntax and instead start lecturing each other about philosophy of science. The thing is, Tomasello's Constructing a Language is precisely an attack on Chomskyan linguistics as being uninterested in evidence (there is no evidence from child acquisition that children flip parameter switches rather than learning constructions) and entirely unfalsifiable (whenever the data are difficult you can just make up a new parameter).

That generative linguists don't look at the facts is not true. I am a generative linguist and I look at the facts. The paper we are arguing against, by a generative linguist, looks at the facts. All the generative linguists I know look at the facts, and let the facts drive their argumentation. And that is even true of Chomsky. Go to lingbuzz and you'll see that most articles are about explaining the facts, not about the philosophy of science. I don't know a lot about child language, but I can assure you that generative linguists in that area also look at the facts, and that according to them there is actual factual evidence for a UG. It is precisely the attitude whereby you dismiss decades of work by saying "oh, those generative grammarians are not paying attention to the evidence, so it's OK to ignore them" that is killing linguistics (and I have seen similar claims by generative linguists about functionalists that I find equally disgusting).
About falsifiability: our paper is falsifiable. As a matter of fact, on Thursday we are going to our informant to check whether the last examples we have added to the paper are correct. If they aren't, then it is back to square one for us, and for a few other linguists working on Zulu: our theory will have been falsified. (We are quite certain that they are correct; for one thing, they are corroborated by Google searches.) It is also precisely because our analysis is relatively simple that it is falsifiable. It is true that as a bigger model of grammar (not talking about language acquisition) generative linguistics is impossible to falsify, but the same is true for construction grammar. It is the analyses couched in these frameworks that are falsifiable. A framework is successful in my eyes if you can couch in it relatively simple, falsifiable analyses that explain a wide range of data, and unsuccessful if you need a lot of ad hoc explanations to explain the data (for instance, adding a parameter for every problem, something I haven't seen done in at least fifteen years).
I recommend reading the book before attempting to debate it. But at the least, please understand that it is evidence-based. He looks at the progression of child language, something Chomsky never does, and he finds that it just does not support Chomskyan declarations.
Again, generative linguists, including Chomsky, do look at the evidence. It is true that Chomsky has written relatively little about acquisition, but there are plenty of other generative linguists who have, and they find support for (some of) Chomsky's ideas. I am not hostile to the idea that Chomsky is wrong when it comes to language acquisition, but I am hostile to the attitude that generative grammar is not evidence-based.
In this particular case I don't think that the apparent movement is the result of a diachronic process, since IIRC the V2 order is older than the strict OV order.
History is just one possible explanation for syntactic patterns, as I said. E.g. there's also analogy, borrowing, etc.
Those are all diachronic explanations, and they can't replace synchronic explanations. Let me give you a comparison with biology (not mine, by the way). Birds have wings. You can give two kinds of explanations for that fact. One is that they evolved wings because it was an advantage to fly. The other is that they have wings because they have certain genes which trigger the production of certain proteins which eventually make them grow wings. You'll never find a biologist who believes that one kind of explanation removes the need for the other, yet you find linguists doing that all the time. Similarly with syntax. For the Dutch sentence "hem zag ik gisteren", meaning "him I saw yesterday", we need to explain why speakers don't say "hem ik gisteren zag", as well as why speakers and hearers interpret "hem" as the object of "zag" and see "hem" as the topic. That holds regardless of its origin.
In "who do you think I saw yesterday" who is the patient of "saw" but it is pronounced at the beginning of the sentence. Any framework needs to deal with this. How does Tomasello's framework deal with it?
Very easy: (English-speaking) children learn to say questions that way because that's what they hear! "Where is Daddy? What is the bear doing? Who is he talking to?"
I am not asking how they learn it, but by what mechanism they interpret it.
Mind you, I don't expect a few postings or even a whole book to change your mind, necessarily. In syntax it's understandable to stick with the framework you know. I've read a lot more generative than cognitive linguistics, so even though I agree with the latter more, it's easier to produce analyses with traditional tree structures. At the same time, recognize that other approaches exist and sometimes people just don't follow the same heuristics. I was really reacting against the "simplicity" criterion, which at the very least ignores the complexity of positing move rules in the first place. And which begs the question of whether the brain really cares about minimizing rules in that way.
I am aware of other frameworks; that is why I said in my post "in my framework the evidence for SOV is overwhelming". I do worry that if a framework doesn't strive for simplicity, its analyses are very hard to falsify, and it will be harder to say why one analysis is to be preferred over another if they explain the same facts.
As a software developer, I've always thought that linguists are over-enamored of algorithms. Chomsky started with very simple language generators, of the sort that computers are good at implementing. But the brain is not a computer! It does not have a single processor, it doesn't separate program and data, it just doesn't reward the sort of procedural algorithms we develop for computers. What the brain is really good at is distributed processing and pattern recognition. Learning constructions separately, as Tomasello talks about, fits the brain very well.

I remember some article where a linguist proposed some clever ways of reducing the burden on the mental lexicon, and it seemed quite wrong-headed to me. That's "simple" if you're writing a computer program: refactor any shared information to reduce the program size. It's not simple if you have processors to burn. Lexical lookup in the brain might be a matter of sending a signal to neurons representing every single word and seeing which one fires. And there could be any amount of data attached to a lexical entry. And for that matter, refactoring representations as new knowledge comes in might be more difficult, not easier, in such a structure. In a distributed structure, it's always easy to add more rules and more exceptions, but very hard to redo the earliest bits.
I kind of agree with you here. There is a tendency among certain linguists to try to explain the lexicon away, and I don't think that that is necessary.

I do think that Mainstream Generative Grammar is due for a big overhaul. There are some things that have grown over the years that are not logical. And I think it would be useful to look at some ideas developed in other frameworks such as construction grammars (and I know there are people trying to do exactly that), perhaps including some of Tomasello's ideas about language acquisition. But I am hostile to the idea that generative linguists do not care about the evidence. And I also have my doubts about the idea that simplicity shouldn't matter at all. (It would surprise me if people in the various types of construction grammar didn't prefer a simpler explanation to a more complicated one; they are scientists, after all.)

jal
Sumerul
Posts: 2633
Joined: Tue Feb 06, 2007 12:03 am
Location: Netherlands

Re: How would you diagram this English sentence?

Post by jal »

merijn wrote:I am aware of other frameworks; that is why I said in my post "in my framework the evidence for SOV is overwhelming". I do worry that if a framework doesn't strive for simplicity, its analyses are very hard to falsify, and it will be harder to say why one analysis is to be preferred over another if they explain the same facts.
As long as the framework is an artificial model for making predictions about language, I'm fine with that. As long as one (and I'm not saying you do) does not extrapolate that to claiming that the framework is actually underlying the mental patterns, or the working of the mind in general.


JAL

merijn
Lebom
Posts: 207
Joined: Thu Dec 21, 2006 10:36 pm
Location: Utrecht Overvecht

Re: How would you diagram this English sentence?

Post by merijn »

jal wrote:
merijn wrote:I am aware of other frameworks; that is why I said in my post "in my framework the evidence for SOV is overwhelming". I do worry that if a framework doesn't strive for simplicity, its analyses are very hard to falsify, and it will be harder to say why one analysis is to be preferred over another if they explain the same facts.
As long as the framework is an artificial model for making predictions about language, I'm fine with that. As long as one (and I'm not saying you do) does not extrapolate that to claiming that the framework is actually underlying the mental patterns, or the working of the mind in general.


JAL
It is primarily a model for making predictions about language, but it should lead to a model of the mind. I fail to see why striving for as few ad hoc explanations as possible is wrong for a scientific model of the mind. Doesn't science work by starting with the simplest model and then making it only as complicated as needed?

zompist
Boardlord
Posts: 3368
Joined: Thu Sep 12, 2002 8:26 pm
Location: In the den

Re: How would you diagram this English sentence?

Post by zompist »

merijn wrote:
zompist wrote:The problem with syntax, for thirty years, has been that linguists stop arguing about syntax and instead start lecturing each other about philosophy of science. The thing is, Tomasello's Constructing a Language is precisely an attack on Chomskyan linguistics as being uninterested in evidence (there is no evidence from child acquisition that children flip parameter switches rather than learning constructions) and entirely unfalsifiable (whenever the data are difficult you can just make up a new parameter).

That generative linguists don't look at the facts is not true. I am a generative linguist and I look at the facts. The paper we are arguing against, by a generative linguist, looks at the facts. All the generative linguists I know look at the facts, and let the facts drive their argumentation. And that is even true of Chomsky. Go to lingbuzz and you'll see that most articles are about explaining the facts, not about the philosophy of science. I don't know a lot about child language, but I can assure you that generative linguists in that area also look at the facts, and that according to them there is actual factual evidence for a UG. It is precisely the attitude whereby you dismiss decades of work by saying "oh, those generative grammarians are not paying attention to the evidence, so it's OK to ignore them" that is killing linguistics (and I have seen similar claims by generative linguists about functionalists that I find equally disgusting).
If you're so upset with suggestions of unfalsifiability, then maybe you shouldn't lead with that accusation when you hear about a new theory. Without knowing a thing about Tomasello's theory, you immediately assumed it was unfalsifiable.

But really, calm down and respond to what I'm saying, not to the imagined attack on all of generative linguistics. You are talking as if someone has tossed out Occam. You've misunderstood. You originally made a comparison of underlying-SOV and underlying-SVO analyses that relied on counting the number of rules. My main point is that both analyses throw in one huge bit of complexity— namely, positing an underlying order at all, with the requirement of a move transformation. Tomasello's framework does not require these things.

There's also the issue I raised about computers vs. brains: you can't gauge simplicity by how easy something is to model using a procedural computer. Your intuitions about simplicity are a poor guide to what a distributed neural net can do. This doesn't mean that we're throwing out simplicity; it means we have to use the proper notion of simplicity.

On a deeper level, Occam requires not just simplicity but explaining the known facts. You say that Chomskyan syntax "looks at the facts"; can you tell me what are the facts underlying these claims?

1. that syntactic parameters are encoded in the genome
2. that in producing or interpreting a sentence, a move transformation is made
3. that children learning the language determine and set the parameters for their language

I'll give my own answers:

1. Absolutely no evidence.
2. People have made experiments looking for this, going back to the '70s, and not found anything. (I may not be up to date on this though.)
3. This is contradicted by Tomasello's research. There is not a point where, say, children learn that their language is pro-drop and thereafter get the pronouns right. Children learn pronouns slowly and with great difficulty, and the ones they learn first are the ones they hear the most often.

(1 and 3 don't apply to all of generative linguistics, only to later iterations of Chomsky's particular brand.)

There's a fallback position, of course, that a grammar doesn't have to model human linguistic processing— it just has to have tolerably similar outputs. That's fine; we often use approximate-but-wrong models, such as the idea that the surface of a mirror reflects light. I find transformations to be very useful in a grammatical description, for instance, while being agnostic about whether they are actually part of what the brain does. But note that if it's "only a model", then it should not make false claims about what is innate and what children do.

Of course, if you're only going to model normal adult competence, you have to expect that the model will be worthless in many areas— such as dealing with children or impaired adults. Some of the most interesting work is being done in acquisition studies. And if you're interested in falsification, you should be interested in this area too, as it's where models based on adult competence are most likely to fail and be replaced by something better.

linguofreak
Lebom
Posts: 123
Joined: Mon Jun 13, 2005 10:39 pm
Location: Somewhere

Re: How would you diagram this English sentence?

Post by linguofreak »

zompist wrote:There's a fallback position, of course, that a grammar doesn't have to model human linguistic processing— it just has to have tolerably similar outputs. That's fine; we often use approximate-but-wrong models, such as the idea that the surface of a mirror reflects light. I find transformations to be very useful in a grammatical description, for instance, while being agnostic about whether they are actually part of what the brain does.
Here's a question:

Is "what the brain does" even one thing? Is it possible that there are multiple, partially genetically determined ways that the brain analyzes language? (I don't quite believe that syntactic parameters are encoded in the genome, but there may be genetic influence on how the brain analyzes a given syntactic structure and reconstructs syntactic parameters from it).

Is transformational grammar the result of self-analysis by a certain subset of the population for which transformational grammar actually is a good description of "what the brain does"? Does child language acquisition among future Chomskyan linguists differ from child language acquisition among future MBAs?

merijn
Lebom
Posts: 207
Joined: Thu Dec 21, 2006 10:36 pm
Location: Utrecht Overvecht

Re: How would you diagram this English sentence?

Post by merijn »

zompist wrote:
merijn wrote:
zompist wrote:The problem with syntax, for thirty years, has been that linguists stop arguing about syntax and instead start lecturing each other about philosophy of science. The thing is, Tomasello's Constructing a Language is precisely an attack on Chomskyan linguistics as being uninterested in evidence (there is no evidence from child acquisition that children flip parameter switches rather than learning constructions) and entirely unfalsifiable (whenever the data are difficult you can just make up a new parameter).

That generative linguists don't look at the facts is not true. I am a generative linguist and I look at the facts. The paper we are arguing against, by a generative linguist, looks at the facts. All the generative linguists I know look at the facts, and let the facts drive their argumentation. And that is even true of Chomsky. Go to lingbuzz and you'll see that most articles are about explaining the facts, not about the philosophy of science. I don't know a lot about child language, but I can assure you that generative linguists in that area also look at the facts, and that according to them there is actual factual evidence for a UG. It is precisely the attitude whereby you dismiss decades of work by saying "oh, those generative grammarians are not paying attention to the evidence, so it's OK to ignore them" that is killing linguistics (and I have seen similar claims by generative linguists about functionalists that I find equally disgusting).
If you're so upset with suggestions of unfalsifiability, then maybe you shouldn't lead with that accusation when you hear about a new theory. Without knowing a thing about Tomasello's theory, you immediately assumed it was unfalsifiable.
I reacted against the idea that elegance is wrong. If you throw out the idea of elegance, that is, the use of as few ad hoc mechanisms as possible, you throw out falsifiability. Suppose Tomasello says that, all things being equal, an explanation in which the different types of wh-questions, with wh-constructions corresponding to different positions (what did you see, what do you think I saw, what are you most afraid of), correspond to different, unrelated constructions is to be preferred over an analysis in which there is one generalized construction (say, and I don't know how his framework formalizes it, [wh-word auxiliary [sentence (gap)]]), even though speakers first learn the specific construction and later generalize it to other cases. Then his theory is hard to falsify as a theory of grammar. I don't think that linguists working in construction grammar, which is apparently the type of framework Tomasello is working in, choose the unfalsifiable route; I expect them to prefer the analysis with the generalized construction to the analysis with the specific constructions. You, however, said that one analysis being more elegant (having fewer ad hoc rules) should not be a reason to prefer it to less elegant analyses. That, in my opinion, is a very foolish opinion.
But really, calm down and respond to what I'm saying, not to the imagined attack on all of generative linguistics.
Well, you said that Chomskyan linguistics was not interested in evidence. I can only interpret that as Chomskyan linguists being uninterested in evidence. You also say linguists do not argue about syntax anymore and just lecture each other about the philosophy of science. Again, I can only read that sentence as including me.
You are talking as if someone has tossed out Occam. You've misunderstood. You originally made a comparison of underlying-SOV and underlying-SVO analyses that relied on counting the number of rules. My main point is that both analyses throw in one huge bit of complexity— namely, positing an underlying order at all, with the requirement of a move transformation. Tomasello's framework does not require these things.
First of all, to go back to the original point: the SOV analysis is still simpler than the SVO analysis. I also said that within the framework it is the best analysis, precisely because it is the simplest analysis available within the framework.
Tomasello doesn't need transformations, but I haven't come across any framework in which the analysis of sentences is simple, including those without transformations. To state it in the terminology of construction grammar: most sentences do not correspond to a single construction, but consist of several of them all interacting. And you also have to take the meaning into account. How do you explain scope (the difference between "every man kissed his wife" and "his wife was kissed by every man")? How do you explain the meaning of long-distance wh-movement, where the wh-constituent is not an argument of the main clause? These are all questions that need to be answered, and they require operations that complicate the framework quite a bit. In HPSG, for instance, there are very complicated feature structures, with complicated rules governing them, that deal with most of these grammatical issues. It could be that Tomasello's framework manages to be simpler, but I somehow doubt that it is, if it is capable of explaining all these complicated issues.
There's also the issue I raised about computers vs. brains: you can't gauge simplicity by how easy something is to model using a procedural computer. Your intuitions about simplicity are a poor guide to what a distributed neural net can do. This doesn't mean that we're throwing out simplicity; it means we have to use the proper notion of simplicity.


I don't advocate simplicity because it is a better model of the brain, but because simpler analyses are more falsifiable. I agree that some generative linguists are a bit too concerned with, for instance, minimizing memory, and try to decompose too much; there is a point where saying that different constructions or words are simply different underlyingly, with everything encoded in the lexicon, is the much simpler option compared to a far-fetched unified analysis that requires complicated structures.
On a deeper level, Occam requires not just simplicity but explaining the known facts. You say that Chomskyan syntax "looks at the facts"; can you tell me what are the facts underlying these claims?

1. that syntactic parameters are encoded in the genome
2. that in producing or interpreting a sentence, a move transformation is made
3. that children learning the language determine and set the parameters for their language


I'll give my own answers:

1. Absolutely no evidence.
2. People have made experiments looking for this, going back to the '70s, and not found anything. (I may not be up to date on this though.)
3. This is contradicted by Tomasello's research. There is not a point where, say, children learn that their language is pro-drop and thereafter get the pronouns right. Children learn pronouns slowly and with great difficulty, and the ones they learn first are the ones they hear the most often.
Parameters are a somewhat outdated concept (though you still find linguists using them). They have mostly been reduced to lexical differences, mostly of functional categories. To be honest, I don't know whether there is any evidence on how these are learned, or whether the way they are learned is consistent with what linguists say. WRT movement: maybe there is such a thing as movement in the mind, maybe there isn't. But the idea of movement explains a lot of data. If there isn't movement, there is something else that corresponds to it, and indeed all the frameworks I know have something that corresponds to it.
There's a fallback position, of course, that a grammar doesn't have to model human linguistic processing— it just has to have tolerably similar outputs. That's fine; we often use approximate-but-wrong models, such as the idea that the surface of a mirror reflects light. I find transformations to be very useful in a grammatical description, for instance, while being agnostic about whether they are actually part of what the brain does.
That is not too far away from how I see generative grammar. But I do see it as an approximation of how the mind works, probably an approximation that is not particularly approximate yet, but still an approximation.


Of course, if you're only going to model normal adult competence, you have to expect that the model will be worthless in many areas— such as dealing with children or impaired adults. Some of the most interesting work is being done in acquisition studies.
A model of normal adult competence is not worthless in dealing with children or impaired adults. A complete model should account for normal adult competence, as well as explain how children learn language and other areas. But if your model of acquisition cannot explain normal adult competence, it is not a good model.

hwhatting
Smeric
Smeric
Posts: 2315
Joined: Fri Sep 13, 2002 2:49 am
Location: Bonn, Germany

Re: How would you diagram this English sentence?

Post by hwhatting »

Just to add a few angles:
1) The argument over whether a language is underlyingly SVO or SOV or whatever combination implies that there must be one basic structure from which the others derive. But if you posit pattern-based rules, you can posit different situations and say "in situation X, pattern Y applies" without counting one of the patterns as basic. Is there any syntactic theory that, say, posits several basic patterns that can be filled up by adding elements, but doesn't recognise transformations of the basic patterns?
2) If you want to posit a basic structure, German can be very easily analysed as a) V2 with a topic left of the verb (and the topic slot can be filled by S, O or any other element of the sentence) and b) S left of O, with the main transformations being word-order based question (empty topic slot, i.e. V1) and dependent / embedded clause, including infinitive constructions*), where the verb moves to the final slot. In any case, I've always felt that the basic tree structure NP - VP is not adequate for German; German is more like "the verb vs. everything else":
Basic structure:
T-V-Rest(S/O/X) = indicative sentence; with T = Topic and X = any other element that is not S or V; S, O and X can all occupy the T slot
Word-order-based question:
V-Rest
Dependent structure:
Rest-V

EDIT: The structure for the word-order-based question is also used for imperative constructions.

*) Another possible way to see infinitive constructions is that they are part of the verb and that in German, non-finite parts of the verb are always the rightmost part of R = "Rest":
Ich habe ihn heute gesehen. (T=S - Vf - R(O-X-Vi))
Ich kann ihn heute sehen. (T=S - Vf - R(O-X-Vi))
Ich weiß, dass ich ihn heute gesehen habe. (T=S - Vf - R1(R2(C-S-O-X-Vi)-Vf))
Ich weiß, dass ich ihn heute sehen kann. (T=S - Vf - R1(R2(C-S-O-X-Vi)-Vf))
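These three patterns are regular enough to state mechanically. Below is a toy sketch in Python of the linearizations above, under the footnote's assumption that non-finite verb parts always close the Rest; all names (`linearize`, the clause-type strings) are illustrative, not from any real parsing library.

```python
# Toy sketch of the three basic German clause patterns described above.
# Labels follow the post: T = topic, Vf = finite verb, Vi = non-finite
# verb parts, rest = everything else (S/O/X, plus C in dependent clauses).

def linearize(clause_type, topic=None, vf=None, rest=(), vi=()):
    """Return the surface word order for one of the three patterns."""
    rest = list(rest) + list(vi)          # non-finite verbs end the Rest
    if clause_type == "declarative":      # T-V-Rest
        return [topic, vf] + rest
    if clause_type == "question":         # V-Rest (empty topic slot, V1)
        return [vf] + rest
    if clause_type == "dependent":        # Rest-V (verb-final)
        return rest + [vf]
    raise ValueError(clause_type)

# "Ich habe ihn heute gesehen."
print(linearize("declarative", topic="ich", vf="habe",
                rest=["ihn", "heute"], vi=["gesehen"]))
# "... dass ich ihn heute gesehen habe"
print(linearize("dependent", vf="habe",
                rest=["dass", "ich", "ihn", "heute"], vi=["gesehen"]))
```

Of course, this only restates the patterns; it says nothing about which one a speaker picks, which is where the real grammar lives.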


On the falsifiability debate: as far as I have seen, all structure/transformation-based grammars try to formulate rules and then see whether 1) all sentences produced by these rules are correct and 2) all correct sentences can be explained by these rules. I haven't read Tomasello, but I'd imagine that one could substitute other kinds of falsifiability, e.g. whether a certain order of acquisition can be observed in the field, or testing which kinds of patterns are elicited by certain stimuli, or checking (using, say, brain scans or other ways of measuring) which kinds of patterns are perceived as more and which as less complex by speakers, etc.

Radius Solis
Smeric
Smeric
Posts: 1248
Joined: Tue Mar 30, 2004 5:40 pm
Location: Si'ahl
Contact:

Re: How would you diagram this English sentence?

Post by Radius Solis »

merijn wrote: I reacted against the idea that elegance is wrong. If you throw out the idea of elegance, that is, the use of as few ad hoc mechanisms as possible, you throw out falsifiability.
No, you just make it harder to falsify, not impossible. At worst it means there's more to be checked - it has nothing to do with whether they can be checked.
Suppose Tomasello says that, all things being equal, an explanation in which the different types of wh-questions, with wh-constructions corresponding to different positions (what did you see, what do you think I saw, what are you most afraid of), correspond to different, unrelated constructions is to be preferred over an analysis in which there is one generalized construction (say, and I don't know how his framework formalizes it, [wh-word auxiliary [sentence (gap)]]), even though speakers first learn the specific construction and later generalize it to other cases. Then his theory is hard to falsify as a theory of grammar.
Only because it proposes that something (underlying structure) is absent - at least that's what I gather from reading this thread - and absences are notoriously hard to prove conclusively. But you can weigh the evidence and see what it's most consistent with.

And, is there anything you would accept as disproof of underlying structure?


You, however, said that one analysis being more elegant (having fewer ad hoc rules) should not be a reason to prefer it to less elegant analyses. That, in my opinion, is a very foolish opinion.
Even the layman's statement of Occam's Razor answers this fully: "All else being equal, the simplest explanation tends to be the truest." Right there on the box, it says the product is useful only when all else is equal. When all else isn't equal, the Razor has nothing to operate on, and using it anyway will void your warranty. Fitting the evidence is always to be preferred to elegance, for two reasons: 1. Our goal is to find out the facts, not play with neat-o mental toys; and 2. Elegance is subjective. Two reasonable people can reasonably disagree on which of two theories is most elegant.


WRT movement: maybe there is such a thing as movement in the mind, maybe there isn't. But the idea of movement explains a lot of data. If there isn't movement, there is something else that corresponds to it
This is begging the question: you don't get to assume there is movement or something corresponding to it when that's what's under contention in this debate. It's not a Fact. The Fact is that structures in language appear to have strong relationships to each other. Why they do and how it works is entirely arguable.

merijn
Lebom
Lebom
Posts: 207
Joined: Thu Dec 21, 2006 10:36 pm
Location: Utrecht Overvecht

Re: How would you diagram this English sentence?

Post by merijn »

Radius Solis wrote:
merijn wrote: I reacted against the idea that elegance is wrong. If you throw out the idea of elegance, that is, the use of as few ad hoc mechanisms as possible, you throw out falsifiability.
No, you just make it harder to falsify, not impossible. At worst it means there's more to be checked - it has nothing to do with whether they can be checked.
If your analysis only explains a very particular set of data and has no consequences for other data, it cannot be tested beyond checking that the original data are correct.
Suppose Tomasello says that, all things being equal, an explanation in which the different types of wh-questions, with wh-constructions corresponding to different positions (what did you see, what do you think I saw, what are you most afraid of), correspond to different, unrelated constructions is to be preferred over an analysis in which there is one generalized construction (say, and I don't know how his framework formalizes it, [wh-word auxiliary [sentence (gap)]]), even though speakers first learn the specific construction and later generalize it to other cases. Then his theory is hard to falsify as a theory of grammar.
Only because it proposes that something (underlying structure) is absent - at least that's what I gather from reading this thread - and absences are notoriously hard to prove conclusively. But you can weigh the evidence and see what it's most consistent with.

And, is there anything you would accept as disproof of underlying structure?
I don't think the concept of underlying structure can be disproven (is that a word?), but if there is an alternative that explains more data more simply, then that is the better explanation. We are doing work on Zulu within mainstream generative grammar, using movement; if somebody else can explain the data in another framework in a simpler way, then that framework is the better framework for those particular data. If that framework can explain all data in all languages in a simpler way, then it is the better framework overall.
You, however, said that one analysis being more elegant (having fewer ad hoc rules) should not be a reason to prefer it to less elegant analyses. That, in my opinion, is a very foolish opinion.
Even the layman's statement of Occam's Razor answers this fully: "All else being equal, the simplest explanation tends to be the truest." Right there on the box, it says the product is useful only when all else is equal. When all else isn't equal, the Razor has nothing to operate on, and using it anyway will void your warranty. Fitting the evidence is always to be preferred to elegance, for two reasons: 1. Our goal is to find out the facts, not play with neat-o mental toys; and 2. Elegance is subjective. Two reasonable people can reasonably disagree on which of two theories is most elegant.
Of course evidence is important, but I disagree that it should always be preferred over elegance. If you analyze 10 constructions and you have 10 different analyses, I'd say that a single analysis that explains 9 of the constructions but not the 10th is still to be preferred. Linguistics is a work in progress, so it shouldn't bother you that you can't explain everything. There is always the chance that somebody finds a way to explain that 10th construction. Where you draw the line is to some extent a matter of taste.
WRT movement, maybe there is a thing such as movement in the mind, maybe there isn't. But the idea of movement explains a lot of data. If there isn't movement there is something else that corresponds to it
This is begging the question: you don't get to assume there is movement or something corresponding to it when that's what's under contention in this debate. It's not a Fact. The Fact is that structures in language appear to have strong relationships to each other. Why they do and how it works is entirely arguable.
I don't really see this debate as being primarily about movement. Movement is a useful concept in the analysis of languages. It is so useful that I think there is something *like* movement in the mind, something movement is a metaphor for. The other frameworks I have some knowledge of have something similar to movement in the range of data it explains, so those frameworks have different metaphors for whatever movement is a metaphor of. What I am trying to say is that the absence of movement, exactly as envisioned by generative linguists, in the mind is not an argument against using it in an analysis, and is not even an argument for saying that such an analysis says nothing about how the mind works.

zompist
Boardlord
Boardlord
Posts: 3368
Joined: Thu Sep 12, 2002 8:26 pm
Location: In the den
Contact:

Re: How would you diagram this English sentence?

Post by zompist »

merijn wrote:
zompist wrote:If you're so upset with suggestions of unfalsifiability, then maybe you shouldn't lead with that accusation when you hear about a new theory. Without knowing a thing about Tomasello's theory, you immediately assumed it was unfalsifiable.
I reacted against the idea that elegance is wrong. If you throw out the idea of elegance, that is, the use of as few ad hoc mechanisms as possible, you throw out falsifiability. Suppose Tomasello says that, all things being equal, an explanation in which the different types of wh-questions, with wh-constructions corresponding to different positions (what did you see, what do you think I saw, what are you most afraid of), correspond to different, unrelated constructions is to be preferred over an analysis in which there is one generalized construction (say, and I don't know how his framework formalizes it, [wh-word auxiliary [sentence (gap)]]), even though speakers first learn the specific construction and later generalize it to other cases. Then his theory is hard to falsify as a theory of grammar. I don't think that linguists working in construction grammar, which is apparently the type of framework Tomasello is working in, choose the unfalsifiable route; I expect them to prefer the analysis with the generalized construction to the analysis with the specific constructions. You, however, said that one analysis being more elegant (having fewer ad hoc rules) should not be a reason to prefer it to less elegant analyses. That, in my opinion, is a very foolish opinion.
Data wins out over elegance. The hypothesis that there is an underlying movement transformation suggests a simple prediction: sentences with the base order should be learned first. This prediction is falsified by the evidence.
To state it in the terminology of construction grammar: most sentences do not correspond to a single construction, but consist of several of them all interacting. And you also have to take the meaning into account. How do you explain scope (the difference between "every man kissed his wife" and "his wife was kissed by every man")? How do you explain the meaning of long-distance wh-movement, where the wh-constituent is not an argument of the main clause? These are all questions that need to be answered, and they require operations that complicate the framework quite a bit.
For things like scope you need structure, not movement. And the reference to wh-movement begs the question!

For details on how Tomasello explains syntactic phenomena, you'll have to read him.

zompist
Boardlord
Boardlord
Posts: 3368
Joined: Thu Sep 12, 2002 8:26 pm
Location: In the den
Contact:

Re: How would you diagram this English sentence?

Post by zompist »

linguofreak wrote:
zompist wrote:There's a fallback position, of course, that a grammar doesn't have to model human linguistic processing— it just has to have tolerably similar outputs. That's fine; we often use approximate-but-wrong models, such as the idea that the surface of a mirror reflects light. I find transformations to be very useful in a grammatical description, for instance, while being agnostic about whether they are actually part of what the brain does.
Here's a question:

Is "what the brain does" even one thing? Is it possible that there are multiple, partially genetically determined ways that the brain analyzes language? (I don't quite believe that syntactic parameters are encoded in the genome, but there may be genetic influence on how the brain analyzes a given syntactic structure and reconstructs syntactic parameters from it).
That's an interesting question. I'd suggest that the US makes a good natural experiment: children here all learn English, but have ethnic backgrounds from all over the world. If there were multiple "language organs", wouldn't we see some effect in how certain children learn English? But so far as I know, no one has ever identified such an effect.

Some language disabilities are genetic, but IIRC they all turn out to be cognitive disabilities— that is, they affect the mind in various ways, not just language.

merijn
Lebom
Lebom
Posts: 207
Joined: Thu Dec 21, 2006 10:36 pm
Location: Utrecht Overvecht

Re: How would you diagram this English sentence?

Post by merijn »

zompist wrote:
The hypothesis that there is an underlying movement transformation suggests a simple prediction: sentences with the base order should be learned first.
Does it? I am aware that at some point in the history of generative linguistics this was an idea kicking around, but I don't think it is still believed, and I don't see it as a necessary consequence of the premise.

For things like scope you need structure, not movement. And the reference to wh-movement begs the question!
I was talking about structure, or rather about how every framework is complicated, with or without movement (although movement is sometimes needed in generative grammar to explain certain facts about scope). And I should have said "the meaning of constructions that generative grammar explains with long-distance wh-movement".

EDIT: this will be my last post about this subject for now, I have a busy weekend and a deadline for Sunday.

linguofreak
Lebom
Lebom
Posts: 123
Joined: Mon Jun 13, 2005 10:39 pm
Location: Somewhere
Contact:

Re: How would you diagram this English sentence?

Post by linguofreak »

zompist wrote:
linguofreak wrote:
zompist wrote:There's a fallback position, of course, that a grammar doesn't have to model human linguistic processing— it just has to have tolerably similar outputs. That's fine; we often use approximate-but-wrong models, such as the idea that the surface of a mirror reflects light. I find transformations to be very useful in a grammatical description, for instance, while being agnostic about whether they are actually part of what the brain does.
Here's a question:

Is "what the brain does" even one thing? Is it possible that there are multiple, partially genetically determined ways that the brain analyzes language? (I don't quite believe that syntactic parameters are encoded in the genome, but there may be genetic influence on how the brain analyzes a given syntactic structure and reconstructs syntactic parameters from it).
That's an interesting question. I'd suggest that the US makes a good natural experiment: children here all learn English, but have ethnic backgrounds from all over the world. If there were multiple "language organs", wouldn't we see some effect in how certain children learn English? But so far as I know, no one has ever identified such an effect.
I was thinking more along the lines of characteristics more evenly mixed within populations, e.g., "do people with good spatial reasoning and poor mathematical reasoning learn language differently from people with good mathematical reasoning and poor spatial reasoning?".
Some language disabilities are genetic, but IIRC they all turn out to be cognitive disabilities— that is, they affect the mind in various ways, not just language.
I'd frankly be surprised if a genetic factor that affected language (whether the net effect was a hindrance, improvement, or neutral) didn't affect cognitive abilities.

jal
Sumerul
Sumerul
Posts: 2633
Joined: Tue Feb 06, 2007 12:03 am
Location: Netherlands
Contact:

Re: How would you diagram this English sentence?

Post by jal »

merijn wrote:It is primarily a model for making predictions about language, but it should lead to a model about the mind.
Not necessarily. One can have a perfect model that is not a proper representation of the actual thing. It's emulation vs. simulation, basically.
I fail to see why striving for as few ad hoc explanations as possible is wrong for a scientific model of the mind.
It is not necessarily wrong, but it needn't be right either. Remember that the brain is shaped by evolution, and evolution comes up with a lot of ad hoc (partial) solutions, even if the overall solution doesn't seem to be.
Doesn't science work by starting with the simplest model and then making a model that is only as complicated as needed?
Yes, but here we have two totally different models: the one that represents an accurate model of the behaviour of language, and the one that represents the actual underlying brain. Those needn't match at all.


JAL

merijn
Lebom
Lebom
Posts: 207
Joined: Thu Dec 21, 2006 10:36 pm
Location: Utrecht Overvecht

Re: How would you diagram this English sentence?

Post by merijn »

jal wrote:
merijn wrote:It is primarily a model for making predictions about language, but it should lead to a model about the mind.
Not necessarily. One can have a perfect model that is not a proper representation of the actual thing. It's emulation vs. simulation, basically.
You misunderstood me, I think (because what I said was ambiguous). I didn't mean should as in "it is a necessary consequence" but should as in "it is ultimately the goal to make a model of the mind, even if for now it is primarily a model of language, so psychological realism should be in the back of a generative linguist's mind".
I fail to see why striving for as few ad hoc explanations as possible is wrong for a scientific model of the mind.
It is not necessarily wrong, but it needn't be right either. Remember that the brain is shaped by evolution, and evolution comes up with a lot of ad hoc (partial) solutions, even if the overall solution doesn't seem to be.

It isn't about wrong or right, it is about falsifiability. A model with few ad hoc explanations is more falsifiable than a model with many ad hoc explanations.

Bedelato
Lebom
Lebom
Posts: 193
Joined: Sat Oct 30, 2010 1:13 pm
Location: Another place

Re: How would you diagram this English sentence?

Post by Bedelato »

Matt wrote:That we left should impress Phoebe.
A friend asked me how I would draw a syntax tree for this sentence and I'm stumped. Is that we left behaving as an NP here?
Yes, "that we left" is an NP, at least that's how I parse it: (That (we left)) ((should impress) Phoebe).
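For what it's worth, that bracketing can be written down as a plain nested-tuple tree and checked mechanically. This is just a sketch of one possible labeling (S/NP/VP/V); the labels are my own choices, not any framework's official analysis:

```python
# The bracketing (That (we left)) ((should impress) Phoebe) as nested
# tuples: each node is (label, child, child, ...), each leaf is a word.
tree = ("S",
        ("NP", "That", ("S", ("NP", "we"), ("VP", "left"))),
        ("VP", ("V", "should", "impress"), ("NP", "Phoebe")))

def leaves(node):
    """Collect the terminal words of a tuple-encoded tree, in order."""
    if isinstance(node, str):
        return [node]
    return [w for child in node[1:] for w in leaves(child)]

# Flattening the tree recovers the original sentence.
print(" ".join(leaves(tree)))   # That we left should impress Phoebe
```

The interesting claim is the one the bracketing encodes: the whole clause "That we left" sits in the subject NP slot, just as a simple NP like "the news" would.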
At, casteda dus des ometh coisen at tusta o diédem thum čisbugan. Ai, thiosa če sane búem mos sil, ne?
Also, I broke all your metal ropes and used them to feed the cheeseburgers. Yes, today just keeps getting better, doesn't it?
