In recent months, Omer Preminger has been mounting a series of arguments against (too much) semantics in syntactic theorising. His most recent blog post can be found here.
I disagree with this and wrote a brief rebuttal in the comments section, repeated here:
I am completely on the other side in this vendetta against semantics, and in fact I have been meaning to reply to you for a long time. Here's a quick first go, just to see the fur fly.
First of all, in other contexts I often do find myself arguing against formal semanticists and siding with the syntacticians in giving primacy to the nature of syntactic representations (specific to linguistic systems), and to what evidence we have for their internal structure. This is because many semanticists, in my opinion, focus too much on the detailed exposition of truth conditions and too little on how meaning is put together in practice in the symbolic system. Secondly, compositionality is not a holy grail in and of itself, because once you allow yourself higher types, the 'local compositionality' constraint is vacuously satisfiable. It only gets bite if you combine it with a constrained view of polysemy and a theory of what sorts of types and shifters are allowed. This has to be part of the negotiation in which both psychological evidence and linearization are factors.
I think the circularity argument fails because it rests on a false premise. Essentially, you are presupposing your conclusion: that syntax has properties independent of the constraints imposed by the mapping to semantics on the one hand and the mapping to linearizable sound sequences on the other. (I personally think these properties might be vanishingly few.) If, on the other hand, you think, as I do, that there is nothing to syntax (except for the fact of it), then the two pieces of information you need to be precise about are found in how detailed systematic meaning building is tracked by sound sequencing, in memorized vs. generated chunks. We are solving an equation in three unknowns, and it simply will not do to assume some aspect of your conclusion (i.e. that syntax is autonomous and has lots of interesting things in it). You may not agree, but in any case the above position is an internally coherent one, and it has not so far been debunked.
Imagine making the structurally analogous attack on phonology that you have made on semantics. It goes like this: organizing your syntactic representations to be sensitive to the order in which phonemes/morphemes/words are pronounced (which we do when we tacitly assume the LCA) and THEN mapping to phonology and word order makes the latter mapping trivial and circular. (In fact, I think something like this criticism is more pertinent than what you claim about semantics.)
In fact, I would argue that the only reason for hierarchical structure IS the systematicity and generativity of meaning. The only reason for linear order is that phonology has to deal with a serial representation. Syntactic representation is the trick you have to pull off to arrive at a symbolic representation that fulfils both sorts of design need: adequacy to the meaning-generating part of language, and to the necessity for a predictable and trackable externalization.
If you think syntax is essentially about hierarchical structure, then it is so only because of the constraint of semanticity. I am curious to hear what specific evidence you think exists for hierarchical structure in syntactic representations that does not make reference to semantic facts somewhere.
If you base your syntactic theory on string acceptability, you have the kind of theory that is equally applicable to birdsong.
I think that is a bad result. Recursion and hierarchical structure are not enough: as for the crucial design feature of language, the one that makes it so unique and cognitively important, I would have to add semanticity (cf. Hockett). More precisely, the hybrid notion: recursive semanticity.