
Semantic representation

Wednesday, December 26, 2012, 11:27

Attaching semantics to a parsed sentence is quite a task, but the real challenge is to choose the right type of semantic representation: one that can be used for all types of sentences.

John laughed nervously

A simple sentence consists of a noun and a verb:

John laughed.

This sentence describes an action.

The semantics of this sentence can be expressed like this:

laugh(john)
This is a correct representation of the sentence, but it's not complete, because it doesn't express the fact that the event occurred in the past. Let's fix that:

laughed(john)
This is better, but in a way it's also worse: everything you would want to do with the laugh predications would now need to take into account that there are multiple representations of laugh:

laugh(john), laughed(john), used_to_be_laughing(john), will_laugh(john)
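To see why this is a problem, here is a minimal sketch (hypothetical data, not from the original article): with the tense baked into the predicate name, any code that asks "who laughed, in any tense?" must enumerate every variant.

```python
# Facts with tense baked into the predicate name (the problematic design)
facts = [("laughed", "john"), ("will_laugh", "mary"), ("smiled", "sue")]

# Every query about laughing must list all tensed variants explicitly
LAUGH_VARIANTS = {"laugh", "laughed", "used_to_be_laughing", "will_laugh"}
laughers = [who for pred, who in facts if pred in LAUGH_VARIANTS]
print(laughers)  # ['john', 'mary']
```

Each new tense or aspect would grow this set, and every query in the system would have to be updated.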

And what if you want to say something about the way John laughed? Maybe he laughed nervously?

laughed_nervously(john)

This obviously won't do. What's wrong with it? The event of laughing is not expressed explicitly. Let's make that change [Alshawi-1992]:

∃e1 laugh(e1, john) ∧ past(e1) ∧ nervous(e1)

There is also another possible representation [Jurafsky and Martin-2000]:

∃e1 isa(e1, Laugh) ∧ laugher(e1, john) ∧ past(e1) ∧ nervous(e1)

For my own application I used a modification of the second representation that looks like this:

∃e1 isa(e1, Laugh) ∧ subject(e1, john) ∧ tense(e1, Past) ∧ isa(e1, Nervous)

I don't use laugh(e1, john) because it contains two pieces of information: laugh and john, and this makes semantic attachment more complicated. For the same reason I don't use laugher(e1, john). Using my representation, each semantic attachment contains only a single piece of information. This means that I will not need to access other nodes in the syntax tree to make an attachment. You'll notice that the predicates in my representation are all very generic.
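A sketch of what this representation might look like in code (this is my illustration, not the author's actual implementation): each predication is a single tuple carrying one piece of information, and because the predicates are generic, one query shape answers many questions.

```python
# "John laughed nervously" as single-fact predications about event e1
predications = [
    ("isa", "e1", "Laugh"),
    ("subject", "e1", "john"),
    ("tense", "e1", "Past"),
    ("isa", "e1", "Nervous"),
]

def arguments(predicate, entity):
    """All values asserted for (predicate, entity) pairs."""
    return [v for p, e, v in predications if p == predicate and e == entity]

print(arguments("isa", "e1"))    # ['Laugh', 'Nervous']
print(arguments("tense", "e1"))  # ['Past']
```

Note that one generic lookup function serves every predicate; nothing needs to know about laughing specifically.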

It's very important to make events first-order objects in your representation. Even though it seems awkward at first, sooner or later you get into trouble if you don't. I end this part with the following sentence:

John laughed nervously when he heard the news.
∃e1,e2,o1 isa(e1, Laugh) ∧ subject(e1, john) ∧ tense(e1, Past) ∧ isa(e1, Nervous) 
∧ at(e1, e2) ∧ isa(e2, Hear) ∧ subject(e2, john)
∧ object(e2, o1) ∧ isa(o1, News) ∧ determiner(o1, The)

The sentence contains a relative clause that itself describes an event (he heard the news). To link the laughing and hearing events, both need to be represented explicitly as events.
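The benefit shows up in code as well. In this sketch (again my own illustration of the representation above), the temporal link is just one more predication relating two event variables, so following it needs no special machinery:

```python
# "John laughed nervously when he heard the news" as predications
predications = [
    ("isa", "e1", "Laugh"), ("subject", "e1", "john"),
    ("tense", "e1", "Past"), ("isa", "e1", "Nervous"),
    ("at", "e1", "e2"),                      # the "when ..." link
    ("isa", "e2", "Hear"), ("subject", "e2", "john"),
    ("object", "e2", "o1"),
    ("isa", "o1", "News"), ("determiner", "o1", "The"),
]

# Follow the temporal link from the laughing event to the hearing event
linked = next(v for p, e, v in predications if p == "at" and e == "e1")
linked_type = next(v for p, e, v in predications if p == "isa" and e == linked)
print(linked, linked_type)  # e2 Hear
```

Without an explicit event variable for the relative clause, there would be nothing for the at predicate to point to.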

How old is John?

Other sentences don't describe events, they describe states.

How old is John?

A first guess would be:

old(john)

But where do we leave how? What about this representation?

∃o1,m1 be(john, o1) ∧ old(o1) ∧ manner(o1, m1)

In this representation be is used as a verb, just like laugh in the example above. We introduce the predicate manner to represent how.

The representation may be very tempting, because it allows you to use be like any other verb. It also allows you to use old as a noun.

There are two problems with this representation. The first is that it will horrify both linguists and logicians, because it uses the copula (is) as a main verb. But if that doesn't bother you, you may try and have a go at this sentence:

How old was John when he heard the news?

You run into deep trouble. The relative clause cannot be linked to the main clause.

In order to fix this, let's make the state a first-order object [Alshawi-1992]:

∃s1 old(s1, john)

and we can add how by adding manner:

∃s1,m1 old(s1, john) ∧ manner(s1, m1)

Now this sentence says: there is a state in which John is old, and what we ask for is the manner of this state. The relative clause when he heard the news can then be represented in the same way as in the John laughed example.

States and events can be treated in the same way when linking predications. I myself just call them both events. My own representation is

∃e1,m1 isa(e1, Old) ∧ subject(e1, john) ∧ manner(e1, m1)
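In code form (my sketch, same assumed tuple layout as before), a state gets exactly the same shape as an event, so extracting the questioned variable requires no state-specific case:

```python
# "How old is John?" as predications; m1 is the open variable "how" asks about
question = [
    ("isa", "e1", "Old"),
    ("subject", "e1", "john"),
    ("manner", "e1", "m1"),
]

# The thing asked about is whatever fills the manner slot
asked = [v for p, e, v in question if p == "manner"]
print(asked)  # ['m1']
```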


[Jurafsky and Martin-2000] Daniel Jurafsky and James H. Martin, Speech and Language Processing, 2000.
[Alshawi-1992] Hiyan Alshawi (ed.), The Core Language Engine, 1992.

