BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Date iCal//NONSGML kigkonsult.se iCalcreator 2.20.2//
METHOD:PUBLISH
X-WR-CALNAME;VALUE=TEXT:Eventi DIAG
BEGIN:VTIMEZONE
TZID:Europe/Paris
BEGIN:STANDARD
DTSTART:20151025T030000
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20150329T020000
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
RDATE:20160327T020000
TZNAME:CEST
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:calendar.7127.field_data.0@oba.diag.uniroma1.it
DTSTAMP:20260407T202317Z
CREATED:20150909T104626Z
DESCRIPTION:In this talk I will present work undertaken at Leeds on buildin
 g models of activity from video and other sensors\, using both supervised 
 and unsupervised techniques. The representations exploit qualitative spati
 o-temporal relations to provide symbolic models at a relatively high level
  of abstraction. I will discuss techniques for handling noise in the video
  data and I will also show how objects can be 'functionally categorised' a
 ccording to their spatio-temporal behaviour. Finally I will present very r
 ecent results on learning and grounding language from video-sentence pair
 s.
DTSTART;TZID=Europe/Paris:20150910T173000
DTEND;TZID=Europe/Paris:20150910T173000
LAST-MODIFIED:20150909T160517Z
LOCATION:Room A2
SUMMARY:Learning about activities\, spatial relations and spatial language 
 from video - Prof. Anthony Cohn\, University of Leeds
URL;VALUE=URI:http://oba.diag.uniroma1.it/node/7127
END:VEVENT
END:VCALENDAR
