
The workshop includes three main types of activities:

a) Invited talks with plenary discussion

b) Brief (15-minute) oral presentations

c) Panel: we will invite some of our key speakers to join a panel to discuss further (1) the implications of their talks, (2) gaps, commonalities, and potential for research to expand in both fields, and (3) the possibility of seeking funding for concrete projects in the future. The AI side will be led by Matty Hoban, while the cognitive side will be led by the organisers. Both leaders will pose parallel questions, each followed by a brief explanation, to the mixed group of discussants to open a brainstorming session that includes audience contributions and narrows down relevant points of focus and forms of interdisciplinary collaboration.

 

Programme

Friday 3rd May

9.00 Registration

10.30 Mark Bishop “Artificial stupidity? A provocation on the things neural networks can and cannot do”

11.30 Anne Schlottmann “Perceptual causality”

12.30 Lunch

13.30 Steven Sloman “Superficial causal reasoning among partisans”

14.30 Short talks

15.45 Refreshments

16.00 Dusko Pavlovic “Causality and complexity: do androids watch action movies?”

17.00 Woo-Kyoung Ahn “Effects of contexts and content in causal reasoning”

18.00 Bob Coecke “Connectedness and causality: from quantum physics to cognition”

 

Saturday 4th May 

9.00 Registration

10.00 David Lagnado “Causality in mind: learning, reasoning and blaming”

11.00 Jonathan Barrett “Causal constraints on rational belief”

12.00 Short talks

13.15 Lunch

14.00 Teresa McCormack “Time and causal cognition in children”

15.00 Ognyan Oreshkov “On the link between the causal and memory arrows of time”

16.00 York Hagmayer “Choosing effective interventions: heuristic approaches and some empirical findings”

17.00 Refreshments

17.30 Panel led by Matty Hoban

Short talks

Friday 3rd May

14.30-14.40 Franco Formicola “K-FCA and partial property ascription in distributional semantics”

 

14.42-14.52 Aaron Sloman “How can a physical universe produce mathematical minds? And why are they so hard to replicate in current AI systems?”

 

14.54-15.04 Neil Bramley “Causal structure learning with continuous variables in continuous time”

 

15.06-15.16 Stephen Dewitt, N. Adler, N. Fenton, & David Lagnado “Categorical propensity updating: a novel form of confirmation bias”

 

15.18-15.28 Alice Liefgreen, Marko Tesic, & David Lagnado “Explaining away: probability interpretations and diagnostic reasoning”

 

15.30-15.40 Reuben Moreton “Improving decision making accuracy and observer well-being with human-machine teams: a case study of indecent image classification in policing”

 

Saturday 4th May

12.00-12.10 David Kinney “Efficient information gathering for agents with a causal model of their environment”

 

12.12-12.22 Agne Alijauskaite “Causal reasoning as a tool for learning? Towards a conscious robot”

 

12.24-12.34 Katja Ried “Causal modelling in artificial agents using projective simulation”

 

12.36-12.46 Nicole Cruz, Ulrike Hahn, Norman Fenton, and David Lagnado “Causal (in)dependence in common-effect structures”

 

12.48-12.58 Dyedra Morrissey & Robin Murphy “The role of control in human time perception”

Abstracts 

 

Steven Sloman “Superficial causal reasoning among partisans”

Although it must be the case that causal reasoning is central to human thought, there is little reason to believe that it is probabilistically coherent. Causal knowledge is also surprisingly superficial. Outside narrow areas of expertise, most people are more ignorant than they think they are. The reason for this illusion of understanding is that we live in a community of knowledge: we depend on others to do much of our thinking for us without knowing that we're doing so. This tendency is strong in the political domain because issues are complex and tribal alliances have formed, so attitudes derive from partisan affiliation more than from consequential reasoning. We also avoid the difficulty of causal reasoning by deploying protected values that simplify but polarize. I discuss implications for how to represent the knowledge that laypeople actually use to reason and make decisions.

Teresa McCormack “Time and causal cognition in children”

This talk will consider the relation between causal and temporal cognition in children. There is some evidence to suggest that children privilege temporal information when making causal inferences, but what is not known is the effect of children's causal beliefs on their temporal judgments. Studies with adults suggest that causal beliefs affect their judgments about the temporal order and duration of events, but developmental studies have suggested that such effects are not observed even in 10-year-old children. I will describe some of our studies demonstrating that children's temporal order and duration judgments are affected by their causal beliefs in exactly the same way as those of adults from as early as 4 to 5 years of age. I will consider the implications of these findings for views of temporal and causal cognition.

Woo-Kyoung Ahn “Effects of contexts and content in causal reasoning”

Many models of causal induction are based on covariation information, which captures whether the presence or absence of one event co-occurs with the presence or absence of another. In this talk, I will examine three sets of such models (rule-based models, the Rescorla-Wagner model, and Bayesian learning models) and present a series of studies that potentially pose challenges to these causal learning models. First, I will show that the order in which evidence is presented during causal learning affects final causal strength estimates, and that these findings cannot be explained by models based on covariation information. Second, I will discuss effects of knowledge about causal mechanisms, and how they pose challenges to the way causal information is represented in Bayesian networks. Third, I will present recent studies on lay beliefs about biological mechanisms for disorders, demonstrating genetic essentialism, whereby genes are believed to be more necessary and sufficient than they actually are in determining our behaviors.
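
As a rough, hypothetical illustration of the covariation-based models the abstract mentions (not code from the talk), the sketch below contrasts a simple rule-based ΔP estimate with trial-by-trial Rescorla-Wagner updating over the same contingency data; the trial data, learning rate, and asymptote are made up for the example.

```python
# Hypothetical sketch: two covariation-based estimates of causal strength
# computed from the same trial-by-trial contingency data.
# Each trial is (cause_present, effect_present), both coded as 0/1.

trials = [(1, 1), (1, 1), (1, 0), (1, 1), (0, 0), (0, 1), (0, 0), (0, 0)]

def delta_p(trials):
    """Rule-based estimate: P(effect | cause) - P(effect | no cause)."""
    with_cause = [e for c, e in trials if c == 1]
    without_cause = [e for c, e in trials if c == 0]
    return sum(with_cause) / len(with_cause) - sum(without_cause) / len(without_cause)

def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """Error-driven updating of associative strengths for the cause cue
    and an always-present background (context) cue."""
    v_cause, v_context = 0.0, 0.0
    for cause, effect in trials:
        prediction = v_context + (v_cause if cause else 0.0)
        error = (lam if effect else 0.0) - prediction
        v_context += alpha * error      # context cue is present on every trial
        if cause:
            v_cause += alpha * error    # cause cue is updated only when present
    return v_cause

print(f"deltaP estimate:          {delta_p(trials):.2f}")
print(f"Rescorla-Wagner estimate: {rescorla_wagner(trials):.2f}")
```

Because the Rescorla-Wagner estimate is updated trial by trial, reordering `trials` changes its final value while leaving ΔP untouched, which is one simple way to see the kind of presentation-order sensitivity the abstract refers to.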

Mark Bishop “Artificial stupidity? A provocation on the things neural networks can and cannot do”

With the set of intellectual challenges where machines routinely surpass the best human performance growing ever more impressive (and now including Go, a game once viewed as presenting particularly difficult computational challenges), ‘Artificial Intelligence’ and ‘Neural Networks’ are rarely out of the news. However, one area where progress has remained relatively slow is the development of so-called ‘Artificial General Intelligence’: a system’s ability to computationally “perform any intellectual task that a human being can”. In part, this is because, if we are not to engineer bespoke solutions to every possible human endeavour, the system must be able to seamlessly deploy appropriate knowledge from one problem domain into another. But in this talk I will argue that neural networks are, a priori, incapable of learning and extrapolating such behaviour, and hence that the terms ‘A.I.’ and ‘A.G.I.’ are fundamentally misplaced; such systems are not so much ‘artificially [generally] intelligent’ as ‘artificially [generally] stupid’.
