
John Bishop
Trusting Others, Trusting in God, Trusting the World

Is religious faith in God analogous to interpersonal trust? Is the analogy strong enough to show that faith in God can be reasonable and virtuous in much the same way that trust in another person can be reasonable and virtuous? I shall draw attention to significant disanalogies between trust in other people and trust in God, and argue that, though understanding the content of faith in God – and of its ‘secular correlate’, trust ‘in the world’ – may depend on this analogy, there are limitations to its successful use in a justificatory way.

Matt Boyd
A method for prioritizing intervention following root cause analysis

Root cause analysis (RCA) is used widely to investigate adverse events in healthcare and is mandated by many organizations and governments. RCA employs a combination of techniques to establish the factors contributing to the harmful outcome. Once these factors are identified, interventions are usually designed to prevent further harm from occurring.
Prior to deciding which intervention(s) to implement, we must make judgments of causal importance in contexts of multiple, interacting conditions. Clearly we must take action to prevent adverse events, but we need not take action against every contributing cause.
In this paper I show that many factors can be identified by RCA, but that current approaches do not adequately distinguish among these causes. I argue that this is because the literature on RCA (with few exceptions) has largely neglected the literature on the philosophy of causation. I demonstrate that there are important lessons we can import from the causation literature, particularly from the philosophy of biology, and that these tools have the potential to enhance the effectiveness of RCA and streamline intervention.

Stewart Braun
Australian Catholic University
Is Moderate Luck Egalitarianism an Oxymoron?

Luck egalitarianism faces the persistent criticism that it treats harshly those who are responsible for making poor or risky choices. In response, luck egalitarians such as Kok-Chor Tan and Shlomi Segall have developed moderate accounts of luck egalitarianism that respectively restrain the scope and the strength of the responsibility principle on which luck egalitarianism relies. I argue that these moderate versions of luck egalitarianism fail to overcome a variant of the harshness objection in which the imprudent person is not reduced to dire straits, but in which, from an egalitarian standpoint, we still have reason to assist. Consequently, moderate luck egalitarianism faces a dilemma: either it must accept harsh treatment of the imprudent, or it must severely weaken the responsibility principle that distinguishes it from other accounts. Given this dilemma, I conclude that moderate luck egalitarianism is oxymoronic, since luck egalitarianism cannot be successfully moderated.

Campbell Brown
Maximalism and the Structure of Acts

Some acts may be done by doing others. You can drink whisky by drinking Lagavulin, for example. A maximal act is one that can be done in only one way. Maximalism is a moral principle that gives a privileged role to maximal acts: an act is permissible, on this principle, just in case it can be done by doing some permissible maximal act. The main motivation for this theory is to solve the so-called 'problem of act versions'. In this paper, I defend maximalism against a counterexample proposed by Krister Bykvist. I argue that maximalism is plausible so long as the acts available to an agent have a certain structure, namely, that of a complete Boolean algebra. In Bykvist's counterexample, the acts are not so structured. However, for this reason, the example appears incoherent. When redescribed so as to eliminate this incoherency, it ceases to be problematic for maximalism.

Nicolas Bullot
Book Session on The Artful Species

In this session commentaries on Stephen Davies' The Artful Species: Aesthetics, Art, and Evolution (Oxford University Press 2012) will be presented by Justine Kingsbury and Nicolas Bullot, followed by a reply from Stephen Davies.

Steven Burik
SIM University
Logos and Dao revisited

The notion of 'logos' has been a long-time favourite amongst comparative scholars looking for a term to translate 'dao', and plenty of scholars have looked into possible connections between the two notions. After all, there are, at least on the surface, similarities between them: both can mean 'speaking', 'discourse' and 'language', and both refer to patterns in the world. I will argue first that the approach of many of the scholars who have compared logos and dao has been one-sided, consisting mostly of comparisons that portray both notions as denoting some kind of metaphysical principle underlying the processes that make up our world. When another perspective is employed, however, logos and dao can fruitfully be compared on a different level. I will provide an alternative to this metaphysical approach to logos, and consequently to dao, using Heidegger's interpretations of logos through his rereading of Heraclitus. Instead of the usual metaphysical approaches there is a different, non-metaphysical way to compare the two notions: an interpretation on which logos and dao are not seen as principles that stand above and in some way govern the world as we know it. I will show that such an interpretation is much closer to both Heidegger and Daoism, and that, consequently, comparative philosophy needs to beware of imposing metaphysical claims on culturally different ways of thought.

Wendy Carlton
Remembering One’s Past and the Problem of Appropriation

One of the most striking puzzles posed by the diachronic paradox of change and sameness in the life of a continuing self is the problem of self-recognition and integration across time. Traits, beliefs, values, capacities and social circumstances change over time, and not all past experiences accessed in autobiographical memory are readily incorporated into a functionally coherent self-narrative. What makes a memory identity-preserving? In this paper, I present and critique Marya Schechtman’s concept of “empathic access”. Schechtman sets out to describe the conditions necessary for an identity-preserving relation between a present and past person.
Three philosophical influences pervade her theory: the various personal identity theorists against whom Schechtman attempts to position her theory, a strong reading of Harry Frankfurt’s concept of identification, and an adherence to Richard Wollheim’s account of centred and acentred memory. I argue that these influences have not lent appropriate conceptual frameworks to the task at hand and that, ultimately, Schechtman succumbs to an atomistic view of the psychological life of persons.
Locating problematic aspects of her account will provide signposts for a more coherent account of our past-directed reflection on, and appropriation of, autobiographical memories. I conclude that a different framing of the relation between self and autobiographical memory is required. The language of identity needs to be replaced by that of continuity, and any articulation of the relation between the self and the past needs to capture the agential aspects of autobiographical remembering. These are endowed by the narrative structure of our past-directed self-reflection.

Ünsal Çimen
Mathematics and Matter Theory in Bacon’s Natural Philosophy

Francis Bacon is usually seen as the father of modern science. So it is surprising to present-day philosophers of science that he assigned mathematics a subsidiary role in his scientific methodology. I argue that Bacon’s views on mathematics can be better understood in light of his theory of matter. The operative distinction was between active and passive matter. According to Bacon, most of the pre-Socratic philosophers believed in active matter. This belief, he argued, led them to fruitful investigation of nature. In contrast, he thought that those who believed in passive matter, such as Plato and Aristotle, had lost themselves in ‘discourse and disputation’. I show how this account of matter led Bacon to offer a new method, in which experiment had a primary role, and mathematics only had a secondary role. Therefore I conclude that the role of mathematics in Bacon’s experimental philosophy is not so surprising after all.

Hannah Clark-Younger
Imperatives and Entailment

As first pointed out by Jorgen Jorgensen (1937) and more recently discussed by Peter Vranas (2008, 2011, 2012), sentences in the imperative mood pose a problem for standard ways of defining entailment (as a relation between propositions), because they are not truth-apt. This paper follows the work of Brian Chellas (1971) and Josh Parsons (2013), who have proposed formal systems of imperatives that introduce an imperative modal operator. Chellas' theory is based on the modal logic KD45, but it makes imperatives truth-apt. Parsons' theory gives an interpretation of the modal logic KDDc4 where imperatives are not truth-apt, but we argue that it is not adequate; it sometimes predicts entailment where there is none. We correct Parsons' preposcription semantics and prove that the resulting logic is Chellas' KD45, but this time with an interpretation that does not make imperatives truth-apt.

Max Cresswell and Adriane Rini
Victoria and Massey
Tractarian Truthmaking

In recent years the view that truths need 'truthmakers' has become popular. Although not usually described in this way, it is not hard to see Wittgenstein's Tractatus in these terms. The aim of the paper is to examine the Tractatus with an eye to what it takes for a metaphysics to produce a theory of truthmaking. This is important for Wittgenstein's account of the nature of necessity, and for its influence on Carnap and the logical positivists.

Anita Cubrinovska
Purging Aristotle's Poetics of tragic catharsis

Ever since Aristotle's Poetics emerged from oblivion during the Renaissance, it has had an enormous influence on art, art critics and aesthetics. Its formal definition of tragedy profoundly influenced the modern conception of theatre and the dramatic arts, but at the same time triggered endless debates with its obscure notion of tragic catharsis. Discussions of what Aristotle meant when he wrote that tragedy should bring about the "purgation of emotions of pity and fear" have taken on biblical dimensions without approaching any acceptable resolution. I will present a theory of the ancient Greek text, developed by the classicist Petrusevski, which interprets the term "catharsis of emotions" as an unfortunate mistake of a copyist, and thus enables an elegant exposition of Aristotle's theory of tragedy without the notion of catharsis.

Garrett Cullity
Reasons and Fit

According to the “fitting response” tradition of thinking about value, good things are those that are fit for favourable responses and bad things those that are fit for unfavourable ones. We can also have reasons to make favourable or unfavourable responses. What, then, is the relation between reasons and fit? One answer is: identity. But that answer faces several problems, including the “wrong kind of reasons” problem. We can avoid those problems by distinguishing the fitness- and reasons-relations. But it seems unsatisfactory simply to treat these as two separate primitives. This paper explores the prospects for treating fitness-relations as more primitive than normative reasons, and explaining the latter in terms of the former.

Paul Daniels and Dana Goswick
Monash and Melbourne
Events & Mereology: the Life of the Party

Mereological Essentialism for ordinary objects is widely thought to be false. Most, that is, think that ordinary objects can at least survive minimal change of parts – the car survives the replacement of a tire, the tree survives the loss of leaves, and so on. The status of mereological essentialism with respect to events is far less clear. Do events have parts? If so, to what extent can an event survive the replacement, or loss of, some of its parts? We argue that the standard arguments against mereological essentialism for objects do not carry over to mereological essentialism for events. We further examine whether there are satisfying arguments against mereological essentialism which apply solely to events and conclude there are none. Ultimately we show that, far from being problematic, mereological essentialism for events is intuitive and explanatorily useful.

Stephen Davies
Book Session on The Artful Species

In this session commentaries on Stephen Davies' The Artful Species: Aesthetics, Art, and Evolution (Oxford University Press 2012) will be presented by Justine Kingsbury and Nicolas Bullot, followed by a reply from Stephen Davies.

Gregory Dawes
Perception and Reasons: A Defence of Psychologism

Following Karl Popper, a number of philosophers have rejected ‘psychologism’: the view that statements can be justified not only by other statements, but also by perceptual experiences. They argue that while perceptual experiences can cause an act of believing, they cannot be reasons for what is believed. But this overlooks the fact that perceptual experiences have content: they present the world in a particular way. The way in which they present the world is the reason for belief. It follows that if a perceptual experience causes an act of believing, it does so precisely by offering a reason for what is believed.

Lina Eriksson
Contractualism and aggregation: why The Bundle solution does not work

Scanlon's version of Contractualism forbids aggregation of different people's claims. But this gives rise to several types of problems, in particular with policies that benefit everyone ex ante but subject individuals to some small risk of harm, so that ex post not everybody will have benefited from the policy. One solution suggested in the literature is to think of the benefits of policies as bundles: you might not benefit from this particular policy, but you benefit, overall, from a bundle of policies of which this particular one is a part. This paper discusses why this potential solution is problematic.

Marinus Ferreira
A Descriptive Theory of Virtue and Vice Terms

A common objection to the importance of the virtue and vice terms (v-terms) as an evaluative framework is that they are vague and culturally variable. Here I propose a descriptive framework for the content of v-terms which fixes their content for any society and shows why the extent to which they vary between societies is limited and innocuous. My starting point is the 'natural role' for the virtues as variously articulated by Philippa Foot, Rosalind Hursthouse, Julia Annas and others: the virtues are correctives for common human difficulties; the virtues are the traits an individual needs to flourish; and so on. These natural roles underdetermine what we should do, with the effect that there are multiple equally suitable but mutually exclusive schemes of virtues available that fulfil these roles. The variation of these schemes between societies comes from their implementing different candidate schemes, but the natural roles limit what could count as an adequate scheme. These schemes have two domains of application: v-terms as applied to acts (v-acts) and as applied to character (v-traits). Using the technique introduced by David Lewis in 'Psychophysical and theoretical identification', we can link the two domains: a v-trait is the disposition of character that makes an individual spontaneously perform the respective v-act. I end with a discussion of how to use this framework for alternative theories of the roles of the virtues, like the motive-based accounts of Linda Zagzebski and Michael Slote, and how it may be extended to other issues in meta-ethics.

Bronwyn Finnigan
Knowing-how and the practical mode of presenting a proposition

Stanley and Williamson (2001) challenge Ryle’s view that knowing how can be defined in terms of knowing that. They argue that Ryle inadequately establishes his view and they provide an alternative account, according to which ascriptions of know-how are always ascriptions of propositional knowledge. A condition for the latter is that the relevant propositional knowledge be entertained under a ‘practical mode of presentation’. In this paper, I will critically engage this notion. Ryle’s proper target, I shall argue, concerns whether the exercise of know-how in intelligent actions can be sufficiently analysed in terms of knowing-that. Stanley and Williamson's notion of a ‘practical mode of presentation’ is, however, intended to denote the fact of a causal relation between the relevant propositional knowledge and the actions in which it is instantiated. While this may well provide an acceptable semantics of know-how ascriptions, I will challenge the idea that their assumed notion of propositional content can explain intelligent action. I will conclude by raising some general issues about the intersection of semantic theories of meaning with action theoretic philosophies of mind.

Benjamin Fraser and Kim Sterelny
Evolution and Moral Realism

Humans are moralising and moralised apes. This difference between us and our relatives has received much attention in the evolutionary literature. Evolutionary accounts of morality have often been recruited in support of error theory. In this paper, we have three main aims: (i) to locate evolutionary error theory within the broader framework of the relationship between folk conceptions (or folk theories) of a domain and our best scientific conception of that same domain; (ii) to argue, within that broader framework, that error theory and vindication (or reduction and elimination) are two ends of a (probably complex) continuum, and that in the light of our best science many folk conceptual structures are neither hopelessly wrong nor vindicated; and (iii) to show that, while there is no full vindication, or seamless naturalistic reduction, of normative facts to obviously mundane natural facts, one important strand in the evolutionary history of moral thinking supports reductive naturalism. Moral facts are facts about cooperation, and about the conditions and practices that support or undermine it. In making this case, we first respond to the error-theoretic argument that moral facts are explanatorily redundant, then make a case that true moral beliefs are a "fuel for success": they are a map by which we steer, flexibly, in a variety of social interactions. The vindication, though, is at most partial. Moral cognition is a complex mosaic, with a complex genealogy, and only one thread in that genealogy is selection for truth-tracking.

Michael Gilchrist
Amie Thomasson and Deflationary Metaphysics: Answering the Unanswerable

Since Carnap and Quine set out their opposing positions on the nature of ontological commitment in the 1950s, those seeking to defend Carnap’s deflationary approach have drawn heavily on the distinction he made between ‘internal’ and ‘external’ existence questions. Amie Thomasson continues this tradition, taking common sense or ordinary language discourse as the paradigm of a Carnapian linguistic framework. The ‘application conditions’ associated with the referring terms of this framework furnish criteria for the existence and identity of their referents, argues Thomasson, making existence questions asked within the framework easy to answer. External questions, on the other hand, use terms like ‘object’ or ‘thing’ without any associated criteria. That makes such questions ‘unanswerable’.
Thomasson’s defence of the existence of ordinary objects has received a sympathetic hearing in many quarters. Jonathan Schaffer is a case in point. He does, however, challenge her deflationary meta-ontology, claiming that it is in tension with her straightforwardly realist ontology. Far from deflating ontological debates, Schaffer argues, Thomasson answers the very questions she deems unanswerable. In this seminar I will give a brief account of Thomasson’s deflationary approach and consider Schaffer’s challenge to it. I argue that while Thomasson has responses she can make to some of Schaffer’s arguments, there are further reasons to think that she does, at least partially, undermine her own meta-ontology.

Patrick Girard
Belief revision and the limit assumption: tension between static belief and belief dynamics

The limit assumption in doxastic logic says, roughly, that plausibility orders are well-founded. If you consider beliefs only as static, the assumption is philosophically implausible. However, when you do belief change, it becomes crucial for many doxastic operations: without it, you can't be sure that revising a belief set returns a belief set. Which considerations are more important, static or dynamic?

Rod Girle
Proof and Dialogue in Aristotle

Jan Łukasiewicz wrote his volume on Aristotle's logic in the mid-twentieth century. He was critical of the orthodox interpretation of Aristotle and drew a distinction between Aristotelian syllogistic and traditional syllogistic. Łukasiewicz proposed the radical notion that Aristotle's syllogistic is an axiomatic proof system. This paper offers an alternative explanation of the difference between Aristotle's syllogistic and the traditional syllogistic, based on formal dialogue.

Juan Manuel Gomez Paris
Introspection: the possibility of an experimental moral philosophy in the Scottish Enlightenment

During the seventeenth and eighteenth centuries a plurality of projects concerned with the knowledge of human nature appeared. Those carried out by British and French philosophers in particular are commonly grouped under the label 'science of man' and have as a salient feature the 'scientific' investigation of human nature and the faculties of the mind. Though this is a widely accepted claim, there is no detailed explanation of the way in which the 'science of man' projects count as scientific. I argue in this paper that it was the confidence in the process of introspection that allowed British moral philosophers to claim that they were constructing a science of morals. I show that even though there are a wide variety of projects within the British 'science of man' they are all unified by a commitment to the methodology of early modern experimental philosophy. This meant that moral philosophers accepted introspection or self-knowledge as a reliable epistemic process.

Dana Goswick
Realism and Independence

A distinction can be drawn between those who follow Michael Devitt in seeing Realism as primarily an ontological thesis and those who follow Michael Dummett in seeing Realism as primarily concerning the status of linguistic discourse. My objective in this paper is to clarify how Realism should be understood within the ontological (i.e. Devittian) camp. Realism is standardly understood as involving an existence clause and an independence clause. I present several counterexamples to the independence clause and argue that, far from being constitutive of Realism, independence is actually orthogonal to Realism.

Dana Goswick and Paul Daniels
Melbourne and Monash
Events & Mereology: the Life of the Party

Mereological Essentialism for ordinary objects is widely thought to be false. Most, that is, think that ordinary objects can at least survive minimal change of parts – the car survives the replacement of a tire, the tree survives the loss of leaves, and so on. The status of mereological essentialism with respect to events is far less clear. Do events have parts? If so, to what extent can an event survive the replacement, or loss of, some of its parts? We argue that the standard arguments against mereological essentialism for objects do not carry over to mereological essentialism for events. We further examine whether there are satisfying arguments against mereological essentialism which apply solely to events and conclude there are none. Ultimately we show that, far from being problematic, mereological essentialism for events is intuitive and explanatorily useful.

Preston Greene
Nanyang Technological University
Why we are Probably Not Living in a Computer Simulation

Nick Bostrom's simulation argument shows that if we believe that civilizations like ours tend to eventually run many simulations of their past history, then we should be nearly certain that we are currently living in such a simulation. Bostrom discusses two reasons why civilizations like ours might not tend to run simulations---neither of which is fully compelling---i) that they tend to become extinct before acquiring the required technology, and ii) that they tend to decide against simulation because they find it morally reprehensible or uninteresting. In this paper, I develop a more compelling reason to think that advanced civilizations tend not to run simulations: viz., that deciding to create simulations of the sort required by the simulation argument is irrational (on the basis of self-interest), and the inhabitants of advanced civilizations are likely to be rational. Thus, reflection on rational decision making shows us that we are probably not living in a computer simulation. Even so, I end by warning that newly-designed experimental research aimed at determining whether our universe is a simulation is more dangerous than has been realized, and the scientific community should consider discontinuing it.

Dominic Griffiths
Against Heidegger: on authenticity, emancipation, and work

Heidegger’s self-described project in Being and Time, the ‘question concerning the meaning of Being’, is an attempt to offer an account of the primordial ontological structure of human existence. Two central, intertwined factors that emerge from this account are the issues of authenticity and temporality, and how our orientation to the former shapes our relationship to the latter. What Heidegger neglects in Being and Time, I will argue, is the centrality of work in influencing our potential for an authentic existence and thus our relationship with time. Given that almost all the things that surround us have an equipmental value for which we are the reason, and that much of human existence is taken up with work, it is strange that this feature of human existence, so vital in defining who and what we are, and for producing the world, is given such little explicit attention by Heidegger. Two of his great students, Hannah Arendt and Herbert Marcuse, have, in different ways, engaged critically with this issue and offer rich accounts of labour and work, and their effects on the human condition. Towering behind them is, of course, Karl Marx who is explicit in arguing for the emancipation of existence through work. This paper will engage with these issues and these thinkers contra Heidegger.

Marco Grix
Cause, Complicity, Character: The Moral Responsibility of Consumers

The discussion of individual responsibility in the consumer ethics literature has focussed on causal and complicity accounts. In Consuming choices: Ethics in a global consumer age, David Schwartz (2010) considers the Individual Difference Principle (a causal approach) and Christopher Kutz’s Complicity Principle. He finds the former wanting and proposes that, in the context of consumer responsibility, we use the latter instead. I am not convinced by either account. I will suggest that often the problem is not merely what we do as consumers, but rather what kind of consumers we are. Thus, I will put forward a character-centred account of moral responsibility. I will also sketch what kinds of consumer responsibilities that leaves us with.

Kelly Hamilton
Towards an account of collective emotions

The notion of collective emotions is often dismissed, as there is a strong intuition that emotions are the kind of phenomena that only individuals can experience. But in our everyday talk, we say things like, “we [sports fans] are outraged at the referee’s decision”. How can we explain these statements?
One approach is to say that emotions can be attributed to groups, insofar as that emotion would explain the behaviour of that group. This approach would be a functionalist rather than a phenomenological approach. Bryce Huebner, for example, argues that individual members’ behaviours are processes or representations that together form a collective representation. This collective representation is the collective emotion, and it explains the group’s behaviour.
A more interesting approach is to start from the group members’ reports that they – meaning the group – feel an emotion. In saying “we feel happy”, the individual is indicating that her emotion is something that she holds with other people. Margaret Gilbert explains this by arguing that individuals can form a joint commitment to hold an emotion together. Her account, however, regards emotions as cognitive rather than phenomenological. Hans Bernhard Schmid offers a phenomenological account of shared subjectivity, suggesting that the “I” becomes a “we” when individuals regard themselves as members of the group when feeling the emotion. It is this account I plan to develop further. I will argue for a plural-subject account of collective emotions, showing that an emotion can be experienced by many individuals together.

Richard Hamilton
Notre Dame, Australia
Shakespearean Ontogeny

It is largely accepted that the metaphor of the genetic code is dead, outside some very specific roles, e.g. Godfrey-Smith (2000). However, it is also widely accepted that biology cannot proceed without models and analogies. In this paper, I propose a new model: the developmental process is rather like the construction and performance of a Shakespeare play. Firstly, there is no definitive script, only reconstructions of the original performance notes, corrected by tradition. Secondly, each performance is a re-enactment, often in radically different circumstances, and yet the play remains recognisably the same play, so that Kurosawa's Ran is still in some very real sense King Lear. The model and some of its limitations will be explored.

Hinne Hettema
Reduction Rehabilitated: on connecting Chemical and Physical Theories

I argue that there are good reasons to construct the connection between theories of chemistry and theories of physics in terms of a Nagelian reductionist model. As is well-known, the Nagelian model is based on a notion of connectibility and derivability. Current criticism of the notion of reduction in chemistry limits the scope of the paraphrase of connectibility and derivability to identities cum strict derivation. I will argue that this scope is too limited, both from the viewpoint of the theories that we need to connect, and from the viewpoint of Nagelian intent.
Under my reconstruction of reduction, reduction is a paraphrase of explanation, using a formal model of explanation and inter-theory connection.
In this paper I explore a structuralist characterisation of the logic of belief revision as the machinery providing the right sort of paraphrase to characterise the relation. The consequence of this reconstruction is that we may characterise the relation as one of Nagelian reduction, where the machinery of the reduced theory is indeed the consequence of the reducing theory augmented with the right sort of bridge principles. This also implies that a pluralist model of science is to a significant extent compatible with Nagelian reduction.

David Hunter
The virtue of seeking informed consent in research ethics: Or how I learned to stop worrying about the empirical evidence and to love informed consent.

Our present informed consent processes in research, and their justifications, have been significantly critiqued in the last ten years as overly demanding, as preventing valuable research, as based on an inappropriately strong account of autonomy, and as empirically unachievable. Supposing that we accept these criticisms, we are left with a puzzle: how should we alter our informed consent processes in light of them? In this paper I will argue that we should alter them less than one might think, given a new argument for informed consent practices of the sort we currently use. This argument is based on the notion that present practices contribute to the moral development of researchers and to the development of the right sort of reactive attitudes towards research participants. This is supported by a discussion of historic abuses in research, and the attitudes these abuses betray.

Rosalind Hursthouse
The Felt Demand of Rightness

The latest claim about what virtue ethics can’t do is that it can’t give an account of the ‘felt Demand of moral Rightness’. This paper (of course) argues that, insofar as ‘the felt demand of moral rightness’ is a coherent notion, virtue ethics can give an excellent account of it. The idea that it can’t, and the incoherent aspects of the notion, arise from a mistake identified right back in Anscombe’s ‘Modern Moral Philosophy’, namely that the terms ‘should’, ‘ought’ and ‘is right to’ are being equated, in moral contexts, with ‘is obliged’ or ‘is bound’. A look at the etymology of these terms reveals that, originally, ‘should’, ’ought’, ’owe’, ’oblige/obligation’, and ’duty’ were all related to being bound or constrained (and hence, in moral contexts, perfectly suited to being grounded in divine law) but that ‘right’ was always the odd man out and related to truth and correctness. This suggests that a coherent notion of ‘the felt demand’ may simply be that it is the felt demand of reason. This may be experienced in a variety of contexts, but, in moral contexts, it is only the virtuous agent who can be relied on to feel it when appropriate.

Katrina Hutchison
Free Will and the Moral Community

The free will debate has traditionally been a debate about the compatibility of free will and determinism. Recently, however, some philosophers have shifted the focus to moral responsibility rather than free will. They argue that moral responsibility is compatible with determinism, even if free will is not. The term ‘semi-compatibilism’ is sometimes used to refer to this position. Semi-compatibilists usually remain mute about residual metaphysical questions about the possibility of free will, but their side-stepping manoeuvre might nevertheless be regarded as a concession to incompatibilism. It apparently concedes that while certain problems in ethics or moral psychology can be resolved whether or not determinism is true, the hard metaphysical problem of free will remains. In this paper I argue that while the notion of semi-compatibilism is useful for bracketing off certain kinds of questions while developing a nuanced account of moral responsibility, the apparent concession to incompatibilism is unsatisfactory. In my view, “semi-compatibilist” insights – such as P. F. Strawson’s insight about the role of moral community in moral responsibility attributions – can inform a fully-fledged compatibilist response to the metaphysical problem.

Liz Irvine
Skill learning, play, and the evolution of language

Two of the main hypotheses about how language evolved concern the role of technological advances in human culture, and the nature of cultural transmission. Combined, these lead to the claim that the basic representational ability that underpins language stems in part from the cultural transmission of complex technical skills, such as manufacturing hand axes (e.g. Sterelny 2012). Here, explicit teaching of complex skills requires representing action sequences (mental templates) and breaking action sequences down into parts, which could contribute to the generation of (eventually) structured symbolic linguistic representations. However, while work in child development often assumes that explicit teaching is common among humans, it is in fact fairly uncommon among contemporary hunter-gatherer societies. Here, other types of social learning take the lead, including play. It will be argued that the inclusion of different types of play in an account of cultural transmission of physical and social skills lends more plausibility to such accounts of language evolution, and provides a further powerful way of scaffolding the evolution of representational capacities. This suggests a co-evolutionary hypothesis concerning physical and social skills, play, and language, tracking an ever more complex set of representational capacities.

Jeremy Johnson
Motivational Internalism and Aliefs

In this talk I look at Tamar Szabo Gendler’s distinction between belief and alief, and Uriah Kriegel’s attempt to use this distinction to argue for a hybrid internalist-externalist position in the debate over the relationship between moral judgment and motivation. It will be argued that the distinction somewhat oversimplifies the complexities of psychology in relation to the issue of moral motivation, and fails to settle the debate in the externalist’s favour.

Jon Keyzer
Meaning and Intending

In “Why Meaning Intentions are Degenerate” (2012), Bilgrami argues that meaning intentions fail to be normative in any interesting sense. I situate this argument with respect to the skeptical challenge posed by Kripke’s Wittgenstein (1982) and clarify the notion of an intentional state with a focus on Bilgrami’s “Intentionality and Norms” (2004). It is perplexing that Bilgrami (2004) defines intentional states as normative states, yet does not grant such normativity to meaning intentions in Bilgrami (2012). I propose that meaning intentions might still be normative on Bilgrami’s terms and consider the consequences for meaning in general.

Anton Killin
Musicality in Human Evolution

A diverse research cluster has emerged, comprising theorists from numerous disciplines including philosophy, anthropology, the cognitive sciences, evolutionary biology, evolutionary psychology, and the arts, who focus on understanding aspects of human cumulative culture through the lens of evolutionary theory. This research cluster provides an excellent opportunity to reflect on the evolutionary nature of music. This paper distinguishes musicality from music, and discusses recent attempts to account for their evolution. I argue that dynamic, complex co-evolution undermines attempts to cash out musicality using the standard set of distinctions (adaptation, by-product, etc.).

Jonathan King
European Graduate School
Against Meillassoux: materialism and the concept of chance

The work of French philosopher Quentin Meillassoux has achieved a degree of notoriety in certain English speaking academic circles recently. His work has been integral to the emergence of a number of philosophical trends, most notably 'speculative realism' and 'object oriented ontology'. In this paper I will address Meillassoux's deployment of the concept of chance in After Finitude and The Number and the Siren, and show, through readings of Hegel and Alain Badiou, how he misapprehends the concept. Against Meillassoux I will argue for a materialist conception of chance which renders inoperative his notion of 'absolute contingency' and calls into question the ground upon which the aforementioned trends are built.

Justine Kingsbury
Can People Think Critically About Their Deeply Held Beliefs?

Ideally, one might think, people should subject their own beliefs to impartial scrutiny, weighing up the evidence for and against them and modifying them as the evidence demands. However, even those who are good at critical thinking in general tend to fall short of this ideal when it comes to examining their own deeply held beliefs. (What these beliefs are will of course vary between individuals, but religious beliefs and some moral beliefs are candidates.) This paper considers what it is for a belief to be deeply held, and what the obstacles are to critically evaluating such beliefs.

Justine Kingsbury
Book Session on The Artful Species

In this session commentaries on Stephen Davies' The Artful Species: Aesthetics, Art, and Evolution (Oxford University Press 2012) will be presented by Justine Kingsbury and Nicolas Bullot, followed by a reply from Stephen Davies.

Naoaki Kitamura
Keio University
Resolving the Paradoxes of Grounding

A non-causal, constitutive relation of determination called grounding has become a major concern in studies of metaphysics. Much work has been undertaken on the nature of grounding and its theoretical utility. Recently, K. Fine has claimed that a common logical principle of grounding (together with standard logical assumptions) is inconsistent with some other apparently plausible principles of grounding. This paper explores how these paradoxes can be resolved by examining a set of ground-theoretical principles that Fine does not call into question. The basis of my argument is a deflationary account of these ground-theoretical principles: I argue that principles involving factive constructions, such as “The proposition that p is true” and “There exists the fact that p,” express merely conceptual explanations, and are hence metaphysically non-substantive. From this, I argue that the paradoxes in question arise as a result of confusing two different kinds of non-causal explanations, that is, properly metaphysical and merely conceptual explanations.
Focusing on a particular simple form of paradox, I will begin by criticizing some existing and possible responses to it (including Fine’s), and show why I think that the argument of the paradox is problematic. I will then show how this particular case can generalize to the other paradoxes that have been presented. This discussion suggests how all forms of the Finean paradox can be resolved in a uniform way, without abandoning any logical assumptions, on the basis of a well-motivated and illuminating distinction between two different types of non-causal explanations.

Fred Kroon
Kripke's "Reference and Existence" 40 years On

Kripke gave the John Locke lectures at Oxford between October 30 and December 4 of 1973. 40 years on, the six lectures have been published by OUP as *Reference and Existence*. The lectures defended i) a pretense account of the language used by authors of fiction and by consumers of fiction, ii) the view that it is merely pretended that fictional sentences express propositions, and that a term like 'unicorn' is a merely pretended name of a species, iii) an ontology of "fictional and mythical characters conceived of as abstract objects whose existence depends on the existence or non-existence of various fictional or mythological works," iv) a distinction between two different ways of predicating properties of such objects, and v) a thesis about the analysis of negative existentials. The OUP edition of the lectures has a number of changes from the original typescript of the John Locke lectures, but all that Kripke says about the changes is that he "replaced passages that could use clarification, [and] compressed some that now seem too long (or difficult to comprehend)". This paper is about one such change.

Stephan Leuenberger
Total Logic

A typical first stab at explicating the thesis of physicalism is this: physicalism is true iff every fact about the world is entailed by the conjunction of physical facts. The same holds, mutatis mutandis, for other hypotheses about the fundamental nature of our world. But it has been recognized that this would leave such hypotheses without the fighting chance that they deserve: certain negative truths, like the truth (if it is one) that there are no angels, are not entailed by the physical facts, but nonetheless do not threaten physicalism. A plausible remedy that has been suggested by Jackson and Chalmers is that physicalism boils down to the thesis that every truth is entailed by the conjunction of the physical facts prefixed by a "that's it" or "totality" operator. To evaluate this suggestion, we need to know what that operator means, and - since the truth of physicalism hinges on what is entailed by a totality claim - what its logic is. That is, we need to understand the logic of totality, or total logic. In this paper, I add a totality operator to the language of propositional logic, and present a model theory for it, building on a suggestion by Chalmers and Jackson. I then investigate a number of different systems.

Andres Luco
Nanyang Technological University
Thrasymachus's Challenge: Morality or False Consciousness?

In Book I of the Republic, Thrasymachus argues that morality is nothing more than an ideology designed to benefit those in power at the expense of everyone else. Justice, he insists, is nothing but the interest of the "strong." In Republic I, Thrasymachus is quickly dispatched by Socrates. However, he was probably not the ablest spokesman for his ideas. For the challenge Thrasymachus put forward is not so easily dispelled. In the following, I argue that functionalist naturalism – a prominent variant of moral naturalism – is particularly vulnerable to Thrasymachus’s challenge. At least some social arrangements, including some moral norms, appear to be products of false consciousness. False consciousness occurs when a dominant elite shapes, deliberately or non-deliberately, the beliefs and desires of a subordinate group in such a way that the subordinates act for the benefit of the elite, but against their own interests (Lukes [1974] 2005: 27-28, 42-44). It remains controversial whether false consciousness of this sort operates very often, and even whether it exists at all (cf. Elster 1983; Scott 1990). I shall take the possibility seriously, however, and assume for the sake of argument that norms do not infrequently result from false consciousness. Taking this assumption for granted, I argue that even those norms sustained by false consciousness have the function of impartially promoting the interests of all persons affected by them.

Andres Luco
Nanyang Technological University
The Function of Morality: Norms, Publicity, and the Mutual Advantage

Norms are shared rules, generally followed within a society, which classify behaviors as permissible, impermissible, praiseworthy, and blameworthy. They can be observed as enduring patterns of social roles, relationships, and behavior. Some philosophers, such as Philip Kitcher, have suggested that there is no meaningful difference between norms and moral standards. Other philosophers, such as David Copp and Paul W. Taylor, suggest that moral standards are a special subset of norms, adherence to which involves a desire that the moral standard be adopted by everyone. In other words, moral standards have the feature of publicity. I shall argue in favor of the view that moral standards are a special subset of norms. However, I will suggest that the difference between moral standards and other norms cannot ultimately be explained in terms of publicity. Rather, the essential difference between moral standards and other norms is a difference in function. Non-moral norms merely have the function of homogenizing behavior among members of a social group, whatever that behavior may be. Moral standards, by contrast, have the function of motivating behaviors that promote human interests in a way that accords impartial consideration to the interests of each member of society.

Kate Lynch
A (Re)Revised Model of Phenotypic Variance

Quantitative geneticists traditionally attribute phenotypic variation (VP) to genetic (VG) or environmental (VE) sources. However, work in both the sciences and the philosophy of biology has highlighted conceptual limitations to this partitioning, namely gene-environment interactions (GxE) and gene-environment covariances (2CovGE). Because of this, additional variables are often added to quantitative genetic models (VP = VG + VE + GxE + 2CovGE).
New work on epigenetics has demonstrated causal influences additional to genes and environment. Epigenetics is a new research area in biology, whereby gene expression is regulated by inherited mechanisms such as the methylation of cytosine bases and histone modifications. This leads to the potential for a third variable contributing to phenotypic variance: epigenetic variance. Not only is this third source of variance (VEp) currently overlooked in quantitative genetic models, but the causal impact from the interactions and covariances between these three variables has been largely unexplored. I propose a re-revised model of partitioning phenotypic variance, accounting for all three variables and their causal interactions.
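Purely as an illustration (not part of the abstract), the extended partition might be sketched as follows; the function name, the variable names, and the placement of the hypothetical epigenetic term (VEp) are all assumptions, and interaction or covariance terms involving VEp would add further summands:

```python
# Illustrative sketch only. The classical partition is
# VP = VG + VE + GxE + 2CovGE; here a hypothetical epigenetic
# variance term (VEp) is added, as the abstract proposes.

def phenotypic_variance(VG, VE, VEp=0.0, GxE=0.0, cov_GE=0.0):
    """Return VP = VG + VE + VEp + GxE + 2*Cov(G,E)."""
    return VG + VE + VEp + GxE + 2 * cov_GE

# The traditional two-source model is recovered when VEp = 0.
```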

James Maclaurin
The Experienced Watchmaker

A common objection to Darwinian Cultural Evolution rests on an apparent difference between biology and culture – biological variation is the result of random change whereas cultural variation is the product of human foresight. However, as Alex Mesoudi has recently pointed out, while many are keen to assert that culture is not a “Blind Watchmaker”, it is not quite clear how that is to be interpreted as an argument against Darwinian Cultural Evolution. I review Mesoudi's solutions to the problem and propose an alternative solution based on the role that foresight plays in human cognition.

Anna Malavasi
Michigan State
Epistemic injustice in global development

The fact that there are millions of children, women and men who do not have access to clean running water, a safe and secure supply of food, electricity, education and the fulfillment of other human rights is a moral failing on the part of individuals and institutions. Global development comprises a system of ideas, policies, institutions and individuals which, at first glance, is concerned with ameliorating the living conditions of disadvantaged populations; in fact, this is not necessarily true. A fundamental problem in development is the suppression of knowledge, or what Boaventura de Sousa Santos calls “a form of epistemicide.” Another dimension of this foundational problem, often missed or blatantly ignored, is epistemological: it has arisen from the way the global north has imposed a capitalist and imperial order on the global south. I therefore argue that understanding epistemic injustice in development will help us to respond to severe global problems such as poverty and hunger in a different way. In this paper, I argue that epistemic justice is crucial for rethinking the theory and practice of global development. Drawing on my own experience as a development professional, I reflect upon the meaning of development in practice and discuss some of its limitations. Based on the work of Miranda Fricker and others, I offer an understanding of the ways in which development is epistemically unjust.

Andrea Marcelli
La Trobe
Topic-specific logic and the inductivists’ Trojan horse

This paper aims to outline a compatibilist view on the debate between inductivists and deductivists by assuming the strict definition of the concept of (logical) validity proposed by Casari.
According to this definition, arguments are valid in a formal system whenever their structure complies with logical forms determined by the rules of the system. However, this top-down view does not provide reasons for preferring one formal system over another. Inductivists are thus faced with the dilemma of either providing a good rationale for the rejection of validity or abiding by the criterion of validity which applies to deductive arguments. In the latter scenario, deductivists are faced with the trouble of fighting inductivists within the walls of their very citadel.
To try to resolve the debate, the case of topic-specific logic is examined. Topic-specific logic was introduced by Musgrave as an attempt to dismiss the use of inductive arguments in favour of deduction. Since topic-specific logic was constructed starting from the same assumptions held by deductivists, reasons for its dismissal appear to be pragmatic rather than formal: e.g. differently from other views, such as Bigelow’s, deductivists claim that enthymemic reconstruction should always be the preferred course of action.
Although it sympathizes with the deductivists, this paper tries to justify the convenience of using topic-specific logic (and the inductive reasoning it accounts for) for contextually justified purposes. For example, the case raised by Fox in regard to Newton’s deductions provides grounds for this conclusion.

Carolyn Mason
Practical Rationality and Reasons for Action

What is the relationship between reasons for action and practical rationality? It seems likely that there is such a relationship. For example, acting rationally might be taken to involve:
(1) acting as you have normative reason to act;
(2) acting as you believe you have reason to act; or
(3) acting as you are justified in believing you have reason to act.
However, various philosophers, including Michael Bratman, John Broome, Niko Kolodny and Ralph Wedgwood, have given arguments against these theories. I offer a competing analysis of rationality that explains rationality in terms of what I call ‘justifiable reasons’ and show that my analysis of rationality avoids objections to (1), (2), and (3). 

Jane McDonnell
Ineliminable Numbers in Physics

Hartry Field has claimed that mathematics is dispensable in physics: it is useful for evaluating the consequences of our theories but, with a little effort, the same conclusions could be derived from a purely nominalistic theory. He argues that mathematics cannot play an essential role in intrinsic explanations of physical phenomena because it is non-causal. This is one manifestation of the metaphysical problem of the mathematical versus physical divide: mathematical objects, if they exist at all, are considered to be independent of and outside spacetime, whilst physical objects operate causally in a physical realm. I argue that, contra nominalism, the fundamental concepts of physics are physico-mathematical concepts (i.e. their physical and mathematical content cannot be separated) and that mathematics has an indispensable role to play in the discovery of new physics. Furthermore, our best physical theories contain certain ineliminable numbers which play a crucial, albeit non-causal, role in explanations. I argue that physicists have developed their own methods in their search for intrinsic explanations and that this has led many of the best of them to a Pythagorean view of the structure of the world. The question then arises of how physical structure emerges from mathematical structure. Some approaches to this problem are discussed. It is concluded that the issue of nominalisation, per se, is largely unimportant. The important thing is that the concepts of physics and mathematics be brought together to bridge the mathematical/physical divide.

Patrick McGivern
Emergence and Unprogrammed Behavior

Various accounts of emergence and self-organization involve the idea that emergent behavior is in some sense ‘unprogrammed’. However, these accounts rarely say exactly what this means, or what it would take for a pattern of behavior to be ‘programmed’ in the first place. In this paper, I try to develop the connection between programming and emergence. Focusing on some standard examples of self-organized behavior, I examine various accounts of programming and see what concepts of emergence they suggest. I then compare these accounts of emergence with accounts that are not based on programming, to see which are best able to address some standard problems associated with emergence. In particular, I focus on the underlying problem of why we should care about emergence to begin with.

Jonathan McKeown-Green
Found in translation: teaching logic and doing metaphysics by sponging off relations

I wrote some exercises to remind rusty upper-level students how to translate from English into the language of first-order logic with identity. I assigned a predicate constant to the PARENT OF relation and invited the players to define other blood relationships in terms of it, so that they can perspicuously translate sentences like: "Amy is Beatrice's second cousin three times removed" and disambiguate "Celia and Dean have no grandchildren." Add two additional primitive relations and soon you have sisters-in-law. This task can motivate topics in logic and its philosophy. It neatly introduces recursive definitions, without which you cannot invite all your ancestors round for dinner. It inspires variants of first-order logic with identity (rather like naive set theory or classical mereology) that raise metaphysical questions when you try to specify proper axioms.
Can one be one's own parent? Need siblings share exactly two parents? The temptation to leave these metaphysical matters unresolved by the logic parallels the way that Kripke semantics leaves open questions about the laws governing modality by formulating alternative sets of constraints on accessibility.
Hence, my humble problem set might furnish useful illustrations for students who are confused about the relationship between the logician's and the metaphysician's appeals to possible worlds.
Setting aside pedagogical applications, puzzling questions about how, if at all, to distinguish the logical, the conceptual and the broadly metaphysical arise naturally when you play happy families. After talking about teaching, I will hijack this framework in order to argue (swiftly) against the Linguistic Turn, which is still prevalent in metaethics and ontology. And we haven't even touched on what constitutes marriage, which is left as an exercise.
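The recursive definitions the exercises turn on can be sketched outside the logic classroom too. The following toy fragment (all names and the relation are invented for illustration) defines ancestorhood as the transitive closure of a primitive parent-of relation:

```python
# Toy illustration of a recursive definition over a primitive
# PARENT OF relation; names and facts are invented.

PARENT_OF = {("Amy", "Beatrice"), ("Beatrice", "Celia")}

def is_parent(x, y):
    return (x, y) in PARENT_OF

def is_ancestor(x, y):
    # Base case: every parent of y is an ancestor of y.
    # Recursive case: a parent of an ancestor of y is an ancestor of y.
    return is_parent(x, y) or any(
        is_ancestor(z, y) for (p, z) in PARENT_OF if p == x
    )
```

Without the recursive clause, no finite definition would reach arbitrarily distant ancestors, which is the point of the dinner-invitation remark above.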

Richard Menary
Cognitive Integration and Cognitive Niche Construction

In his 2010 paper, Sterelny argues that the extended mind (EM) is a limit case of culturally scaffolded minds. EM cases, whilst not impossible, are likely to be rare and not a matter of biological adaptation. EM cases, such as Otto and his notebook, are too individually entrenched, whereas cases of niche construction in biology are spread out across populations. In this presentation, I argue that Sterelny's niche construction, or scaffolded, approach is compatible with a variant of EM: Cognitive Integration (CI). Unlike the version of EM that Sterelny criticises, CI focusses on systems of representation and their manipulation (cognitive practices), which can be shared by populations. Consequently, CI is much closer to cognitive niche construction than a version of EM based around entrenched individual tool use. I go on to argue that CI better explains the transformative function of permanent cognitive scaffolds than does Sterelny's non-constitutive account of cognitive niche construction.

Alex Miller
Wittgenstein and Quine on Conventionalism about Logic

In his "Truth By Convention" (1936) Quine developed a famous regress argument against the logical positivist idea that the conventions allegedly governing individual logical truths might flow from more general conventions that we have adopted. In this paper, I will argue that we can block Quine's regress argument by invoking the main point in §201 of Wittgenstein's Philosophical Investigations, that there must be a way of following a rule that is not based upon an interpretation. I will consider whether this gives the later Wittgenstein the resources to defend conventionalism as embraced by logical positivism, and of thereby avoiding the need to embrace the "full-blooded conventionalism" attributed to him by Dummett in his paper "Wittgenstein's Philosophy of Mathematics" (1959) (according to which a new convention is required for each individual logical truth).

Andrew Moore
Pleasure, Intentionality, Phenomenology

This paper examines the nature of pleasure. It: (a) locates the project in debates in philosophy of mind, philosophical ethics, and philosophical method, (b) makes the intentionalist conjecture that all pleasure has intentionality (‘aboutness’ or ‘directedness’), (c) identifies both modest and strong forms of this intentionalism, (d) defends strong intentionalism about pleasure and displeasure, (e) identifies implications in philosophy of mind, philosophical ethics, and philosophical method, and (f) begins to examine how this view can best account for pleasure’s phenomenology (its ‘feel’ or ‘felt character’).

Nick Munn
'Denounce, Ridicule & Harass': How to Engage with Unreasonable Positions

Some commentators (notably Stanley Fish) take it that the appropriate response to Holocaust Denialism is to "denounce, ridicule and harass" (DRH) those engaged in Holocaust Denial, rather than engaging with them as though they were intending to enter into a rational discussion of the position on its merits. In this paper, my goals are twofold. First, I analyse the features of Holocaust Denialism which, if present, are taken to warrant this drastic shift in appropriate response to the claims. Secondly, I examine other areas where these features are (at least arguably) present, and attempt to determine whether it would be appropriate to adopt the DRH model to claims other than Holocaust Denial. In particular, I focus on Climate Change Denial, which has already been analogised to Holocaust Denial, and on the denial of rights to members of the LGBT community.

Tim Oakley
La Trobe University
There are no good arguments against duties to oneself

There are two main sources of opposition to the idea that people have duties to themselves. The first is the argument that a duty to oneself would be a duty from which one could release oneself, which is an absurdity. The second is the argument from individual liberty, or sovereignty over self, which takes us to have moral freedom to direct what we do and what we allow to happen to us (to do as we will with ourselves), constrained at most by duties to others. In this paper I argue that these considerations fail to dispose of duties to oneself, and that in addition we find ourselves in possession of a persuasive new positive argument in favour of such duties. Further, we find ourselves able to explain away what otherwise seems a damaging objection to two of the standard arguments in favour of duties to oneself. (The arguments in the paper are almost entirely independent of any specifically Kantian contentions in ethical theory.)

Graham Oddie
Colorado, Boulder
The statue and the clay: beyond monism and dualism

A sculptor makes a statue of David out of a lump of potter’s clay. He finishes the statue at noon, but after lunch he destroys it by squashing the clay. Throughout lunch the statue and the lump occupy the same space, have the same mass and contain the same material parts. It seems obvious that the lump and the statue are one and the same thing. This is monism. But there are features that the lump and the statue don’t share. Before noon the statue has the feature of being incomplete, whereas the lump of clay does not. During lunch, while the clay is still moist, the statue is vulnerable: it could disappear by collapsing back into a shapeless lump. The lump is not vulnerable. Given Leibniz’s Principle, it follows that the lump and the statue are distinct things. This is dualism. Monists can explain the extraordinary number of properties which the lump and the statue share, but have a hard job explaining their differences. Dualists can accommodate the differences, but have a hard job explaining how distinct things can share that range of properties. Monism and dualism share a presupposition, one which is almost universally endorsed but which is, I argue, false. Once free of this presupposition we can entertain a third theory which captures the best features of both monism and dualism, and avoids all their shortcomings.

Tristram Oliver-Skuse
Prinz and the Huck Finn test

Philosophers often claim that emotions are representational partly in order to help explain cases where it looks like emotions play justificatory roles. Cognitivists have an easy time making sense of this, since they claim emotions are, or constitutively involve, judgments, and so are representational and can have justificatory roles; but they struggle to account for recalcitrant emotions which conflict with explicit judgment. Feelings theorists have no problem accounting for the conflict, but have a harder time accounting for the justificatory roles. Perceptual views are meant to be the best of both worlds. Prinz offers the most direct account of how to make a perceptualist account of emotional representation work. But I will show that his position fails to make room for emotions' justificatory roles, and so is more like a traditional feelings theory than it looks. This falls out of his Dretske-inspired account of representation, which ties representation too closely to detection and does not easily extend to account for the attributive nature of representation. As a result, on this picture, fear represents danger rather than representing the thing I am afraid of as dangerous, and so does not give us the right kinds of representations to figure in justifications.

Maureen O’Malley
Major problems in evolutionary transitions: a metabolism-based alternative

The model of major evolutionary transitions devised by John Maynard Smith and Eörs Szathmáry has exerted a lot of influence over philosophers of biology. Although it is well recognized that the events in this model are disunified, and combine different types of events under an inconsistent account of ‘information’, the model’s depiction of some major causal forces producing an increase in hierarchical complexity continues to have considerable philosophical appeal. I will add some different considerations to these criticisms – particularly to do with the prokaryote-eukaryote transition, the acquisition of plastids, and the biological oxygenation of the planet – and suggest that the model cannot be preserved. Instead, a metabolism-based model of transitions is the only viable alternative. It provides a necessary basis for any compilation of more idiosyncratic events. Such a model, I suggest, will help overcome an entrenched but unwarranted preference in philosophy of biology for genetics over biochemistry, and morphology over metabolism.

Graham Oppy
Logical Arguments from Evil and Free Will Defences

It is widely believed that Plantinga's free will defence has demolished logical arguments from evil. This widely held belief is mistaken. There are extant logical arguments from evil that are untouched by free will defences. And there are doubtless logical arguments from evil that have not yet been considered, concerning which no one is yet in any position to pass judgment.

Ross Pain
La Trobe
Wilfrid Sellars on Perception

It has been notoriously difficult to combine the core insights that motivate naïve realism and those that underpin representationalism into a coherent theory of perception. Naïve realists, for example, argue that representational states cannot adequately account for the phenomenological directness of perceptual experience, while representationalists maintain that indirect theories best explain perceptual anomalies such as illusions and hallucinations. These disagreements highlight the fact that the core commitments of direct and indirect theories of perception are fundamentally at odds. Critical realism is a version of the causal theory of perception most commonly associated with Wilfrid Sellars, and more recently expounded by Paul Coates (2009). I argue that although critical realism necessarily rejects any form of direct perception, it nonetheless has the conceptual resources to account for many of the concerns that motivate naïve realists. I look at the Sellarsian background to the theory, and show how it is able to provide a “synoptic” solution to the problem of perception.

Francesco Paradiso
Différance and Syncopation

In this paper I shall explore the analogies between the jazz rhythmic figure of syncopation and Derrida's concept of différance. It will be argued that there is a correspondence between the temporal and spatial properties of syncopation and the temporal and spatial attributes of différance. In the movement of signification that différance generates, the deferring or temporization, and the differing or spacing that take place, rely on the displacement of the present/presence of the sign. From a temporal perspective, the present, in order to produce signification, undergoes a division, becoming a non-simple synthesis of deferred traces of retention (delay) and protention (anticipation), whereas from a spatial perspective, the differentiation between the presence of the signifier and the absence of the signified produces asymmetry. Similarly, in syncopation the shift of the rhythmic accent from the strong to the weak beat entails significant temporal and spatial consequences. From a temporal perspective, the regular cadence of the rhythmic flow becomes displaced by delay and anticipation, whereas from a spatial perspective, the interval that separates the beats loses its symmetric character and becomes asymmetric. Finally, my analysis will look briefly at the particular case of a jazz musician, Avishai Cohen, whose constant use of complex syncopated rhythmic structures generates a unique creative context.

Margaux Portron
Paris 8 University
Reading Michel Foucault in 2013

This paper looks at Michel Foucault’s Discipline and Punish and Society Must Be Defended, and particularly his concepts of biopolitics and biopower. The thesis we will defend is that his theories are relevant now more than ever.
Biopolitics corresponds to the state apparatus put in place to exert biopower: "a new technology of power... [that] exists at a different level, on a different scale, and [that] has a different bearing area, and makes use of very different instruments." It implies collecting data, such as statistics, and producing discourses like war treaties or medical measures.
We will first present Michel Foucault’s work, his background as an historian and how it led him to understand the exercise of power. Indeed he developed the idea, drawing from the history of medicine, that western states have gone from a security-based mode of governance to a disciplinary one.
We will then go on to demonstrate how his ideas find many fields of application: while practices of surveillance come to mind first, we will go through all the features of biopolitics: health regulation, urban politics, but also security studies.

Sahanika Ratnayake
Parfit's Metaphysics of Personal Identity and the Non-Identity Problem

The non-identity problem, first identified by Derek Parfit in part four of Reasons and Persons, states that it is not possible to have a legitimate grievance about, or claim to have been wronged by, an earlier act if that act also brought about our existence. Whilst the non-identity problem is largely discussed in the context of future generations, I believe it also crops up within Parfit's reductionist account of diachronic personal identity. As such, any attempt to take seriously the normative theory associated with Parfit's metaphysics needs to address the non-identity problem first.
I will briefly consider the possibility of resolving the problem with the same strategies used in the future generations case before identifying a class of actions for which some of these solutions are inadequate. I hope to develop an account of these acts (named rather ambitiously "acts of authorship") which explores both the justification for performing these acts and possible means of evaluating them, within a purely Parfitian metaphysical framework.

Stephanie Rennick
Macquarie and Glasgow
A time-traveller, a fortune-teller, and the banana-peel that foiled them

Backwards time travel, at least at first glance, seems to allow for the possibility of bilking attempts – attempts to change the past and thereby engender a contradiction (as in the Grandfather Paradox) – and thus defenders of the possibility of time travel must account for why and how such attempts fail. Analogously, foreknowledge seems to allow for future-directed bilking attempts: that is, attempts to avoid a future that is known and thus in some sense fixed. This paper outlines the answers offered by Lewis and others as to why and how past-directed bilking attempts fail, and, more importantly, argues that the same explanation can be offered in relation to the future: the future is just as immutable as the past, and the very same banana peels that trip up the would-be grandfather killer can foil the future-bilking foreknower. In either case, a slip, or a series of slips, will suffice.

Adriane Rini and Max Cresswell
Massey and Victoria
Tractarian Truthmaking

In recent years the view that truths need 'truthmakers' has become popular. Although not usually described in this way, it is not hard to see Wittgenstein's Tractatus in these terms. The aim of the paper is to examine the Tractatus with a view to what it takes for a metaphysics to produce a theory of truthmaking. This is important to Wittgenstein's account of the nature of necessity, and to its influence on Carnap and the logical positivists.

Hamish Russell
Does Nussbaum's Capabilities Approach Avoid the Repugnant Conclusion?

Derek Parfit’s Repugnant Conclusion states that, compared to any population of people with high qualities of life, it would be better for there to be a much larger population of people with lives that are barely worth living. This conclusion is implausible, but it proves difficult to avoid. A promising alternative is the Lexical View, which claims that, compared to a sufficient number of lives above some reasonably high level of quality, no number of lives below that level could be better. However, the Lexical View is open to compelling objections. In this paper I argue that we can answer these objections by endorsing features of Martha Nussbaum’s capabilities approach. More specifically, I propose that we cash out the Lexical View in terms of Nussbaum’s list of central capabilities, which she presents as the requirements for a life worthy of human dignity.

Matheson Russell

This paper identifies a philosophical paradigm or syndrome that I dub "second-personalism". Second-personalism valorizes the dialogical relationship between I and You (first and second person) and opposes this to the objectifying relationship between I and It (first and third person). The second-person standpoint is regarded as the core of moral experience, whereas the third-person standpoint is seen as excluded from this moral experience and thus cut off from the resources required to engage in moral reason and action. Second-personalism is embraced by Gadamer, Strawson, Habermas, Darwall and others, but it has its origins in Humboldt and arguably finds its purest expression in Buber. In this paper, as well as tracing the contours and origins of the second-personalist paradigm, I argue that second-personalism, although insightful and highly compelling in many respects, is hamstrung by its characteristic conflation of (i) the distinction between participant and observer standpoints within the speech situation (second- and third-person standpoints) and (ii) the distinction between the communicative and strategic attitudes taken towards persons and things respectively. In the second half of the paper I sketch the outlines of an expanded communicative paradigm that goes beyond the dialogical relation to include a consideration of the third person as a feature of the speech situation. Some examples are given to illustrate why attending to the role of the third person in the speech situation will be indispensable if we are to make sense of certain basic kinds of social—especially legal and political—situations and institutions.

Vanessa Scholes
Open Polytechnic/Kuratini Tuwhera
Moral judgement in context: implications for experimental philosophy

Experimental philosophers use empirical methods to study lay people's judgements on issues of philosophical interest, including moral issues. Psychologists’ descriptions of the nature of moral judgement suggest our judgements are subject to unconscious contextual influences. Of particular importance are influences from interaction in the social contexts where we typically make such judgements. Responding to scenarios as a participant in an experiment is not obviously such a context. Is the experimental context sufficiently similar to any ‘real life’ context for us to suppose that it assesses something useful about any of our ‘real life’ moral judgements? This paper identifies three broad contexts in which we might typically make a moral judgement in real life, and compares these with typical experimental contexts. I argue that experimental philosophy is likely to capture the expression of some ‘real life’ moral judgements – but unfortunately, only relatively trivial moral judgements.

Jeremy Seligman
Varieties of Group Knowledge

While the concepts of common knowledge (we know it and we all know that we all know it…) and distributed knowledge (were we to talk to each other, we would know it) are well known to epistemologists and epistemic logicians, both ignore the role played by social relationships within our community. I will consider the effect of such relationships in structuring both the content and mode of access that we have to group knowledge, and show that consideration of this structure reveals many distinctions in the way in which knowledge can be shared, and in how we reason about this.

Talia Sellars
Time Travel, Causation, and the Direction of Time

My paper will focus on the relationship between Time Travel, Causation, and the Direction of Time, and will be presented in two separate, but complementary, parts. In part one I will show how there can be backwards time travel without backwards causation, if we adopt a causal theory of the direction of time. In part two I will show why I think that discussions about *the direction of time* play a crucial role in time travel stories, and hence that it is the different views on the direction of time that result in the A-theorists/tensers and the B-theorists/non-tensers disagreeing about what time travel involves and on which views it is possible.

Paul Silva
Revisiting the Relationship Between Propositional and Doxastic Justification

If one has justification to believe P, whether or not one believes it, we say that one has "propositional justification to believe P". If one has a justified belief in P, we say that one has "doxastically justified belief in P." It is commonly thought that propositional justification is prior to doxastic justification, for it seems natural to explain the fact that one has a doxastically justified belief in P in terms of the fact that one has propositional justification to believe P and that one has based one's belief in P on whatever one's source of propositional justification for P happens to be. This is the traditional understanding of the relationship between propositional and doxastic justification.
John Turri (2010, Philosophy and Phenomenological Research) has argued that the traditional understanding of the relationship between propositional and doxastic justification falls to counterexamples, and he offers a new way of understanding what that relationship is. Turri's proposal threatens not only the traditional understanding of that relationship, but several other widely held epistemological theses. 
I defend the tradition, arguing that Turri's new way of understanding the relationship between propositional and doxastic justification not only makes the idea of propositional justification puzzling but also falls to counterexamples of its own. I then argue that there is a way of diagnosing the problem cases Turri raises that does not require giving up on the traditional understanding of the relationship between propositional and doxastic justification.

Bob Simpson
Intrinsic Value and Intrinsic Valuation

The following two judgments – (i) that x is valuable, and (ii) that it’s right to value x – are biconditional counterparts. In other words, if it’s right to value x, this entails that x is valuable; and if x is valuable, this entails that it’s right to value x. In this paper I will argue that the biconditionality doesn’t carry over to intrinsic value. I will identify a family of cases in which it is right, fitting, and proper to intrinsically value some thing, x, but in which this doesn’t strictly entail that x is intrinsically valuable. And I will argue that in some such cases it’s possible for us to recognise these things about x’s value, and still to value x intrinsically, and to get things right in so doing. I will call these cases ‘constructive intrinsic valuations’. The difference between constructive intrinsic valuations and the two forms of intrinsic valuation that are widely recognised in the extant literature on intrinsic value can be roughly summarised as follows. In a (fitting) Moorean intrinsic valuation it is unintelligible to fail to value x intrinsically. In a (fitting) Kantian intrinsic valuation, failing to value x intrinsically is intelligible, but there is an ethical prohibition against it. In a constructive intrinsic valuation, by contrast, it is both intelligible and permissible to fail to value x intrinsically, but such failure prevents x’s value from redounding to the valuer. Finally, I’ll argue that this abstruse metaethical point matters greatly in practical judgments concerning putative intrinsic value.

Nicholas Smith
Right-Makers and the Targets of Virtue

The still dominant account of right action in contemporary virtue ethics is that an action is right just in case a virtuous agent would characteristically perform that action. Yet it is dubious that the dominant account captures the right-makers of action – the features of an action in virtue of which it is right. Christine Swanton’s “target-centered” account (TC) is the virtue-ethical account of right action which, to date, shows the most promise in accounting for the right-makers of action.
I argue that there are both attractive and unattractive features of TC. After presenting an overview of TC, I identify the (attractive) feature of TC which makes it apt for accounting for the right-making features of actions. This feature is the claim that actions are (what I call) virtuous in their own right. If actions are virtuous in their own right, then the virtuousness of an action should not be analyzed in terms of that action’s bearing some relation (whatever it is) to virtuous agents or motivations. But I argue that TC’s specific interpretation of an action’s virtuousness sometimes implies reasons of the wrong kind for an action’s failing in regard to some virtue. Further, I argue that this feature of TC makes it fail to capture all right-making features of actions. After diagnosing the source of this failure, I conclude by pointing out what positive direction can be gathered for constructing a better alternative to TC.

Paul Smith
Libertarianism and Obligations to Future Generations

I will examine whether rights-based libertarian theories of justice can provide an adequate account of our moral obligations to future generations. The primary concern thus far has been whether libertarians can provide a coherent description of what these obligations might look like, but any robust account must also include an explanation of how they could be justly enforced. I will argue that, even if the former can be satisfied in some manner, libertarians cannot provide for the latter. Individuals have, according to Robert Nozick, the natural right to protect their own rights from infringement, and to extract compensation from those who cross that boundary. From this starting point, Nozick provides a complicated argument to justify the widespread suppression of this right and the creation of the minimal state. Even if this argument holds for present persons, I argue that it cannot hold for future generations. If this is the case, it would then be impermissible either for the state or a private individual to enforce the rights of future people against those in the present who infringe them. If this argument is successful, then it poses serious problems for any libertarian theory of justice.

Tiddy Smith
Mending Popper’s Corroboration

Popper defines a theory’s degree of corroboration as the degree to which that theory has withstood severe tests. He argues that we ought to prefer the theory with the highest degree of corroboration as the best candidate for the truth. Popper develops a corroboration formula that stands as a “mock rival” to confirmation theory. However, arguments from Lakatos and, more recently, Darrell Rowbottom show that Popper’s original corroboration formula is ultimately inapplicable. In this talk, I will present a new corroboration formula that attempts to avoid the problems identified by Lakatos and Rowbottom.

Trevor Smith
Schmitt Fellow, Marquette
Grounding Strong Obligations in Aristotle's Ethics

At first blush, the language of obligation used in conjunction with the Aristotelian moral framework might appear to be an unnatural mixing of moral theories. There are many who see virtue ethics as utterly unable to provide a sensible account of obligation and who use this perceived inability as the grounds of critique. Others, working within the virtue-ethical tradition, believe that ethics should disengage from the language of obligation and, following Anscombe, call for the abandonment of the 'juridical ought' as a meaningful moral fencepost. This paper argues that Aristotelian virtue ethics does, in fact, advance a conception of strong obligations, and does so without falling prey to the perils of either the 'juridical ought' or the abandonment of flourishing as a central moral value. Beginning with an exploration of Aristotle's arguments about categorically prohibited actions, as found in Books II and V of the Nicomachean Ethics, this paper moves to ground a conception of strong moral obligations in Aristotelian virtue ethics. Using Rosalind Hursthouse's schema of v-rules, it will be argued that the categorical prohibition of specific actions as found in virtue ethics works to establish a strong conception of obligation which is rooted in the ability of agents to achieve flourishing.

Chung-Chen Soong
Understanding Kendall Walton in the Dream

This paper interprets Kendall Walton’s claim that it is in imagination that we have psychological attitudes towards fictional characters. In part one, I introduce realism and irrealism regarding the ontological status of fictional characters. According to the realists, the ontological commitment to fictitious entities is the key to understanding fiction. In contrast, Walton, the irrealist, argues that the activity of imagining is more important, and the ontological commitment to fictitious entities is unnecessary. In part two, I explain Walton’s view of imagination. To do so, I approach it from the cases of dreaming. Once we understand the nature of dreaming, we can go on to understand the nature of imagining, and realize why Walton insists that there is no need to postulate any “fictional realm.” In part three, I explain how Walton describes the apparent relationship between fictional characters and us. Again, I approach this from the cases of dreaming. In the cases of dreaming, it is natural that when I wake up, I would say things like “It is in the dream that I am afraid of the monster.” And we have no trouble understanding the term “in the dream.” With this understanding in mind, we can proceed to the cases of imagining, and finally understand Walton’s claim that it is in imagination that we have psychological attitudes towards Superman, Anna Karenina, and so on.

Kim Sterelny and Benjamin Fraser
Evolution and Moral Realism

Humans are moralising and moralised apes. This difference between us and our relatives has received much attention in the evolutionary literature. Evolutionary accounts of morality have often been recruited in support of error theory. In this paper, we have three main aims: (i) to locate evolutionary error theory within the broader framework of the relationship between folk conceptions (or folk theories) of a domain and our best scientific conception of that same domain; (ii) to argue, within that broader framework, that error theory vs. vindication (or reduction vs. elimination) are two ends of a (probably complex) continuum, and that in the light of our best science many folk conceptual structures are neither hopelessly wrong nor vindicated; (iii) to show that, while there is no full vindication, or seamless naturalistic reduction, of normative facts to obviously mundane natural facts, one important strand in the evolutionary history of moral thinking supports reductive naturalism. Moral facts are facts about cooperation, and the conditions and practices that support or undermine it. In making this case, we first respond to the error theoretic argument that moral facts are explanatorily redundant, then make a case that true moral beliefs are a "fuel for success"; they are a map by which we steer, flexibly, in a variety of social interactions. The vindication, though, is at most partial. Moral cognition is a complex mosaic, with a complex genealogy, and only one thread in that genealogy is selection for truth-tracking.

Christine Swanton
Pluralistic Virtue Ethics

To date virtue ethics has been “monistic” in the following sense. Most contemporary versions have been developments of predominantly the Aristotelian conception of living well, notably Hursthouse (1999). More generally, there has not been a move to include several traditions in a (relatively) comprehensive pluralistic version of virtue ethics.
As in traditional virtue ethics, the fundamental ethical concept is deemed to be living well, and at the core of living well is the possession and exercise of virtue. To live well, however, is a relatively thin concept which can be thickened in several different and highly contested ways, resulting in many types of virtue ethics.
A pluralistic virtue ethics responds in two ways to the multiplicity of conceptions of living well. First, it recognizes that these conceptions of living well cannot be unified. Second, it holds that a plurality of conceptions of living well have strengths, and that these should take their place in an adequate comprehensive form of virtue ethics. In the view developed, both the grounds of virtue and the forms of practical rationality associated with these grounds are plural.

Koji Tanaka
Empirical Arguments for Paraconsistency

The late Robert Meyer and Graham Priest have (independently) presented the following line of argument for the invalidity of the classical inference ex contradictione quodlibet (the explosion principle): according to classical logic anything follows from a contradiction, but it would be ludicrous to reason, and we just wouldn't reason, to an arbitrary claim from a contradiction; therefore, the classical inference is wrong. Their arguments rely on an empirical methodology for determining the validity or invalidity of logical principles. In this paper, I will examine their empirical arguments and defend their empirical methodology. I will demonstrate that the validity of logical principles can be thought to be an empirical matter.

Rebecca Tock
Examining the Role of Autonomy in the Medical Encounter

Being sick is common to the human experience; whether it is a case of the sniffles, a broken bone, or a more serious chronic disease, we can all sympathise with the feeling of being ill. It hinders our normal activities and can make us virtually useless as rational beings. Going to the doctor usually involves some sort of tacit, if not formal, consent to an intervention to get us fully functioning again. What this implies, however, is a very particular model of consent and autonomous decision making that prioritises a choice made by the individual (delirious from fever though they may be) without external influence.
In this paper I will argue that the traditional, procedural model of autonomy that is used in medical ethics misses some important features of the experience of being ill and of one’s ability to be autonomous. This model assumes that healthcare is centred on discrete, rational decisions that do not require context to be given normative value. I suggest that we cannot fully achieve autonomy via existing informed consent procedures if we assume the procedural account. Instead, it may be better to understand the medical encounter as involving a weak substantive or relational account of autonomy to fully capture the way we behave when we go to the doctor.

Ryan Tonkens
Virtue and reasons for abandoning embryos

One issue that has emerged in the wake of the widespread cryopreservation of surplus embryos in the IVF context is that sometimes those embryos are never reclaimed, but are ‘abandoned’ by the patient(s) instead. While the abandonment of surplus embryos has been happening since the development of sophisticated freezing techniques, this issue is only recently beginning to be noticed as a problem for clinicians, usually with respect to what ought to be done with those embryos. 
The central focus of this paper, however, is on the ethics of the act of embryo abandonment. Here I systematically examine the reasons one may have for abandoning one’s embryos, and evaluate these reasons from a virtue-based perspective. One way to develop this account is to ask whether these reasons for abandonment could be in line with the moral and intellectual virtues central to the procreative and parenting enterprises (i.e. founding a family and parenting well), and the extent to which they demonstrate procreative and parental vices (e.g. callousness, inappropriate selfishness, cowardice, lack of integrity, weak-mindedness). I argue that there is good reason to believe that the act of embryo abandonment represents a less-than-virtuous character on the part of the abandoner (towards the embryo, the clinic), qua procreator and prospective parent. While more philosophical and empirical work needs to be done on this new and pressing ethical issue, I close with a brief discussion of the implications for public policy in case embryo abandonment is deemed to be immoral.

Liezl van Zyl and Ruth Walker
Contract motherhood as profession

The debate about the moral and legal problems surrounding contract motherhood has been constrained by the widespread and uncritical assumption that the only alternatives are the altruistic model and the commercial model. In a recent paper ("Beyond altruistic and commercial contract motherhood: The professional model", Bioethics, September 2013) we reject both these models and develop a third: the professional model. The main features of this model are that the contract mother, while motivated by altruism, receives compensation for her labour, and that the intending parents are recognised as the legal parents from the outset. Our aim in the present paper is to develop this model in more detail, and to demonstrate its advantages when applied to recent case studies.

Sean Welsh
Robotic Deontic Logic: Ethical Governance of Autonomous Robots

As robots become increasingly autonomous, increasingly powerful and increasingly dangerous, calls have been made for a new interdisciplinary field of robot ethics (a.k.a. machine ethics, roboethics, computational morality, artificial morality) to be developed.
This paper starts by defining key terms. What is robot autonomy? Does a robot need it to be moral and to act morally? Questions of robotic 'being' and 'personhood' are dealt with briskly to permit focus on a viable near-term implementation strategy for robot ethics. A custom dialect of robotic deontic logic that can run on Prover9 (a freely downloadable mainstream theorem prover) is proposed. Differences between the robotic dialect proposed here, standard deontic logic and other deontic logics are explained. The key differences are practical. Robot sensors must be capable of establishing the propositions used in axioms, theorems and proofs. Robot actuators must be capable of performing obligatory actions. The logic must integrate with the broader cognitive architecture of the robot, and known paradoxes of deontic logic must be avoided.
The paper continues with a brief exposition of the Ethical Governor as outlined in Arkin (2009) for the use case of a lethal military drone configured to bomb tanks. It then provides a logical formalization of Arkin's system that can run on Prover9 (at least for the more basic theorems and proofs).
Complications of Arkin's basic scenario are introduced and the logic modified to accommodate them. The resulting logic is then applied to a less controversial civilian use case, a robot bartender.
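To give a flavour of the approach, here is a minimal sketch of the kind of Prover9 input file envisaged, with deontic status modelled as predicates over actions. The predicate and constant names (military_objective, civilians_near, engage, tank1) are illustrative assumptions, not taken from Arkin's formalization or from the paper itself:

```
formulas(assumptions).
  % A target may be engaged only if it is a military objective
  % and the sensors report no civilians nearby.
  all x (military_objective(x) & -civilians_near(x) -> permissible(engage(x))).
  all x (civilians_near(x) -> forbidden(engage(x))).
  % No action is both permissible and forbidden.
  all a -(permissible(a) & forbidden(a)).
  % Facts established by the robot's sensors.
  military_objective(tank1).
  -civilians_near(tank1).
end_of_list.

formulas(goals).
  permissible(engage(tank1)).
end_of_list.
```

On this sketch, Prover9 proves the goal from the sensor-established facts; changing the second fact to civilians_near(tank1) instead makes engage(tank1) forbidden, which illustrates the practical constraint the abstract stresses: the axioms are only useful if the robot's sensors can actually establish the atomic propositions they mention.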

John Williams
Singapore Management University
Eliminativism, Dialetheism and Moore's Paradox

John Turri gives an example that he thinks refutes 'G.E. Moore's view' that omissive assertions such as 'It is raining but I do not believe that it is raining' are 'inherently "absurd"': that of Ellie, an eliminativist who makes such assertions. Turri thinks that these are perfectly reasonable and not even absurd. Nor does Ellie seem irrational, even if the sincerity of her assertion requires her to believe its content. A commissive counterpart of Ellie is Di, a dialetheist who asserts or believes that 'The Russell set includes itself but I believe that it is not the case that the Russell set includes itself'. Since any adequate explanation of Moore's paradox must handle commissive assertions and beliefs as well as omissive ones, it must deal with Di as well as with Ellie. I give such an explanation. I argue that neither Ellie's assertion nor her belief is irrational, yet both are absurd. Likewise, neither Di's assertion nor her belief is irrational, yet in contrast neither is absurd. I conclude that not all Moore-paradoxical assertions or beliefs are irrational, and that the syntax of Moore's examples is neither necessary nor sufficient for Moore-paradoxicality.

Daniel Wilson
Artefacts and Functions

In this paper I examine artefacts and artefact functions. I examine artefacts in a broad sense that includes chairs and tables as well as oral histories, emails and bitcoins. This paper is divided roughly into two parts. In the first part I provide an analysis of artefacts. I argue that an adequate account of artefacts must include the ineliminable role of collective recognition in the success conditions for artefact status. The concept of artefacts is also inextricably tied to the notion of function. In the second part I analyse the concept of functions. Some complex artefacts are constructed from multiple elements. This leads to consideration of the conditions under which parts of artefacts are themselves artefacts. Finally, I consider whether parts of artefacts also have functions.

Andrew Withy
If philosophers understood conditionals, they wouldn't do (some) metaphysics

Conditionals do not fall into two neat categories: indicative and counterfactual. Most of them are not even truth-conditional. Much modern metaphysical reasoning uses conditionals in a way that flies in the face of these linguistic facts. I will provide a sketchy classification of natural language conditionals based on functional, typological, and cognitive linguistics. Then I will review a few areas of metaphysics where conditionals are often used, such as natural laws, and dispositions, to see what remains of the philosophical claims. This will help to illuminate what conditionals convey when they are understood as linguistic entities rather than short-hand for formal models; that is, when they are spoken in natural language.

Julian Young
Wake Forest
Richard Wagner on the original Gesamtkunstwerk

Together with his friend, Mikhail Bakunin, Richard Wagner took an active part in the 1848 Revolution as a self-declared anarchist-communist. During this period, he wrote a number of theoretical works outlining the place of art, and above all theatre, in the post-revolutionary society. The central artwork, he argued, would need to be a revival of the original Gesamtkunstwerk, the Greek tragic festival. This paper looks in detail at Wagner's conception of the 'collective artwork', an artwork which 'collects' not just the arts but also the community. It was on this conception that the earlier part of the Ring cycle - in particular Das Rheingold - was based.