A new hypothesis about the basic features characterising the Foundations of Mathematics (FoM) is put forward and used to characterise the several proposals for establishing the FoM that were launched around the year 1900. It is well known that the historical evolution of these proposals was marked by notorious failures and conflicts. After Frege's and Cantor's programs failed owing to the discovery, respectively, of an antinomy and of internal contradictions, the more radical proposals, Hilbert's and Brouwer's, generated a great debate; this debate is explained here as the result of a mutual incommensurability, defined by means of the differences in their foundational features. Ignorance of this phenomenon accounts for the inconclusiveness of a century of debate between the advocates of these two proposals, which have nevertheless been improved to the point of unwittingly approaching, or even recognizing, some basic features of the FoM. Yet no proposal recognized the alternative to Hilbert's main feature, the deductive (axiomatic) organization of a theory, even though, half a century before the birth of all these programs, this alternative had already been substantially instantiated by Lobachevsky's theory of parallel lines. Some concluding considerations of a historical and philosophical nature are offered; in particular, the definitive emergence of pluralism in the FoM is stressed.
The first part of this paper analyses the core tenets of the "new mechanistic philosophy" in view of the growth, over the last twenty years, of scientific literature on the mechanistic paradigm, especially within molecular biology and the neurosciences. This analysis yields an image of living organisms as systems which are mechanical, complex and self-founding. The second part of this paper deals with a core tenet of the mechanistic paradigm, namely that the explanation of a biological phenomenon coincides with the explanation of the mechanisms which constitute the phenomenon itself; this idea is criticised by situating the epistemological status of biology within Michael Polanyi's theory of personal knowledge. The comparison between Polanyi's thought and the mechanistic paradigm exposes the weaknesses of both: on the one hand, Polanyi's theory of knowledge highlights aspects of the biologist's work that are overlooked by mechanistic conceptions; on the other hand, the idea that biology includes a kind of knowledge which is tacit or untranslatable entails a series of problems for scientific research. In order to avoid these problems, the concept of mechanism must be re-examined, especially from a methodological point of view.
In the past few years, behavioural, neuroimaging and neurophysiological studies have suggested that Embodied Simulation is a constitutive feature of language understanding. However, this claim is still controversial, as is the definition of Embodied Simulation. In this paper, I aim to provide a more suitable definition of Embodied Simulation and then apply it to the study of bodily metaphors. Embodied Simulation attunes us to our social world and provides us with both a brain-based and a bodily disposition, which is the starting point of many cognitive processes. Exploitation of the mechanism of simulation is particularly evident in the linguistic phenomenon of metaphor. Bodily metaphors are often so successful because they exploit this mechanism of brain and bodily attunement, enacted by means of Embodied Simulation. The role of Embodied Simulation and its importance for metaphor comprehension can be summed up in two points: (1) Embodied Simulation allows speakers to share a bodily attitude during communicative exchanges; (2) by means of Embodied Simulation, speakers directly experience the source domain during metaphorical mapping.
Taking its cue from two articles by Bottaccioli and Villani, this paper discusses the reductionist and the systemic models and seeks to show that, once the concept of relation is taken into account, they should be integrated rather than opposed. The relation, which the systemic model emphasizes, obtains only by resting on the terms, which the reductionist model emphasizes; these terms, however, in order to have a determinate identity, cannot but relate to one another, since to determine is to differentiate. Understood in this way, that is, as a mono-dyadic construct, the relation nevertheless proves to be a circle, because it is constituted by presupposing the very terms that presuppose it. The paper concludes by affirming the need to understand the relation not as a status but as the act of self-referring of every determinate being, so that the integration of the two models must likewise be understood not as a relation that preserves the terms, but as their reciprocal overcoming in a unity of sense that encompasses and resignifies them.
A truthmaker solution to the Gettier problems is based on the idea that knowledge can be defined as justified true belief provided that the source of one's justification is suitably connected with what makes the believed proposition true. Different developments of this basic intuition have recently been criticized on the basis of a series of arguments aiming to show that no truthmaker theory can allow us to solve the Gettier problems, since the very idea underlying such a solution is ineffective. In this paper, I discuss the criticism of the truthmaker solution I consider most promising and show how it can be successfully addressed.
We propose an "ontic" interpretation of the modal schema KD45, where "ontic" stands for a causal (physical and metaphysical) interpretation of the accessibility relation between worlds. KD45, with its properties of transitivity and of secondary reflexivity and symmetry, effectively expresses and translates the "foundational" structure of Thomas Aquinas's metaphysics of participation in its dynamic aspect, namely causality. We also define strict causal counter-implication in order to formalize the real effect-cause relation, and through it we propose: (1) an axiom of foundation which (a) ties being an entity to receiving being from a "giver of being", (b) "constructs" the universe of entities, and (c) provides a membership condition for that universe different from the usual self-identity; (2) an axiom of secondary causality, which formalizes the condition of possibility for distinct entities to exercise causal action on one another; (3) an axiom of genus, which formalizes the causal condition of genus membership.
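For reference, the KD45 schema invoked in the abstract above is, in the standard presentation, the normal modal logic obtained by adding the axioms D, 4 and 5 to K; the frame conditions listed on the right are the usual correspondences, and the "secondary" reflexivity and symmetry the abstract mentions are the properties that seriality, transitivity and euclideanness jointly induce on the set of accessible worlds:

```latex
% Standard axiomatization of KD45 with frame correspondences
\begin{align*}
\text{(K)} &\quad \Box(\varphi \to \psi) \to (\Box\varphi \to \Box\psi)
  && \text{distribution (all normal logics)}\\
\text{(D)} &\quad \Box\varphi \to \Diamond\varphi
  && \text{seriality of the accessibility relation}\\
\text{(4)} &\quad \Box\varphi \to \Box\Box\varphi
  && \text{transitivity}\\
\text{(5)} &\quad \Diamond\varphi \to \Box\Diamond\varphi
  && \text{euclideanness}
\end{align*}
```

On serial, transitive, euclidean frames the accessibility relation, while not reflexive or symmetric in general, is reflexive and symmetric when restricted to worlds accessible from any given world, which matches the secondary properties cited.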
The thesis that, owing to the lack of genuine laws, causal explanations in the social sciences are possible only in terms of mechanisms has become increasingly popular among social scientists and philosophers. This article examines whether explanation by mechanisms is necessarily explanation without laws or whether, on the contrary, it can involve laws of some kind. To this end it is argued, first, that mechanisms are not always the antonym of laws insofar as they express propensities and hence tendencies; second, that these tendencies are causal and entail capacities and dispositions; third, that the capacities and dispositions involved in human behaviour must face the problem of free will; and finally, reasons are offered in favour of considering causal tendencies authentic laws. The ultimate aim of the article is to show that the explanation of social facts can involve well-established laws in its explanans, although these are not universal laws.
Scientific realism (Putnam 1975; Psillos 1999) and relative realism (Mizrahi 2013) claim that successful scientific theories are approximately true and comparatively true, respectively. A theory is approximately true if and only if it is close to the truth. A theory is comparatively true if and only if it is closer to the truth than its competitors are. I argue that relative realism is more skeptical about the claims of science than it initially appears to be and that it can explain neither the success nor the failure of science. Hence, it is not a promising competitor to scientific realism.
Truth was excluded from the requirements of science after the so-called "foundational crisis" of the exact sciences (mathematics and physics) that occurred between the end of the 19th and the beginning of the 20th century. A formalistic outlook, from which meaning and truth were excluded, imposed itself on the philosophy of science. This approach, however, was seriously weakened after the discovery of the "internal limitations of formalisms" entailed by Gödel's theorems, while almost at the same time Tarski advocated the legitimacy of meaning and truth for formalized languages, calling this part of metatheoretical investigation "semantics". This terminology has remained standard, especially in mathematical logic. One must note, however, that semantics properly concerns the level of meaning, whereas truth implies, in addition, the reference of the language to some extralinguistic domain of entities. This domain is not accessible by means of logical, linguistic or conceptual analysis, but can be attained through "operations" of some concrete kind, whose nature also determines the ontological status of the referents. Operations belong to praxis, and this is why the notion of truth is more properly attributed to "pragmatics", understood not in Morris' original sense but in a sense closer to pragmatism, in which the performance of actions is considered essential for providing criteria of truth.