Rationality, markets, and morals: RMM Volume 2 (2011)
Recent Submissions
Item: Marc Fleurbaey: Fairness, Responsibility, and Welfare (2011). Tutic, Andreas.

Item: Low Assumptions, High Dimensions (2011). Wasserman, Larry.
These days, statisticians often deal with complex, high-dimensional datasets. Researchers in statistics and machine learning have responded by creating many new methods for analyzing high-dimensional data. However, many of these new methods depend on strong assumptions. The challenge of bringing low-assumption inference to high-dimensional settings requires new ways to think about the foundations of statistics. Traditional foundational concerns, such as the Bayesian versus frequentist debate, have become less important. (An illustrative sketch of one low-assumption method appears after this list.)

Item: Do Self-Committers Mind Other-Imposed Commitment? An Experiment on Weak Paternalism (2011). Uhl, Matthias.
This experiment investigates whether protégés judge paternalism by its consequences or on principled grounds. Subjects receive a payment for showing up early the next morning; the later they show up, the less they get. Protégés can self-commit to a specific show-up time or maintain spontaneity. By making this binary choice, protégés express their preference for liberty. Simultaneously, their patron is either paternalistic or liberal by making this choice on their behalf. We investigate whether self-committers’ willingness to restrict their own freedom predicts their attitudes toward paternalism.

Item: Foundational Issues in Statistical Modeling: Statistical Model Specification and Validation (2011). Spanos, Aris.
Statistical model specification and validation raise crucial foundational problems whose pertinent resolution holds the key to learning from data by securing the reliability of frequentist inference. The paper questions the judiciousness of several current practices, including the theory-driven approach and Akaike-type model selection procedures, arguing that they often lead to unreliable inferences. This is primarily because goodness-of-fit/prediction measures and other substantive and pragmatic criteria are of questionable value when the estimated model is statistically misspecified. Foisting one's favorite model on the data often yields estimated models which are both statistically and substantively misspecified, with no way to delineate the two sources of error and apportion blame. The paper argues that the error-statistical approach can address this Duhemian ambiguity by distinguishing between statistical and substantive premises and viewing empirical modeling in a piecemeal way, with a view to delineating the various issues more effectively. It is also argued that Hendry's general-to-specific procedure does a much better job in model selection than the theory-driven and Akaike-type procedures, primarily because of its error-statistical underpinnings. (A schematic misspecification check appears after this list.)

Item: You May Believe You Are a Bayesian But You Are Probably Wrong (2011). Senn, Stephen.
An elementary sketch of some issues in statistical inference, and in particular of the central role of likelihood, is given. This is followed by brief outlines of what George Barnard considered the four great systems of statistical inference. These can be thought of in terms of the four combinations of two factors at two levels. The first is fundamental purpose (decision or inference) and the second probability argument (direct or inverse). Of these four systems, the ‘fully Bayesian’ approach of decision-making using inverse probability, particularly associated with Ramsey, de Finetti, Savage and Lindley, has some claims to be the most impressive.
It is claimed, however, and illustrated by example, that this approach seems to be impossible to follow. It is speculated that there may be some advantage to the practising statistician in following George Barnard's advice of being familiar with all four systems. (A reconstruction of the 2x2 scheme appears after this list.)

Item: Sen’s Apples: Commitment, Agent Relativity and Social Norms (2011). Peacock, Mark S.
This paper examines Amartya Sen’s notion of ‘commitment’ in light of Geoffrey Brennan’s recent discussion thereof. Its aim is to elucidate one type of commitment which consists in following social norms. To this end, I discuss Sen’s ‘apples’ example from his ‘Rational fools’ essay (section 2). In section 3, I draw some implications of commitments in Sen’s work for the concept of ‘agent relativity’. Section 4 discusses the distinctiveness of Sen’s conception of human beings in their supposed ability to bind themselves to following social norms at the expense of their own benefit.

Item: Statistical Science and Philosophy of Science: Where Do/Should They Meet in 2011 (and Beyond)? (2011). Mayo, Deborah G.

Item: Empirical Economic Model Discovery and Theory Evaluation (2011). Hendry, David F.
Economies are so high-dimensional and non-constant that many features of models cannot be derived by prior reasoning; model building intrinsically involves empirical discovery and requires theory evaluation. Despite important differences, discovery and evaluation in economics are similar to those in science. Fitting a pre-specified equation limits discovery, but automatic methods can formulate much more general initial models with many possible variables, long lag lengths and non-linearities, allowing for outliers, data contamination, and parameter shifts; then select congruent parsimonious-encompassing models even with more candidate variables than observations, while embedding the theory; and finally rigorously evaluate the selected models to ascertain their viability. (A toy general-to-specific selection sketch appears after this list.)

Item: Induction and Deduction in Bayesian Data Analysis (2011). Gelman, Andrew.
The classical or frequentist approach to statistics (in which inference is centered on significance testing) is associated with a philosophy in which science is deductive and follows Popper's doctrine of falsification. In contrast, Bayesian inference is commonly associated with inductive reasoning and the idea that a model can be dethroned by a competing model but can never be directly falsified by a significance test. The purpose of this article is to break these associations, which I think are incorrect and have been detrimental to statistical practice, in that they have steered falsificationists away from the very useful tools of Bayesian inference and have discouraged Bayesians from checking the fit of their models. From my experience using and developing Bayesian methods in social and environmental science, I have found model checking and falsification to be central to the modeling process. (A minimal posterior predictive check appears after this list.)

Item: Automatic Actions: Challenging Causalism (2011). Di Nucci, Ezio.
I argue that so-called automatic actions—routine performances that we successfully and effortlessly complete without thinking, such as turning a door handle, downshifting to 4th gear, or lighting up a cigarette—pose a challenge to causalism, because they do not appear to be preceded by the psychological states which, according to the causal theory of action, are necessary for intentional action.
I argue that causalism cannot prove that agents are simply unaware of the relevant psychological states when they act automatically, because these content-specific psychological states aren’t always necessary to make coherent rational sense of the agent’s behaviour. I then dispute other possible grounds for the attribution of these psychological states, such as agents’ own self-attributions. In the final section I introduce an alternative to causalism, building on Frankfurt’s concept of guidance.

Item: The Limits of Market Efficiency (2011). Buchanan, James M.
The framework rules within which either market or political activity takes place must be classified in the non-partitionability set under the Samuelson taxonomy. Therefore, there is nothing comparable to the profit-loss dynamic of the market that will ensure any continuing thrust toward more desirable rules. ‘Public choice’ has at least partially succeeded in getting economists to remove the romantic blinders toward politics and politicians as providers of non-partitionable goods. It is equally necessary to be hard-nosed in evaluating markets as providers of non-partitionable rules.
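For Wasserman's "Low Assumptions, High Dimensions", here is a minimal sketch of one method in that spirit: split conformal prediction, which wraps finite-sample-valid prediction intervals around any black-box high-dimensional regressor while assuming only exchangeability of the data. The ridge fit, the simulated data, and all names below are this sketch's own assumptions, not material from the paper.

```python
# Split conformal prediction: distribution-free prediction intervals that
# are valid under only an exchangeability assumption, around any regressor.
# Illustrative sketch, not code from Wasserman's paper.
import numpy as np

def split_conformal_interval(fit, X, y, X_new, alpha=0.1, seed=None):
    """Return (lower, upper) prediction intervals for the rows of X_new."""
    rng = np.random.default_rng(seed)
    n = len(y)
    idx = rng.permutation(n)
    train, calib = idx[: n // 2], idx[n // 2 :]
    predict = fit(X[train], y[train])                 # fit on one half
    resid = np.abs(y[calib] - predict(X[calib]))      # calibrate on the other
    k = int(np.ceil((1 - alpha) * (len(calib) + 1)))  # conformal quantile rank
    q = np.sort(resid)[min(k, len(calib)) - 1]
    mu = predict(X_new)
    return mu - q, mu + q

# Example black-box fitter: ridge regression, sensible when p is large.
def ridge_fit(X, y, lam=1.0):
    p = X.shape[1]
    beta = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
    return lambda Xn: Xn @ beta

# Simulated high-dimensional data: n = 200 observations, p = 500 features.
rng = np.random.default_rng(0)
n, p = 200, 500
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:5] = 2.0     # sparse truth, unknown to the method
y = X @ beta + rng.standard_normal(n)
lo, hi = split_conformal_interval(ridge_fit, X[:-20], y[:-20], X[-20:])
print(f"coverage on held-out rows: {np.mean((y[-20:] >= lo) & (y[-20:] <= hi)):.2f}")
```

The design point is the "low assumptions" part: the interval's coverage guarantee does not depend on the ridge model being correct; a bad model only makes the intervals wider.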
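For Spanos's paper, a schematic illustration of why goodness of fit alone is not enough: fit a deliberately misspecified linear model, then probe its statistical premises with simple residual-based checks before trusting any inference from it. This is a toy sketch of the error-statistical idea, not Spanos's own battery of misspecification tests; the RESET-style check and the data-generating process are assumptions of this example.

```python
# Misspecification checks: probe the statistical premises (linearity,
# error independence) before comparing models on fit. Schematic only.
import numpy as np

def ols(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

rng = np.random.default_rng(1)
n = 300
x = rng.uniform(-2, 2, n)
y = 1.0 + 2.0 * x + 0.8 * x**2 + rng.standard_normal(n)  # truth is quadratic

X = np.column_stack([np.ones(n), x])   # estimated model: misspecified linear fit
beta = ols(X, y)
resid = y - X @ beta

# RESET-style linearity check: does the squared fitted value add power?
fitted = X @ beta
X_aug = np.column_stack([X, fitted**2])
resid_aug = y - X_aug @ ols(X_aug, y)
ss0, ss1 = resid @ resid, resid_aug @ resid_aug
F = (ss0 - ss1) / (ss1 / (n - X_aug.shape[1]))   # one restriction
print(f"RESET-style F statistic: {F:.1f} (large value rejects linearity)")

# Lag-1 residual autocorrelation (should be near 0 for independent errors).
r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
print(f"lag-1 residual autocorrelation: {r1:.2f}")
```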
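Senn describes Barnard's four systems as the cross of two factors at two levels: fundamental purpose (inference vs. decision) and probability argument (direct vs. inverse). One common reconstruction of which named system occupies which cell is laid out below; the labels are offered as a reading aid, not quoted from the paper.

                  direct probability      inverse probability
  inference       Fisherian               Jeffreys
  decision        Neyman-Pearson          fully Bayesian (Ramsey, de Finetti,
                                          Savage, Lindley)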
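For Hendry's abstract, a toy version of general-to-specific selection: begin with a deliberately general regression containing many candidate variables, then repeatedly delete the least significant candidate at a tight significance level while always retaining a theory-implied variable. This is a bare-bones sketch of the idea, not the Autometrics algorithm; the data, threshold, and variable roles are invented for illustration.

```python
# General-to-specific model selection in miniature: start general, then
# delete the least significant candidate one at a time, keeping the
# theory-implied variable throughout. Toy sketch, not Autometrics.
import numpy as np

def ols_tstats(X, y):
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (n - k)
    se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))
    return beta, beta / se

rng = np.random.default_rng(3)
n, p = 250, 30
X = rng.standard_normal((n, p))
y = 1.5 * X[:, 0] + 0.8 * X[:, 1] + rng.standard_normal(n)  # 2 relevant variables

keep = {0}                    # variable 0 is "theory-implied": never deleted
active = list(range(p))
t_crit = 2.58                 # tight level to limit spurious retention
while True:
    beta, t = ols_tstats(X[:, active], y)
    candidates = [i for i, v in enumerate(active) if v not in keep]
    if not candidates:
        break
    worst = min(candidates, key=lambda i: abs(t[i]))
    if abs(t[worst]) >= t_crit:
        break                 # every remaining candidate is significant
    del active[worst]         # drop the weakest variable and refit
print("selected variables:", active)
```

Real general-to-specific selection additionally requires the model to stay congruent (pass misspecification tests) at each deletion and explores multiple deletion paths; the single backward path above is only the skeleton.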
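For Gelman's article, a minimal posterior predictive check in the falsificationist spirit he describes: fit a normal model to skewed data, simulate replicated datasets from the posterior, and ask whether the observed maximum is the kind of value the model reproduces. The model, data, and test statistic are this sketch's assumptions, not an example from the article.

```python
# Posterior predictive check: Bayesian model criticism by simulation.
# Generic sketch, not an example from Gelman's article.
import numpy as np

rng = np.random.default_rng(2)
y = rng.exponential(scale=1.0, size=100)   # skewed data; normal model is wrong
n = len(y)

# Posterior for (mu, sigma^2) under a flat prior on (mu, log sigma).
draws = 1000
sigma2 = (n - 1) * np.var(y, ddof=1) / rng.chisquare(n - 1, size=draws)
mu = rng.normal(np.mean(y), np.sqrt(sigma2 / n))

# Replicated datasets and a discrepancy statistic sensitive to skewness.
T_obs = np.max(y)
T_rep = np.array([
    np.max(rng.normal(m, np.sqrt(s2), size=n)) for m, s2 in zip(mu, sigma2)
])
p_value = np.mean(T_rep >= T_obs)
print(f"posterior predictive p-value for max(y): {p_value:.3f}")
# A p-value near 0 or 1 signals that the model fails to reproduce this
# feature of the data, i.e. the normal model is (statistically) falsified.
```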