Shaping the AI Revolution In Philosophy (guest post) – Daily Nous

Posted: July 12, 2021 at 7:52 am

In the following guest post*, Caleb Ontiveros, a philosophy graduate student turned software engineer and writer, and Graham Clay, a recent philosophy PhD from the University of Notre Dame, discuss the possibility of AI providing a suite of tools that could revolutionize philosophy, and why it's important that philosophers help develop and theorize about the role of AI in philosophy.

Philosophy will be radically different in future centuries, perhaps even decades. The transformative power of artificial intelligence is coming to philosophy, and the only question is whether philosophers will harness it. In fact, we contend that artificial intelligence (AI) tools will have a transformative impact on the field comparable to the advent of writing.

The impact of the written word on philosophy cannot be overstated. Imagine waking up and learning that, due to a freak cosmic accident, all books, journal articles, notebooks, blogs, and the like had vanished or been destroyed. In such a scenario, philosophy would be made seriously worse off. Present philosophers would instantly suffer a severe loss, and future philosophers would be impoverished as a consequence.

The introduction of writing freed philosophers from being solely dependent on their own memory and oral methods of recollection. It enabled philosophers to interact with other thinkers across time, diminishing the contingent influences of time and space and thereby improving the transmission of ideas. Philosophers could even learn about others' approaches to philosophy, which in turn aided them in their own methodology.

It is our position that AI will provide a suite of tools that can play a similar role for philosophy. Realizing that potential, however, will likely require that philosophers help develop and theorize about these tools.

What is AI? There are, roughly, two kinds of AI: machine reasoning and machine learning systems. Machine reasoning systems are composed of knowledge bases of sentences, inference rules, and operations on them. Machine learning systems work by ingesting a large amount of data and learning to make accurate predictions from it. GPT-3 is an example of this (see discussion here). One can think of the first kind of system, machine reasoning AI, as a deductive and symbolic reasoner, and the second, machine learning AI, as a system that learns and implements statistical rules about the relationships between entities like words.
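To make the distinction concrete, here is a minimal sketch in Python; the facts, rules, and corpus are invented toy examples, not anything drawn from existing systems. A tiny knowledge base with forward chaining stands in for machine reasoning, and a next-word frequency table stands in for the statistical approach of machine learning systems like GPT-3.

    # Machine reasoning, in miniature: a knowledge base of sentences plus a
    # forward-chaining procedure that applies "if premise, then conclusion" rules.
    facts = {"Socrates is a man"}
    rules = [("Socrates is a man", "Socrates is mortal")]

    def forward_chain(facts, rules):
        """Apply the rules repeatedly until no new sentences are derived."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premise, conclusion in rules:
                if premise in derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return derived

    print(forward_chain(facts, rules))
    # {'Socrates is a man', 'Socrates is mortal'}

    # Machine learning, in miniature: count which word tends to follow which,
    # a crude stand-in for the statistical relationships GPT-3 learns at scale.
    from collections import Counter, defaultdict

    corpus = "all men are mortal and all philosophers are men".split()
    next_word = defaultdict(Counter)
    for current, following in zip(corpus, corpus[1:]):
        next_word[current][following] += 1

    print(next_word["are"].most_common(1))  # the word most often following "are"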

Like many other technologies, these sorts of systems are expected to continue to progress over the coming decades and are likely to be exceptionally powerful by the end of the century. We have seen exceptional progress so far, the cost of computing power continues to fall, and numerous experts have relatively short timelines. How fast the progress will be is clouded in uncertainty; technological forecasting is a non-trivial affair. But it is not controversial that there will likely be significant progress soon.

This being the case, it will be technologically possible to create a number of AI tools that would each transform philosophy; we describe several of them below.

It is unlikely that these systems will replace human philosophers any time soon, but philosophers who effectively use these tools would significantly increase the quality and import of their work.

One can envision Systematizing systems that encode the ideas expressed by the Stanford Encyclopedia of Philosophy into propositions and the arguments they compose. This would enable philosophers to see connections between various positions and arguments, thereby reducing the siloing that has become more common in the field in recent years. Similar tools could parse and formalize journal papers from the 20th century that are seldom engaged with, mining them for lost insights relevant to current concerns. Simulation tools would generate new insights, as when one asks of the tool, "What would Hume think about the Newcomb problem?" Imagine a tool like GPT-3, but one that is better at constructing logical arguments and engaging in discussions. Relatedly, one can envisage a Reasoning system that encodes the knowledge of the philosophical community as a kind of super agent that others can interact with, extend, and learn from, like Alexa on steroids.

Despite the great promise of AI, we maintain that unless philosophers theorize about and help develop philosophy-specific AI, it is likely that AI will not be as philosophically useful as it could be.

Let us make this concrete with a specific philosophical tool: Systematizing. A Systematizing tool would encode philosophical propositions and relations between them, such as support, conditional likelihood, or entailment. Philosophers may need to work with computer scientists to formulate the propositions and score the relations that the Systematizing tool generates, as well as learn how to use the system in a way that produces the most philosophically valuable relations. It is likely that Systematizing tools and software will be designed with commercial purposes in mind, and so they will not immediately port over to philosophical use cases.
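One way to picture the encoding is the following Python sketch; the schema, proposition texts, relation kinds, and scores are our illustrative assumptions, not a specification of any actual tool.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Proposition:
        id: str
        text: str

    @dataclass(frozen=True)
    class Relation:
        source: str    # id of the proposition doing the supporting or entailing
        target: str    # id of the proposition being supported or entailed
        kind: str      # e.g. "support", "entailment", "conditional_likelihood"
        score: float   # strength, scored by philosophers and/or the system

    propositions = [
        Proposition("p1", "Whatever begins to exist has a cause."),
        Proposition("p2", "The universe began to exist."),
        Proposition("p3", "The universe has a cause."),
    ]

    relations = [
        Relation("p1", "p3", "support", 0.9),
        Relation("p2", "p3", "support", 0.9),
    ]

    # A simple query a philosopher might run: what bears on p3, and how strongly?
    for r in relations:
        if r.target == "p3":
            print(r.source, r.kind, r.score)

Even a toy representation like this makes connections between positions easy to query; the hard, philosophically loaded work lies in formulating the propositions and scoring the relations well.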

Extending this, we can imagine a human-AI hybrid version of this tool that takes submissions in a manner that journals do, but instead of submitting a paper, philosophers would submit its core content: perhaps a set of propositions, the relations between them, and the relevant logic or set of inference rules. The ideal construction of this tool clearly needs to be thought through. If it were done well, it could power a revolutionary Reasoning system.
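As a rough illustration, a submission's core content might be packaged in something like the following form; the field names and the sample argument are hypothetical, not a proposed standard.

    import json

    submission = {
        "author": "A. Philosopher",
        "logic": "classical propositional",
        "propositions": {
            "p1": "If determinism is true, then no one acts freely.",
            "p2": "Determinism is true.",
            "p3": "No one acts freely.",
        },
        "inference_rules": ["modus ponens"],
        "relations": [
            {"premises": ["p1", "p2"], "conclusion": "p3", "kind": "entailment"},
        ],
    }

    print(json.dumps(submission, indent=2))  # what the hybrid tool would ingest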

However the AI revolution turns out, philosophical inquiry will be radically different in the future. But the details and epistemic values can be shaped by contributions now. We're happy to see some of this work, and we hope to see more in the future.
