Isaac Newton apocryphally began his work on gravity after an apple fell on his head. Much experimentation and data analysis later, he realised there was a fundamental relationship between force, mass and acceleration. He formulated a theory to describe that relationship, his second law, which could be expressed as an equation, F = ma, and used it to predict the behaviour of objects other than apples. His predictions turned out to be right (if not always precise enough for those who came later).
Contrast how science is increasingly done today. Facebook's machine-learning tools predict your preferences better than any psychologist. AlphaFold, a program built by DeepMind, has produced the most accurate predictions yet of protein structures based on the amino acids they contain. Both are completely silent on why they work: why you prefer this or that information; why this sequence generates that structure.
You can't lift a curtain and peer into the mechanism. They offer up no explanation, no set of rules for converting this into that; no theory, in a word. They just work, and do so well. We witness the social effects of Facebook's predictions daily. AlphaFold has yet to make its impact felt, but many are convinced it will change medicine.
Somewhere between Newton and Mark Zuckerberg, theory took a back seat. In 2008, Chris Anderson, the then editor-in-chief of Wired magazine, predicted its demise. So much data had accumulated, he argued, and computers were already so much better than us at finding relationships within it, that our theories were being exposed for what they were: oversimplifications of reality. Soon, the old scientific method of hypothesise, predict, test would be relegated to the dustbin of history. We'd stop looking for the causes of things and be satisfied with correlations.
With the benefit of hindsight, we can say that what Anderson saw is true (he wasn't alone). The complexity that this wealth of data has revealed to us cannot be captured by theory as traditionally understood. "We have leapfrogged over our ability to even write the theories that are going to be useful for description," says computational neuroscientist Peter Dayan, director of the Max Planck Institute for Biological Cybernetics in Tübingen, Germany. "We don't even know what they would look like."
But Anderson's prediction of the end of theory looks to have been premature, or maybe his thesis was itself an oversimplification. There are several reasons why theory refuses to die, despite the successes of such theory-free prediction engines as Facebook and AlphaFold. All are illuminating, because they force us to ask: what's the best way to acquire knowledge, and where does science go from here?
The first reason is that we've realised that artificial intelligences (AIs), particularly a form of machine learning called neural networks, which learn from data without having to be fed explicit instructions, are themselves fallible. Think of the prejudice that has been documented in Google's search engines and Amazon's hiring tools.
The second is that humans turn out to be deeply uncomfortable with theory-free science. We don't like dealing with a black box: we want to know why.
And third, there may still be plenty of theory of the traditional kind (that is, graspable by humans) that usefully explains much but has yet to be uncovered.
So theory isn't dead yet, but it is changing, perhaps beyond recognition. "The theories that make sense when you have huge amounts of data look quite different from those that make sense when you have small amounts," says Tom Griffiths, a psychologist at Princeton University.
Griffiths has been using neural nets to help him improve on existing theories in his domain, which is human decision-making. A popular theory of how people make decisions when economic risk is involved is prospect theory, which was formulated by psychologists Daniel Kahneman and Amos Tversky in the 1970s (it later won Kahneman a Nobel prize). The idea at its core is that people are sometimes, but not always, rational.
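Prospect theory's departures from pure rationality can be written down compactly. Below is a minimal Python sketch using the parameter estimates Tversky and Kahneman published in 1992 (α ≈ 0.88, λ ≈ 2.25, γ ≈ 0.61); the function names and the example gamble are illustrative, not taken from Griffiths's study.

```python
def value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function: diminishing sensitivity to gains,
    with losses looming about 2.25 times larger than equivalent gains."""
    return x**alpha if x >= 0 else -lam * (-x)**alpha

def weight(p, gamma=0.61):
    """Probability-weighting function: small probabilities are overweighted,
    large ones underweighted."""
    return p**gamma / (p**gamma + (1 - p)**gamma)**(1 / gamma)

def gamble_value(p, win, loss):
    """Subjective value of a gamble: win `win` with probability p, else lose `loss`."""
    return weight(p) * value(win) + weight(1 - p) * value(-loss)
```

On these numbers, a 50/50 bet to win or lose £100 has negative subjective value, which is why most people refuse it: the possible loss is weighted far more heavily than the equal possible gain.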
In Science last June, Griffiths's group described how they trained a neural net on a vast dataset of decisions people took in 10,000 risky-choice scenarios, then compared how accurately it predicted further decisions with the predictions of prospect theory. They found that prospect theory did pretty well, but the neural net showed its worth in highlighting where the theory broke down; that is, where its predictions failed.
These counter-examples were highly informative, Griffiths says, because they revealed more of the complexity that exists in real life. For example, humans are constantly weighing up probabilities based on incoming information, as prospect theory describes. But when there are too many competing probabilities for the brain to compute, they might switch to a different strategy, being guided by a rule of thumb, say, and a stockbroker's rule of thumb might not be the same as that of a teenage bitcoin trader, since it is drawn from different experiences.
"We're basically using the machine learning system to identify those cases where we're seeing something that's inconsistent with our theory," Griffiths says. The bigger the dataset, the more inconsistencies the AI learns. The end result is not a theory in the traditional sense of a precise claim about how people make decisions, but a set of claims that is subject to certain constraints. A way to picture it might be as a branching tree of "if… then"-type rules, which is difficult to describe mathematically, let alone in words.
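The logic of that method can be sketched in a few lines. The toy below is entirely synthetic (it is not the study's dataset, and a nearest-neighbour estimator stands in for the neural net): simulated subjects follow a prospect-theory-style rule except at small stakes, where they fall back on guessing, and the data-driven model is used to flag the trials where behaviour is inconsistent with the theory.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical task: choose a gamble ("win x with probability p") or a sure amount s.
n = 5000
p = rng.uniform(0.05, 0.95, n)
x = rng.uniform(1.0, 20.0, n)
s = rng.uniform(1.0, 20.0, n)

def theory_p_gamble(p, x, s, alpha=0.88, tau=3.0):
    """A prospect-theory-style choice rule: softmax over subjective values."""
    return 1.0 / (1.0 + np.exp(-(p * x**alpha - s**alpha) / tau))

# Simulated behaviour: follows the theory except at small stakes,
# where subjects switch to a rule of thumb (here: guess at random).
small = (x < 8) & (s < 8)
true_p = np.where(small, 0.5, theory_p_gamble(p, x, s))
chose_gamble = rng.uniform(size=n) < true_p

# Flexible data-driven model: a nearest-neighbour estimate of the choice rate,
# standing in for the neural network in the study.
features = np.column_stack([p, x / 20.0, s / 20.0])
def knn_rate(i, k=50):
    d = np.sum((features - features[i])**2, axis=1)
    return chose_gamble[np.argsort(d)[1:k + 1]].mean()

idx = rng.choice(n, 400, replace=False)          # subsample for speed
model = np.array([knn_rate(i) for i in idx])
theory = theory_p_gamble(p[idx], x[idx], s[idx])

# Flag trials where the flexible model disagrees sharply with the theory:
# these concentrate in the small-stakes region where the theory breaks down.
flagged = np.abs(model - theory) > 0.25
print("share of flagged trials that are small-stakes:", small[idx][flagged].mean())
```

The flagged trials are the "counter-examples" of the paragraph above: places where the data say something the theory does not, and therefore where theorists should look next.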
What the Princeton psychologists are discovering is still just about explainable, by extension from existing theories. But as they reveal more and more complexity, it will become less so, the logical culmination of that process being the theory-free predictive engines embodied by Facebook or AlphaFold.
Some scientists are comfortable with that, even eager for it. When voice-recognition software pioneer Frederick Jelinek said, "Every time I fire a linguist, the performance of the speech recogniser goes up," he meant that theory was holding back progress, and that was in the 1980s.
Or take protein structures. A protein's function is largely determined by its structure, so if you want to design a drug that blocks or enhances a given protein's action, you need to know its structure. AlphaFold was trained on structures that were derived experimentally, using techniques such as X-ray crystallography, and at the moment its predictions are considered more reliable for proteins where there is some experimental data available than for those where there is none. But its reliability is improving all the time, says Janet Thornton, former director of the EMBL European Bioinformatics Institute (EMBL-EBI) near Cambridge, and it isn't the lack of a theory that will stop drug designers using it. "What AlphaFold does is also discovery," she says, "and it will only improve our understanding of life and therapeutics."
Others are distinctly less comfortable with where science is heading. Critics point out, for example, that neural nets can throw up spurious correlations, especially if the datasets they are trained on are small. And all datasets are biased, because scientists don't collect data evenly or neutrally, but always with certain hypotheses or assumptions in mind, assumptions that worked their way damagingly into Google's and Amazon's AIs. As philosopher of science Sabina Leonelli of the University of Exeter explains: "The data landscape we're using is incredibly skewed."
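The spurious-correlation worry is easy to demonstrate. In the toy sketch below (all numbers are illustrative), with only 20 samples and 200 candidate variables of pure noise, some variable will almost always correlate strongly with a randomly generated outcome, despite there being no relationship at all to find.

```python
import numpy as np

rng = np.random.default_rng(0)

n_samples, n_features = 20, 200                 # small dataset, many measured variables
X = rng.normal(size=(n_samples, n_features))    # pure-noise "measurements"
y = rng.normal(size=n_samples)                  # pure-noise "outcome"

# Pearson correlation of each feature with the outcome.
Xc = (X - X.mean(axis=0)) / X.std(axis=0)
yc = (y - y.mean()) / y.std()
corrs = Xc.T @ yc / n_samples

best = np.abs(corrs).max()
print(f"strongest (entirely spurious) correlation: r = {best:.2f}")
```

Repeating the same search with far more samples yields much weaker maxima, which is why small datasets are the danger zone the critics describe.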
But while these problems certainly exist, Dayan doesn't think they're insurmountable. He points out that humans are biased too and, unlike AIs, in ways that are very hard to interrogate or correct. Ultimately, if a theory produces less reliable predictions than an AI, it will be hard to argue that the machine is the more biased of the two.
A tougher obstacle to the new science may be our human need to explain the world, to talk in terms of cause and effect. In 2019, neuroscientists Bingni Brunton and Michael Beyeler of the University of Washington, Seattle, wrote that this need for interpretability may have prevented scientists from making novel insights about the brain, of the kind that only emerge from large datasets. But they also sympathised. If those insights are to be translated into useful things such as drugs and devices, they wrote, "it is imperative that computational models yield insights that are explainable to, and trusted by, clinicians, end-users and industry."
"Explainable AI", which addresses how to bridge the interpretability gap, has become a hot topic. But that gap is only set to widen, and we might instead be faced with a trade-off: how much predictability are we willing to give up for interpretability?
Sumit Chopra, an AI scientist who thinks about the application of machine learning to healthcare at New York University, gives the example of an MRI image. It takes a lot of raw data, and hence scanning time, to produce such an image, which isn't necessarily the best use of that data if your goal is to accurately detect, say, cancer. You could train an AI to identify what smaller portion of the raw data is sufficient to produce an accurate diagnosis, as validated by other methods, and indeed Chopra's group has done so. But radiologists and patients remain wedded to the image. "We humans are more comfortable with a 2D image that our eyes can interpret," he says.
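Chopra's point, that far less raw data can suffice, can be illustrated with a toy reconstruction. MRI scanners measure in the spatial-frequency ("k-space") domain; the sketch below (a synthetic image and an illustrative sampling fraction, not his group's method) keeps only the central 25% of k-space samples and still recovers a smooth image almost perfectly.

```python
import numpy as np

# Synthetic 64x64 "anatomy": a smooth Gaussian blob.
N = 64
yy, xx = np.mgrid[0:N, 0:N]
image = np.exp(-((xx - N / 2)**2 + (yy - N / 2)**2) / (2 * 8.0**2))

# Full k-space (what the scanner actually measures), centred with fftshift.
kspace = np.fft.fftshift(np.fft.fft2(image))

# Keep only the central 32x32 block: 25% of the raw samples.
mask = np.zeros((N, N), dtype=bool)
mask[N // 4:3 * N // 4, N // 4:3 * N // 4] = True
undersampled = np.where(mask, kspace, 0)

# Reconstruct the image from the reduced data.
recon = np.real(np.fft.ifft2(np.fft.ifftshift(undersampled)))

rel_error = np.linalg.norm(recon - image) / np.linalg.norm(image)
print(f"kept {mask.mean():.0%} of samples; relative error {rel_error:.4f}")
```

Real anatomy has sharp edges and fine texture that spread energy across k-space, which is exactly why deciding which samples to keep for a given diagnostic task is a learning problem rather than a fixed "keep the centre" rule.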
The final objection to post-theory science is that there is likely to be useful old-style theory (that is, generalisations extracted from discrete examples) that remains to be discovered, and only humans can do that, because it requires intuition. In other words, it requires a kind of instinctive homing in on those properties of the examples that are relevant to the general rule. One reason we consider Newton brilliant is that in order to come up with his second law he had to ignore some data. He had to imagine, for example, that things were falling in a vacuum, free of the interfering effects of air resistance.
In Nature last month, mathematician Christian Stump, of Ruhr University Bochum in Germany, called this intuitive step "the core of the creative process". But the reason he was writing about it was to say that, for the first time, an AI had pulled it off: DeepMind had built a machine-learning program that had prompted mathematicians towards new insights, new generalisations, in the mathematics of knots.
In 2022, therefore, there is almost no stage of the scientific process where AI hasn't left its footprint. And the more we draw it into our quest for knowledge, the more it changes that quest. We'll have to learn to live with that, but we can reassure ourselves about one thing: we're still asking the questions. As Pablo Picasso put it in the 1960s, "computers are useless. They can only give you answers."
Are we witnessing the dawn of post-theory science? - The Guardian