Fast-paced advancements in the field of artificial intelligence, or AI, are proving the technology to be an indispensable asset. In the national security field, experts are charting a course for AI's impact on our collective defense strategy.
Paulo Shakarian is at the forefront of this critical work, using his expertise in symbolic AI and neuro-symbolic systems, advanced forms of AI technology, to meet the sophisticated needs of national security organizations.
Shakarian, an associate professor of computer science in the School of Computing and Augmented Intelligence, part of the Ira A. Fulton Schools of Engineering at Arizona State University, has been invited to attend AI Forward, a series of workshops hosted by the U.S. Defense Advanced Research Projects Agency, or DARPA.
The event includes two workshops: a virtual meeting that took place earlier this summer and an in-person event in Boston from July 31 to Aug. 2.
Shakarian is among 100 attendees working to advance DARPA's initiative to explore new directions for AI research impacting a wide range of defense-related tasks, including autonomous systems, intelligence platforms, military planning, big data analysis and computer vision.
At the Boston workshop, Shakarian will be joined by Nakul Gopalan, an assistant professor of computer science, who was also selected to attend the event to explore how his research in human-robot communication might help achieve DARPA's goals.
In addition to his involvement in AI Forward, Shakarian is preparing to release a new book in September 2023. The book, titled Neuro-symbolic Reasoning and Learning, will explore the past five years of research in neuro-symbolic AI and help readers understand recent advances in the field.
As Shakarian and Gopalan prepared for the workshops, they took a moment to share their research expertise and thoughts on the current landscape of AI.
Explain your research areas. What topics do you focus on?
Paulo Shakarian: My primary focus is symbolic AI and neuro-symbolic systems. To understand them, it's important to talk about what AI looks like today, which is primarily deep learning neural networks, a technology that has been a wonderful revolution over the last decade. Looking at problems specifically relevant to the U.S. Department of Defense, or DoD, these AI technologies were not performing well. There are several challenges, including black box models and their explainability; systems not being inherently modular because they're trained end-to-end; and the enforcement of constraints, such as those needed to avoid collisions and interference when multiple aircraft share the same airspace. With neural networks, there's no inherent way in the system to enforce constraints. Symbolic AI has been around longer than neural networks, but it is not data-driven, while neural networks are data-driven and can learn symbols and repeat them back. Traditionally, symbolic AI's abilities have not been demonstrated anywhere near the learning capacity of a neural network, but all the issues I've mentioned are shortcomings of deep learning that symbolic AI can address. When you get into use cases with significant safety requirements, like defense, aerospace and autonomous driving, there is a desire to leverage a lot of data while accounting for safety constraints, modularity and explainability. The study of neuro-symbolic AI uses a lot of data with those other parameters in mind.
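To make the constraint-enforcement idea concrete, here is a minimal, editorial Python sketch, not drawn from Shakarian's actual systems, in which candidate trajectories from a stand-in "neural planner" are filtered by an explicit symbolic separation rule, so unsafe outputs are rejected no matter what the network proposes. All names, coordinates and the 5 km threshold are hypothetical.

```python
# Illustrative only: a hard symbolic constraint layered on top of a
# stand-in neural planner, in the spirit of neuro-symbolic systems.
from dataclasses import dataclass
from typing import List

MIN_SEPARATION_KM = 5.0  # hypothetical safety constraint


@dataclass
class Waypoint:
    x_km: float
    y_km: float


def neural_planner_propose() -> List[List[Waypoint]]:
    """Stand-in for a learned model proposing candidate trajectories."""
    return [
        [Waypoint(0, 10), Waypoint(10, 10)],  # candidate A: stays well clear
        [Waypoint(0, 0), Waypoint(5, 0)],     # candidate B: cuts close to traffic
    ]


def separation_ok(path: List[Waypoint], traffic: List[Waypoint]) -> bool:
    """Symbolic rule: every waypoint keeps MIN_SEPARATION_KM from other aircraft."""
    return all(
        ((w.x_km - t.x_km) ** 2 + (w.y_km - t.y_km) ** 2) ** 0.5 >= MIN_SEPARATION_KM
        for w in path
        for t in traffic
    )


traffic = [Waypoint(3, 0)]  # another aircraft's position (hypothetical)
safe = [p for p in neural_planner_propose() if separation_ok(p, traffic)]
print(f"{len(safe)} of 2 proposed trajectories satisfy the separation constraint")  # -> 1 of 2
```

However the planner is trained, the rule layer gives a guarantee the network alone cannot: any trajectory violating the separation constraint is discarded before it can be acted on.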
Nakul Gopalan: I focus on the area of language grounding, planning and learning from human users for robotic applications. I attempt to use demonstrations that humans provide to teach AI systems symbolic ideas, like colors, shapes, objects and verbs, and then map language to these symbolic concepts. In that regard, I also develop neuro-symbolic approaches to teaching AI systems. Additionally, I work in the field of robot learning, which involves implementing learning policies to help robots discover how to solve specific tasks. Tasks can range from inserting and fastening bolts in airplane wings to understanding how to model an object like a microwave so a robot can heat food. Developing tools in these large problem areas in machine learning and artificial intelligence can enable robots to solve problems with human users.
Tell me about your research labs. What research are you currently working on?
PS: The main project I've been working on in my lab, Lab V2, is a software package we call PyReason. One of the practical results of the neural network revolution has been really great software like PyTorch and TensorFlow, which streamline a lot of the work of making neural networks. Google and Meta put considerable effort into these pieces of software and made them free to everyone. We've noticed in the neuro-symbolic literature that everyone is reinventing the wheel, in a sense, by creating a new subset of logic for their particular purposes, even though much of this ground is already covered by copious existing literature. In creating PyReason, my collaborators and I wanted to build the best possible logic platform for working with machine learning systems. We have about three or four active grants with it, and people have been downloading it, so it has been our primary work. We wanted to create a very strong piece of software to enable this research so you don't have to keep reimplementing old bits of logic. This way it's all there, it's mature and relatively bug-free.
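As a rough illustration of the kind of logic layer described here, the sketch below hand-rolls a tiny forward-chaining reasoner over facts that a machine learning model might emit. It is deliberately not PyReason's actual API; the fact and rule names are invented, and the point is only to show the sort of rule-application loop that a shared library saves researchers from reimplementing.

```python
# Toy forward-chaining inference over symbolic facts.
# NOT PyReason's API; an illustrative sketch only.

# Facts could come from a neural model's detections.
facts = {("detected", "truck"), ("near", "truck", "depot"), ("detected", "car")}

# Each rule is (body_atoms, head_atom); terms starting with "?" are variables.
rules = [
    ([("detected", "?x"), ("near", "?x", "depot")], ("flag", "?x")),
]


def unify(atom, fact, binding):
    """Try to match one rule atom against one fact, extending the binding."""
    if len(atom) != len(fact):
        return None
    binding = dict(binding)
    for a, f in zip(atom, fact):
        if a.startswith("?"):
            if binding.get(a, f) != f:
                return None
            binding[a] = f
        elif a != f:
            return None
    return binding


def apply_rules(facts, rules):
    """Apply rules repeatedly until no new facts are derived (a fixpoint)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            bindings = [{}]
            for atom in body:
                bindings = [
                    b2
                    for b in bindings
                    for fact in derived
                    if (b2 := unify(atom, fact, b)) is not None
                ]
            for b in bindings:
                new_fact = tuple(b.get(term, term) for term in head)
                if new_fact not in derived:
                    derived.add(new_fact)
                    changed = True
    return derived


print(apply_rules(facts, rules))
```

Running this derives ("flag", "truck") but not ("flag", "car"), since only the truck satisfies both conditions in the rule body; a mature logic library generalizes this loop with richer logics, time steps and uncertainty handling.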
NG: My lab, the Logos Robotics Lab, focuses on teaching robots a human approach to learning and solving tasks. We also work on representations for task solving to understand how robots can model objects so they can accomplish the tasks we need them to, like learning how to operate a microwave by understanding how to open its door and put an object inside. We use machine learning techniques to discover robot behaviors, and we focus on teaching robots tasks from human users with sample-efficient machine learning methods. Our team studies object representations, such as models of microwaves, toasters and pliers, to understand how robots can use them. One concept we work on is tactile sensing, which helps a robot recognize objects by touch and use them to solve tasks. We do all this with a focus on integrating these approaches with human coworker use cases so we can demonstrate the utility of these learning systems in the presence of a person working alongside the robot. Our work touches practical problems in manufacturing and socially relevant problems, such as introducing robots into domains like assisted living and nursing.
What initially drew you to engineering and drove you to pursue work in this field?
PS: I had an interesting journey to get to this point. Right out of high school, I went to the United States Military Academy at West Point, graduated, became a military officer and served in the U.S. Army's 1st Armored Division. I had two combat tours in Iraq, and after my second combat tour, my unit sent me on a three-month temporary assignment to DARPA as an advisor because I had combat experience and a technical degree, a bachelor's degree in computer science. At DARPA, I learned how some of our nation's top scientists were applying AI to solve relevant defense problems and became very interested in both intelligence and autonomy. Having been trained in military intelligence, I had worked in infantry and armor units to understand how intelligence assets were supporting the fight, and I saw that the work being done at DARPA was light-years beyond what I was doing manually. After that, I applied to a special program to go back to graduate school and earned my doctoral degree, focusing on AI. As part of that program, I also taught for a few years at West Point. After completing my military service, I joined the faculty at ASU in 2014.
NG: I have been curious about learning systems related to control and robotic applications since my undergraduate studies. I was impressed by the capability of these systems to adapt to a human user's needs. As for what drew me to engineering, I was always fascinated by math and even competed in a few math competitions in high school. A career in engineering was a way for me to pursue this interest in mathematics through practical applications. A common reason for working in computer science research is its similarity to mathematics: the field can tackle open-ended theoretical problems while producing practical applications of that theoretical research. Our work in the School of Computing and Augmented Intelligence embodies these ideals.
There's so much hysteria and noise in the media about AI. Speaking as professional researchers in this field, are we near any truly useful applications that are going to be game changers for life in various industries?
PS: Yes, I think so. We've already seen what convolutional neural networks did for image recognition and how that has been embedded in everything from phones to security cameras and more. We're going to see a very similar phenomenon with large language models. The models have problems, and the main one is a concept called hallucinations, which means the models give wrong answers or information. We also can't have any strong safety guarantees with large language models if you can't explain where the results came from, which is the same problem with every other neural model. Companies like Google and OpenAI are doing a lot of testing to mitigate these potential issues, but there's no way they could test every possible case. With that said, I expect to see things like the context window, or the amount of data you can put in a prompt, expand with large language models in the next year. That's going to help improve both the training and use of these models. There have been a lot of techniques introduced in the past year that will significantly improve accuracy in everyday use cases, and I think the public will see a very low error rate. Large language models are crucial in generating computer code, and that's likely to be the most game-changing, impactful result. If we can write code faster, we can inherently innovate faster. Large language models are going to help researchers continue to act as engines of innovation, particularly here in the U.S., where these tools are readily available.
NG: Progress in machine learning has been meteoric. We have seen the rise of generative models for language, images, videos and music in the last few years. There are already economic consequences of these models, which we're seeing in industries such as journalism, writing, software engineering, graphic design, law and finance. We may one day see fewer of these kinds of jobs as our efficiency increases with these advancements, but there are still questions about the accuracy and morality of using such technology and its lasting social and economic impacts. There is some nascent understanding of the physical world in these systems, but they are still far from efficient when collaborating with human users. I think this technology will change the way we function in society, just as the introduction of computers changed the type of jobs people aspire toward, but researchers are still pursuing the goal of artificial general intelligence, which is AI that understands the physical world and functions independently in it. We are still far from such a system, although we have developed impressive tools along the way.
Do you think AI's applications in national security will ever get to a point where the public sees this technology in use, such as the autonomous vehicles being tested on roads in and around Phoenix, or do you think it will stay behind the scenes?
PS: When I ran my startup company, I learned that it was important for AI to be embedded in a solution that everyone understands on a daily basis. Even with autonomous vehicles, the only difference is that there's no driver in the driver's seat. The goal is to get these vehicles to behave like normal cars. But the big exception to all of this is ChatGPT, which has turned the world on its head. Even with these technologies, I have a little bit of doubt that our current interface will be the way we interact with these types of AI going forward, and the people at OpenAI agree.
I expect further development to better integrate technology like ChatGPT into a normal workflow. We all have tools we use to get work done, and there are always small costs associated with them. With ChatGPT, there's the cost of flipping to a new window, logging into the program and waiting for it to respond. If you're using it to craft an email that's only a few sentences long, it may not feel worth it, and then you don't think of it as a tool to make an impact as often as you should. If ChatGPT were more integrated into our processes, I think its use would be different. It's such a compelling technology, and I think that's why they were able to release it in this very simple, external chat format.
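As an editorial sketch of what tighter integration could look like, the example below calls a language model from inside a script, the way a mail client plug-in might, so there is no separate window to flip to. It assumes the openai Python SDK (version 1.x) and an API key in the environment; the model name, function name and prompts are placeholders, not a description of any existing product or of OpenAI's plans.

```python
# Illustrative sketch: calling a language model from inside an existing
# workflow instead of switching to a separate chat window.
# Requires the `openai` package; reads OPENAI_API_KEY from the environment.
from openai import OpenAI

client = OpenAI()


def draft_reply(incoming_email: str, instructions: str) -> str:
    """Ask the model for a short draft the user can edit in place."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "Draft a brief, polite reply email."},
            {"role": "user", "content": f"{instructions}\n\nOriginal message:\n{incoming_email}"},
        ],
    )
    return response.choices[0].message.content


# Example usage inside a hypothetical mail workflow:
# print(draft_reply(email_body, "Accept the meeting and propose 2 p.m. Thursday."))
```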
NG: We use a significant amount of technology developed for national security for public purposes, in applications from the internet to GPS devices. As technology becomes more accessible, it continues to be declassified and used in public settings. I expect the same will happen for most such research products developed by DARPA.