The Prometheus League
Breaking News and Updates
Daily Archives: April 17, 2022
Faces created by AI are more trustworthy than real ones – EL PAÍS in English
Posted: April 17, 2022 at 11:39 pm
Artificial intelligence (AI) is now capable of creating plausible images of people, thanks to websites such as This Person Does Not Exist, as well as non-existent animals and even rooms for rent.
The similarity between the synthetic images and reality is so close that it has drawn the attention of research published in Proceedings of the National Academy of Sciences of the United States of America (PNAS), which compares real faces with those created through an AI algorithm known as a generative adversarial network (GAN).
Not only does the research conclude that it was hard to distinguish between the two, it also shows that fictitious faces generate more trust than real ones. Although the difference is slight, at just 7.7%, the faces that were considered least trustworthy were real, while three of the four believed to be most trustworthy were fictitious. Sophie Nightingale, professor of psychology at Lancaster University and co-author of the study, says the result wasn't something the team was expecting: "We were quite surprised," she says.
The research was based on three experiments. In the first, the 315 participants asked to distinguish between real and synthesized faces scored an accuracy rate of 48.2%. In the second, 219 different participants were given tips on distinguishing synthetic faces from real ones, with feedback on their guesses. The percentage of correct answers in the second experiment was slightly higher than in the first, with an accuracy rate of 59%. In both cases, the faces that were most difficult to classify were those of white people. Nightingale believes this discrepancy arises because the algorithms are trained more heavily on this type of face, but that this will change with time.
In the third experiment, the researchers wanted to go further and gauge the level of trust generated by the faces, checking whether the synthetic ones triggered the same levels of confidence. To do this, 223 participants had to rate the faces from 1 (very untrustworthy) to 7 (very trustworthy). The real faces received an average score of 4.48, versus 4.82 for the synthetic faces. Although the difference amounts to just 7.7%, the authors stress that it is significant. Of the four most trustworthy faces, three were synthetic, while the four that struck participants as least trustworthy were all real.
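The article does not spell out exactly how the 7.7% figure was computed; a quick back-of-the-envelope check, a sketch using only the two averages quoted above, shows the relative gap between the mean ratings lands in that neighborhood:

```python
# Mean trustworthiness ratings reported in the study (1-7 scale).
real_mean = 4.48
synthetic_mean = 4.82

# Relative gap, taking the real-face average as the baseline.
gap = (synthetic_mean - real_mean) / real_mean
print(f"Synthetic faces were rated {gap:.1%} higher on average")  # ~7.6%
```

That works out to roughly 7.6%, consistent with the approximately 7.7% difference the authors describe.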
All the faces analyzed belonged to a sample of 800 images, half real and half fake; the synthetic half was created by GANs. The sample was composed of an equal number of men and women of four different races: African American, Caucasian, East Asian, and South Asian. The researchers matched each real face with a synthetic one of similar age, gender, race, and general appearance. Each participant analyzed 128 faces.
José Miguel Fernández Dols, professor of social psychology at the Autonomous University of Madrid, whose research focuses on facial expression, points out that not all faces have the same expression or posture, and that this can affect judgments. The study takes the importance of facial expression into account and assumes that a smiling face is more likely to be rated trustworthy. However, 65.5% of the real faces and 58.8% of the synthetic faces were smiling, so facial expression alone cannot explain why synthetic faces were rated as more trustworthy.
The researcher also considers the posture in three of the images rated least trustworthy to be critical: pushing the upper part of the face forward with respect to the mouth while protecting the neck is a posture that frequently precedes aggression. "Synthetic faces are becoming more realistic and can easily generate more trust by playing with several factors: the typical nature of the face, features and posture," says Fernández Dols.
In addition to the creation of synthetic faces, Nightingale predicts that other types of artificially created content, such as videos and audio, are on their way to becoming indistinguishable from real content. "It is the democratization of access to this powerful technology that poses the most significant threat," she says. "We also encourage reconsideration of the often laissez-faire approach to the public and unrestricted releasing of code for anyone to incorporate into any application."
To prevent the proliferation of non-consensual intimate images, fraud and disinformation campaigns, the researchers propose guidelines for the creation and distribution of synthesized images to protect the public from deep fakes.
But Sergio Escalera, professor at the University of Barcelona and member of the Computer Vision Center, highlights the positive side of AI-generated faces: "It is interesting to see how faces can be generated to transmit a friendly emotion," he says, suggesting this could be incorporated into the creation of virtual assistants, or used when a specific expression, such as calm, needs to be conveyed to people suffering from mental illness.
According to Escalera, from an ethical point of view it is important to expose AI's potential and, above all, to be very aware of the possible risks that may exist when handing it over to society. He also points out that current legislation is a little behind technological progress and that there is still much to be done.
The new frontiers of AI and robotics, with CMU computer science dean Martial Hebert – GeekWire
Posted: at 11:39 pm
Martial Hebert, dean of the Carnegie Mellon University School of Computer Science, during a recent visit to the GeekWire offices in Seattle. (GeekWire Photo / Todd Bishop)
This week on the GeekWire Podcast, we explore the state of the art in robotics and artificial intelligence with Martial Hebert, dean of the Carnegie Mellon University School of Computer Science in Pittsburgh.
A veteran computer scientist in the field of computer vision, Hebert is the former director of CMU's prestigious Robotics Institute. A native of France, he also had the distinguished honor of being our first in-person podcast guest in two years, visiting the GeekWire offices during his recent trip to the Seattle area.
As you'll hear, our discussion doubled as a preview of a trip that GeekWire's news team will soon be making to Pittsburgh, revisiting the city that hosted our temporary GeekWire HQ2 in 2018, and reporting from the Cascadia Connect Robotics, Automation & AI conference, with coverage supported by Cascadia Capital.
Continue reading for excerpts from the conversation, edited for clarity and length.
Listen below, or subscribe to GeekWire in Apple Podcasts, Google Podcasts, Spotify or wherever you listen.
Why are you here in Seattle? Can you tell us a little bit about what you're doing on this West Coast trip?
Martial Hebert: We collaborate with a number of partners and a number of industry partners. And so this is the purpose of this trip: to establish those collaborations and reinforce those collaborations on various topics around AI and robotics.
It has been four years since GeekWire has been in Pittsburgh. What has changed in computer science and the technology scene?
The self-driving companies Aurora and Argo AI are expanding quickly and successfully. The whole network and ecosystem of robotics companies is also expanding quickly.
But in addition to the expansion, there's also a greater sense of community. This is something that has existed in the Bay Area and in the Boston area for a number of years. What has changed over the past four years is that our community, through organizations like the Pittsburgh Robotics Network, has solidified a lot.
Are self-driving cars still one of the most promising applications of computer vision and autonomous systems?
It's one very visible and potentially very impactful application in terms of people's lives: transportation, transit, and so forth. But there are other applications that are not as visible that can also be quite impactful.
For example, things that revolve around health, and how to use health signals from various sensors, those have profound implications, potentially. If you can have a small change in people's habits, that can make a tremendous change in the overall health of the population, and the economy.
What are some of the cutting-edge advances you're seeing today in robotics and computer vision?
Let me give you an idea of some of the themes that I think are very interesting and promising.
It seems like that first theme of sensing, understanding and predicting human behavior could be applicable in the classroom, in terms of systems to sense how students are interacting and engaging. How much of that is happening in the technology that we're seeing these days?
There are two answers to that:
But what is important is that it's not just AI. It's not just computer vision. It's technology plus the learning sciences. And it's critical that the two are combined. Anything that tries to use this kind of computer vision, for example, in a naive way can actually be disastrous. So it's very important that those disciplines are linked properly.
I can imagine that's true across a variety of initiatives, in a bunch of different fields. In the past, computer scientists, roboticists, people in artificial intelligence might have tried to develop things in a vacuum without people who are subject matter experts. And that's changed.
In fact, that's an evolution that I think is very interesting and necessary. So for example, we have a large activity with [CMU's Heinz College of Information Systems and Public Policy] in understanding how AI can be used in public policy. What you really want is to extract general principles and tools to do AI for public policy, and that, in turn, converts into a curriculum and educational offering at the intersection of the two.
It's important that we make clear the limitations of AI. And I think there's not enough of that, actually. It's important even for those who are not AI experts, who do not necessarily know the technical details of AI, to understand what AI can do, but also, importantly, what it cannot do.
[After we recorded this episode, CMU announced a new cross-disciplinary Responsible AI Initiative involving the Heinz College and the School of Computer Science.]
If you were just getting started in computer vision and robotics, is there a particular challenge or problem that you just couldn't wait to take on in the field?
A major challenge is to have truly comprehensive and principled approaches to characterizing the performance of AI and machine learning systems, evaluating this performance, and predicting this performance.
When you look at a classical engineered system, whether it's a car or an elevator or something else, behind that system there's a couple of hundred years of engineering practice. That means formal methods, formal mathematical methods, formal statistical methods, but also best practices for testing and evaluation. We don't have that for AI and ML, at least not to that extent.
That's basically this idea of going from the components of the system all the way to being able to have a characterization of the entire end-to-end system. So that's a very large challenge.
I thought you were going to say, a robot that could get you a beer while you're watching the Steelers game.
This goes to what I said earlier about the limitations. We still don't have the support to handle those components in terms of characterization. So that's where I'm coming from. I think that's critical to get to the stage where you can have the beer delivery robot be truly reliable and trustworthy.
See Martial Hebert's research page for more details on his work in computer vision and autonomous systems.
Edited and produced by Curt Milton, with music by Daniel L.K. Caldwell.
Russia’s Artificial Intelligence Boom May Not Survive the War – Defense One
Posted: at 11:39 pm
The last year was a busy one for Russia's military and civilian artificial intelligence efforts. Moscow poured money into research and development, and Russia's civil society debated the country's place in the larger AI ecosystem. But Vladimir Putin's invasion of Ukraine in February and the resulting sanctions have brought several of those efforts to a halt, and thrown into question just how many of its AI advancements Russia will be able to salvage and continue.
Ever since Putin extolled the development of robotic combat systems in the new State Armaments Program in 2020, the Russian Ministry of Defense has been hyper-focused on AI. We have learned more about the Russian military's focus on AI in the past year thanks to several public revelations.
But talk of AI has been muted since the Russian invasion of Ukraine. Apart from the widespread use of UAVs for reconnaissance and target acquisition and a single display of a mine-clearing robot, all of which are remote-controlled, there is no overt evidence of Russian AI in C4ISR or decision-making among the Russian military forces, other than a single public deepfake attempt to discredit the Ukrainian government. That does not mean AI isn't used, considering how Ukrainians are now utilizing artificial intelligence in data analysis, but there is a notable absence of larger discussion about this technology in open-source Russian media.
The gap between Russian military aspirations for high-tech warfare of the future and the actual conduct of war today is becoming clear. In January 2021, Colonel-General Vladimir Zarudnitsky, the head of the Military Academy of the Russian Armed Forces General Staff, wrote that the development and use of unmanned and autonomous military systems, the robotization of all spheres of armed conflict, and the development of AI for robotics will have the greatest medium-term effect on the Russian armed forces' ability to meet their future challenges. Other MOD military experts also debated the impact of these emerging technologies on the Russian military and the future balance of forces. Russia continued to upgrade and replace Soviet-made systems, part of the MOD's drive from "digitization" (weapons with modern information technologies for C4ISR) to "intellectualization" (widespread implementation of AI capable of performing human-like creative thinking functions). These and other developments were covered in detail during Russia's Army-2021 conference, with AI as a key element in C4ISR at the tactical and strategic levels.
Meanwhile, Russian military developers and researchers worked on multiple AI-enabled robotics projects, including the Marker concept unmanned ground vehicle and its autonomous operation in groups and with UAVs.
Toward the end of 2021, the state agency responsible for exporting Russian military technology even announced plans to offer unmanned aviation, robotics, and high-tech products with artificial intelligence elements to potential customers this year. The agency emphasized the equipment is geared toward defensive, border protection, and counter-terrorism capabilities.
Since the invasion, things have changed. Russia's defense-industrial complex, especially military high-tech and AI research and development, may be affected by the international sanctions and the cascading effects of Russia being cut off from semiconductor and microprocessor imports.
Throughout 2021, the Russian government was pushing for the adoption of its civilian AI initiatives across the country, such as nationwide hackathons aimed at different age groups, intended to make artificial intelligence familiar at home, work, and school. The government also pushed for the digital transformation of science and higher education, emphasizing the development of AI, big data, and the internet of things.
Russian academic AI R&D efforts drove predictive analytics; development of chatbots that process text and voice messages and resolve user issues without human intervention; and technologies for working with biometric data. Russia's development of facial recognition technology continued apace, with key efforts implemented across Moscow and other large cities. AI was used as a key image recognition and data analysis tool in many medical projects and efforts dealing with large data sets.
Russian government officials noted their country's efforts in promoting the ethics of artificial intelligence, and expressed confidence in Russia's continued participation in this UN-sponsored work. The Russian Council for the Development of the Digital Economy has officially called for a ban on artificial intelligence algorithms that discriminate against people.
Russia's Ministry of Economic Development was asked to "create a mechanism for assessing the humanitarian impact of the consequences of the introduction of such [AI] technologies, including in the provision of state and municipal services to citizens," and to prepare a "road map" for effective regulation, use, and implementation. According to the council, citizens should be able to appeal AI decisions digitally, and such a complaint should only be considered by a human. The council also proposed developing legal mechanisms to compensate for damage caused as a result of AI use.
In October, Russia's leading information and communications companies adopted the National Code of Ethics in the Field of AI; the code was recommended for all participants in the AI market, including government, business, and Russian and foreign developers. Among the basic principles in the code are a human-centered approach to the development of this technology and the safety of working with data.
AI workforce development was spelled out as a key requirement when the government officially unveiled the national AI roadmap in 2019. A 2021 government poll that tried to gauge the level of confidence in the government's AI efforts showed that only about 64 percent of domestic AI specialists were satisfied with the working conditions in Russia.
The survey reflected the microcosm of AI research, development, testing, and evaluation in Russia: lots of government activity and disparate efforts that did not automatically translate into a productive ecosystem conducive to developing AI, some major efforts notwithstanding.
Among the reasons Russia lagged in the development of artificial intelligence technologies in 2021 were a personnel shortage and the weakness of the venture capital market. The civilian developer community also noted the low penetration of Russian products into foreign markets, dependence on imports, slow introduction of products into business and government bodies, and a weak connection between AI theory and practice.
Russia's likely plans to concentrate on these areas in 2022 were revised or put on hold once Russia invaded Ukraine. The sudden pull-out of major IT and high-tech companies from Russia, coupled with a rapid brain drain of Russia's IT workers and the ever-expanding high-tech sanctions against the Russian state, may hobble domestic AI research and development for years to come. While the Russian government is trying to prop up its AI and high-tech industry with subsidies, funding, and legislative support, the impact of these consequences may be too much for the still-growing and evolving Russian AI ecosystem. That does not mean AI research and development will stop; on the contrary, many 2021 trends, efforts, and inventions are being implemented in the Russian economy and society in 2022, and there are domestic high-tech companies and public-private partnerships trying to fill the void left by the departed global IT majors. But the effects of the invasion will be felt in the AI ecosystem for a long time, especially with so many IT workers leaving the country, whether because of the massive impact on the high-tech economy, because they disagree with the war, or both.
One of the most-felt aftereffects of sanctions has been the severing of international cooperation on AI among Russian universities and research institutions, which earlier was enshrined as one of the most important drivers for domestic AI R&D and reinforced by support from the Kremlin. For most high-tech institutions around the world, the impact of civilian destruction across Ukraine by the Russian military greatly outweighs the need to engage Russia on AI. At the same time, much of the Russian military AI R&D took place in a siloed environment, in many cases behind a classified firewall and without significant public-private cooperation, so it's hard to estimate just how sanctions will affect Russian military AI efforts.
While many in Russia now look to China as a substitute for departed global commercial relationships and products, it's not clear whether Beijing could fully replace the software and hardware products and services that have left Russian markets at this point.
Recent events may not stop Russian civilian and military experts from discussing how AI influences the conduct of war and peace, but the practical implementation of these deliberations may become increasingly difficult for a country under global high-tech isolation.
Samuel Bendett is an Adjunct Senior Fellow at the Center for a New American Security and an Adviser at the CNA Corporation.
Top 100 Fastest-Growing AI Teams: Key Players, Exclusive Insights – Forbes
Posted: at 11:39 pm
Top 100 Fastest-Growing AI Teams
To maintain their competitive advantage, enterprises will need to scale their AI capabilities quickly and effectively. Here are the 100 companies with the fastest-growing AI teams, according to this analysis of data from Vectice.
The report studied the 2,500 largest US companies and found that 965 have an AI team of more than 20 employees. Of those, 541 saw positive growth in their AI teams over the past three months.
In terms of the number of new hires, Microsoft led the pack, with nearly 500 AI experts hired over the past three months. This represents an eight-fold lead over the second-fastest company, Kaiser Permanente.
Other companies in the top 10 for number of new hires include Optum, CVS Health, USAA, General Motors, Ford Motor Company, Robert Half, Walmart, and Bloomberg.
Overall, the fastest-growing teams were found in the technology and healthcare industries, followed by retail and finance/insurance. This indicates that enterprises across a variety of industries are seeing the value in AI and are investing in the technology. This article zooms in on the top 10 companies leading the way, as well as giving an overview of the companies in the top 100.
Top 10 Fastest-Growing AI Teams By New Hires
New hires in the AI space have been growing at a fast clip, with the top 10 companies adding dozens of new AI experts over the past 3 months.
Fastest growing by overall number of new hires
Source: Top 100 Fastest-Growing AI Teams, Vectice
Number of top companies by percentage in healthcare, finance/insurance, tech or retail and CPG
Source: Top 100 Fastest-Growing AI Teams, Vectice
Companies like Microsoft, Kaiser Permanente, and Optum are leading the way in terms of AI team growth. These companies are in the technology, healthcare, and health services industries, respectively. Their focus on AI team growth indicates that they understand the importance of the technology and are investing in it to maintain their competitive advantage.
In terms of the top 10 fastest-growing companies by percentage of growth with an initial AI team size of 20-49 over the past three months, Concentrix, GameStop, and Moody's Corporation take the top three spots, representing the IT services, retail, and financial services industries, respectively.
Top 10 fastest growing companies in percentage of growth with an initial AI team size of 20-49 over the past 3 months
Source: Top 100 Fastest-Growing AI Teams, Vectice
In terms of the top 10 fastest-growing companies by percentage of growth with an initial AI team size of 50-250 over the past three months, ResMed, Northside Hospital, and General Mills make up the top three. Healthcare and retail are two of the biggest industries represented. These industries have larger AI teams for a number of reasons: they are collecting more data than ever before, and the complexity of that data requires more AI expertise to make sense of it.
Top 10 fastest growing companies in percentage of growth with an initial AI team size of 50-250 over the past 3 months
Source: Top 100 Fastest-Growing AI Teams, Vectice
When it comes to the top 10 fastest-growing companies by percentage of growth with an initial AI team size of more than 250 over the past three months, Microsoft, Robert Half, and Optum take the top three spots. This group includes companies from a variety of industries: technology, staffing, and healthcare.
Top 10 fastest growing companies in percentage of growth with an initial AI team size of 250+ over the past 3 months
Source: Top 100 Fastest-Growing AI Teams, Vectice
The Benefits of Being an Early Adopter, and Why It's Important
The top 100 companies with the fastest-growing AI teams were selected based on their ability to scale their AI capabilities quickly and effectively. These companies are able to maintain a competitive advantage by being early adopters of AI technology. There are several benefits to being an early adopter of AI. This section will serve as a deep-dive into three of these benefits: a faster go-to-market time, improved decision-making, and increased customer loyalty.
Why Are Microsoft, Kaiser Permanente, and Google Leading the Way?
The top three companies in terms of number of new AI hires are Microsoft, Kaiser Permanente, and Google. These companies are leading the way in AI because they have the resources to invest in AI, as well as a need to scale their AI capabilities quickly.
Microsoft is a leading provider of enterprise software and services. The company has a long history of investing in new technologies, and AI is no exception. Microsoft has been working on AI projects for many years and has recently made a number of high-profile acquisitions in the space. For example, the company acquired Nuance, a provider of speech recognition and conversational AI services, for $19.7 billion. Nuance is particularly known for its deep learning voice transcription service, which is quite popular in the healthcare industry.
Kaiser Permanente is one of the largest healthcare providers in the United States. The company has been an early adopter of AI, and is using the technology to improve patient care. For example, Kaiser Permanente is using AI to identify patients at risk of developing certain diseases. This information is then used to make decisions about how to best treat those patients.
Google is a leading provider of consumer services. The company has also been an early adopter of AI, using the technology to improve its products and services, and has acquired a number of companies in the space. For example, back in 2014, Google acquired DeepMind Technologies, a leading provider of artificial intelligence technology, for $500 million.
The Way Forward: What Businesses Can Do To Join The Ranks
As enterprises race to build up their AI capabilities, it's important to remember that success with AI requires more than just a team of experts. Enterprises need to have the right data, the right infrastructure, and the right culture in place to make AI a success.
In terms of data, enterprises need to focus on collecting high-quality data that can be used to train AI models. This data needs to be well-labeled and clean, and it should be collected from a variety of sources.
In terms of infrastructure, enterprises need to have the computing power and storage capacity to support AI initiatives. They also need to have a well-defined governance model in place to ensure that AI is used responsibly.
Finally, in terms of culture, enterprises need to create an environment that is conducive to innovation. This includes promoting a data-driven mindset, encouraging creativity, and giving employees the freedom to experiment.
Only by investing in all of these areas will enterprises be able to build successful AI initiatives.
What are the AI and Data Science Skills that Leaders should Master? – Analytics Insight
Posted: at 11:39 pm
Those who practice data science combine a range of skills to analyze data and derive meaningful insights.
Data science combines various fields, including statistics, scientific methods, artificial intelligence (AI), and data analysis, to turn raw data into structured information. Those who practice data science are called data scientists, and they combine a range of skills to analyze data gathered from the web, smartphones, customers, sensors, and other sources to derive meaningful insights. Data science encompasses preparing data for analysis, including cleansing, aggregating, and manipulating the data to perform advanced analysis. Analytical applications and data scientists can then review the results to uncover patterns and enable business leaders to draw informed conclusions. Data scientists play an important role throughout this process.
Artificial intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. The term may also be applied to any machine that exhibits traits associated with a human mind, such as learning and problem-solving. The ideal characteristic of artificial intelligence is its ability to reason and take actions that have the best chance of achieving a specific goal. A subset of artificial intelligence is machine learning, which refers to the idea that computer programs can automatically learn from and adapt to new data without human assistance.
Machine learning is a branch of data analysis that applies a well-designed analytical model to a specific problem. In layman's terms, it is a branch of artificial intelligence designed to learn from data, recognize patterns, and make decisions while minimizing the chance of error, all with minimal human intervention. Machine learning is the study of computer algorithms that improve automatically through experience and the use of data. Machine learning algorithms build a model based on sample data, known as training data, to make predictions or decisions without being explicitly programmed to do so.
Python is the most commonly used programming language in the data science, AI, and ML spaces. It is an easy-to-use, open-source language with a wide user base and very detailed, constantly updated documentation. One can program, script, visualize, compute scientifically, and scrape the web using Python. Its data structures, modularity, and object-oriented features are ideal for application development in data science. Data scientists use Python for processes such as building financial models, scraping web data, creating simulations, web development, and data visualization. There is a well-tested package for practically any problem in Python.
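To give a flavor of the workflow described above, here is a minimal, hypothetical sketch of everyday data-science Python; the file name and column names are invented for illustration, and the model choice is arbitrary:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Load and clean a hypothetical customer dataset.
df = pd.read_csv("customers.csv")  # invented file name
df = df.dropna(subset=["age", "income", "churned"])

# Fit a simple model to predict churn from two features.
X, y = df[["age", "income"]], df["churned"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print(f"Test accuracy: {model.score(X_test, y_test):.2f}")
```

Pandas handles the loading and cleaning, while scikit-learn, one of Python's many well-tested packages, handles the modeling.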
R is another programming language widely used in the data science industry. R is especially helpful for data visualization and for making decisions from graphical data. It is very easy to learn and well documented, and there are many free online resources for learning it. R is used as an excellent data science tool in many industries, including healthcare, e-commerce, and banking.
Practically all major industries are moving from in-house servers to some form of cloud computing. Further, applications are increasingly developed as sets of independent microservices that are deployed and run in the cloud. Cloud computing allows organizations to scale their IT infrastructure according to demand and to save on both operating costs and capital investment. Most major data science tools are designed to be built and run efficiently in the cloud. Key players such as Microsoft (Azure), Amazon (AWS), Google (GCP), and IBM (IBM Cloud) offer their own commercial data science services running on cloud platforms.
Statistics, probability, and mathematics are the foundation of data science, artificial intelligence, and machine learning. One cannot design robust ML algorithms without a strong grounding in these three fields, and it is extremely difficult to extract meaningful insights from unstructured data sets without them. Statistics is a must for data preparation and analysis. Data scientists typically recommend one model from a collection of candidates after running statistical tests on the results of each to pick the best. Moreover, understanding existing models such as Naive Bayes or support vector machines (SVMs) requires knowledge of probability and mathematics to follow the underlying equations.
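To illustrate the point about probability: Naive Bayes, mentioned above, is a direct application of Bayes' theorem, which gives the probability of a class c given observed features x1 through xn:

```latex
P(c \mid x_1, \dots, x_n)
  = \frac{P(c)\, P(x_1, \dots, x_n \mid c)}{P(x_1, \dots, x_n)}
  \approx \frac{P(c) \prod_{i=1}^{n} P(x_i \mid c)}{P(x_1, \dots, x_n)}
```

The approximation is the "naive" conditional-independence assumption that gives the model its name; without a grounding in probability, that underlying equation is opaque.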
Artificial intelligence is widely used to automate data analysis systems and to forecast more accurately. Data scientists can derive real-time actionable insights with AI that is well supported by data. The application of AI has already made many manual jobs obsolete, and AI finds wide application in image processing, natural language processing, computer vision, and more.
AI Competition With China Should Be Done the American Way – The National Interest Online
Posted: at 11:39 pm
With much of the national security community's attention now turned to long-term competition with China, the race to sustain the United States' global leadership position in science and technology has taken center stage. One of the most frequently discussed aspects of this competition is artificial intelligence (AI), which is often mentioned as a potentially transformative technology.
Last year, the National Security Commission on Artificial Intelligence found that China is determined to surpass the United States in AI leadership, particularly in military applications. Top academics and industry thinkers agree that China is making significant headway toward this goal and is rapidly catching up with the United States' research progress, leading a disgruntled former Pentagon official to suggest that the United States is already losing the race for AI superiority. Close analysis supports claims that China has indeed established areas of advantage, especially in data collection technology for surveillance and facial recognition.
This anxiety extends to the U.S. military enterprise as well. The military's focus on AI has played out not only through organizational changes like the creation of the Joint Artificial Intelligence Center (JAIC) and a focus on AI in recent strategic documents, but also through a shift in its approach to warfighting. The central Department of Defense effort to modernize its methods of fighting, slickly named Joint All-Domain Command and Control (JADC2), identifies "AI, machine learning, and predictive analytics" as the core enablers that will allow the future military to defeat any adversary at any time and in any place around the globe. As part of its JADC2 efforts, the Department of Defense is seeking to enforce compliance with new universal standards, including by building a "joint operating system" that will force a top-down centralization of military data storage and accessibility.
On paper, there is a clear appeal to this approach. It feels efficient, and it allows us to dismiss the messy, intensely human reality of war by envisioning a seamless, rapid flow of data, knowledge, and decisions through the United States' warfighting machine, allowing it to dominate the enemy. However, like outfitting an 1890s office building with pneumatic tubes, connecting a military enterprise to today's rather immature conception of AI might seem modern, but it has the potential to be a liability down the road. Indeed, it might play to U.S. interests to let China take the lead. Watch as they connect every aspect of their military to centralized systems that sweep up data for use in current AI approaches, approaches built around big-data machine learning and today's imperfect classification algorithms, and issue commands intended to destroy opposing forces without the need to trust their military operators. This science-fiction version of military AI could be a liability because, at the end of the day, military success has more to do with surprise than efficiency.
When considering the dynamics of competition between the United States and China, it is important to recognize that nations have asymmetric motivations and capabilities to develop and deploy AI. China's single-party political system still depends to some extent on the support of its people to maintain legitimacy. In the absence of democratic elections, surveilling a populace and using the data to measure sudden anomalies is a powerful tool to monitor, and respond to, the sentiment of the people. Moreover, China has strong incentives to leverage technology to overcome the failings of past one-party systems. While it also has a capable and sometimes freewheeling private sector, strong state control helps China ensure the people hew to its national mandates.
Chinese president Xi Jinping has voiced persistent doubts about the ability of Chinese military officers to make combat leadership decisions, asserting that some officers cannot understand higher authorities' intentions or make operational decisions under pressure. Even though China's military enterprise has made remarkable progress in building systems from stealth drones to hypersonic missiles, this foundational aspect of combat remains a key issue. This helps explain the Chinese military's fascination with AI, which would seem to present a way to paper over leadership concerns, with machines dominating the battlefield in the same way that Chinese AI can beat the best video game players in the world.
Meanwhile, the United States draws from a highly decentralized system that supports and drives a vibrant commercial sector. American industry has harnessed the seemingly magical power of AI to drive online advertising. And in any given year, there are dozens of startups figuring out how to leverage machine learning to build and monetize better digital mousetraps. At the same time, the U.S. military struggles to operationalize AI. In the four years since the JAIC was founded, it hasn't made visible progress toward delivering access to troves of military data for users of its joint common foundation toolkit. That's no easy task, given the context-specific nature of data, complex security environment, and distinct service cultures that make sharing frustratingly difficult. However, perhaps this isn't the catastrophe that corporate executives would like us to think.
It appears that leaders are distracted by powerful examples of AI applications that are not representative of actual warfare. For China, these examples include domestic surveillance efforts designed to ensure internal harmony. On the other hand, American leaders are drawn to the prevalence of AI in commercial applications, especially the applicability of AI for improving operations efficiency, corporate decision-making, and crossing information technology stovepipes. But despite a superficial similarity, the rule-bound world of video games has little to do with the horrors of actual military conflict.
The shortfalls of today's AI technologies are clear. Machine learning technology's dependence on black-box processing of historical data creates deep and systemic vulnerabilities. For instance, there are several well-publicized examples of self-driving cars misidentifying road signs due to subtle perturbations that humans can't see. Voice commands are no more secure, as audio processing systems can also be fooled with faint noises that don't sound like speech to humans. This is to say nothing of the threat of adversaries intruding into centralized AI command and control systems or the risks from communications interruption between algorithms and actions. Even when a computing environment is secure and algorithms aren't fooled by hidden features, there is no way to know if adversarial data was accidentally collected a decade ago and used to improperly train a system. While research is underway to help mitigate these issues, a better approach is to leverage architectures that don't present attractive centralized targets, don't vacuum up data from all sources, and don't force military users into common standards or platforms.
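To make those "subtle perturbations" concrete, the textbook recipe is the fast gradient sign method (FGSM). The sketch below is a minimal, hypothetical PyTorch illustration of the general technique, not a recreation of the self-driving or military systems discussed here:

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image` (FGSM)."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Nudge every pixel slightly in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()

# Hypothetical usage: `classifier` is any image classifier, `x` a batch
# of images scaled to [0, 1], `y` their true labels. The change is often
# invisible to humans yet can flip the model's prediction.
# x_adv = fgsm_perturb(classifier, x, y)
```

This particular attack assumes access to the model's gradients; the broader point above is that related perturbation attacks on signs and audio have been demonstrated even without such access.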
Conflict is an intrinsically human phenomenon, and people will always be a part of military decision-making. With the ubiquity of computing and communication, the real question for the U.S. military is not whether AI will be used in military applications, but how to architect it. The United States could choose to trust hierarchy, using centralized mandates and common standards and platforms to accelerate decisions, ultimately imitating, even chasing, our strategic competitor. Alternatively, it could embrace heterogeneity and pursue a federated model with an array of competing projects scattered across the Department of Defense. This latter model delegates trust to decisionmakers throughout the Pentagon, accepting the intrinsic risk it invites. Crucially, it also plays to the United States' strengths.
In this alternative model, leaders would seek to promote the sharing of concepts and technology by centrally funding infrastructure like code repositories, but they wouldn't seek to converge projects or standards. They would instead look to amplify success where it emerges.
The result might resemble something of a zero trust architecture for decision-making. In cybersecurity, a zero trust architecture recognizes that it is nearly impossible to make any system invulnerable to compromise and instead focuses on minimizing the risk and collateral damage of a successful attack. If any system is compromised, the effects can't cascade and collapse the entire enterprise. While this sounds appealing, it comes with tradeoffs, namely that it forgoes the efficiency gains that centralized AI systems bring. Businesses that adopt modern tools like Palantir or C3.AI seek to place all of their enterprise data under a single pane of glass to maximize the operational payoff of decision support algorithms. Instead, a federated system offers the advantage of surprise: one can never be quite sure what individuals will do, for better or worse. But this unpredictability is resilient and inflicts costs on the adversary. Moreover, the United States is uniquely positioned to take advantage of a federated model.
The most valuable long-term potential of AI is not displacing humans from their tasks, automating decision-making, or modernizing weapon systems. Instead, over the decades to come, AI will permit us to richly recombine intelligence, in both human and machine forms, around collective problem-solving. We are only in the early stages of this journey. However, strategic advantages will come from harnessing the United States' unique strengths, including the messy, cantankerous character of its system of governance and national history.
Melissa Flagg is a visiting fellow at the Perry World House, a fellow at the Acquisition Innovation Research Center, and an advisor to the Andrew W. Marshall Foundation. She is also a former Deputy Assistant Secretary of Defense for Research with over fifteen years in defense research and engineering. She can be found on Twitter @flaggster73
FDA Issues Advisory on Use of AI and Machine Learning for Large Vessel Occlusion in the Brain – Diagnostic Imaging
Posted: at 11:39 pm
Suggesting that some radiologists may not be aware of the intended use of computer-aided triage and notification (CADt) devices, the Food and Drug Administration (FDA) has issued an advisory on the use of the imaging software for patients with suspected large vessel occlusion (LVO) in the brain.
Emphasizing proper use of CADt software, the FDA notes these devices are not intended to substitute for diagnostic assessment by radiologists. While CADt devices can help flag and prioritize brain imaging with findings that are suspicious for LVO, the advisory points out that an LVO, a common cause of acute ischemic strokes, may still be present even if it is not flagged by the CADt imaging software.
If there is any potential over-reliance on CADt software, Vivek Bansal, MD, said it may stem from a team of health-care providers striving to do the right thing for the patient under tight time constraints. While interventionalists, neurosurgeons, and neurologists all have strong knowledge of brain vessels, there may be different levels of experience, according to Dr. Bansal, the national subspecialty lead for neuroradiology at Radiology Partners. He added that while these specialists look closely at the images they take in the operating suite, they may not look at the actual CT images to the same level.
In regard to the imaging, Dr. Bansal said one may be looking at tiny branching vessels that dive up and down through different slices of the images, so that one has to scroll up and down to trace them out vessel by vessel. This can be challenging, and it is particularly hard to do on a smartphone in a brightly lit room, Dr. Bansal pointed out.
"The clock is ticking, and time is brain. We are trying to race against the clock because every minute we take to arrive at a diagnosis, more brain cells may be dying (if the patient has a clot). The quicker we can get them to a diagnosis and the patient gets to a cath lab, the better the outcomes for the patient. I think that is the biggest challenge: trying to do something that is very meticulous in a very small amount of time," explained Dr. Bansal.
The FDA advisory also maintained that it is important to be aware of the design capabilities of different CADt devices, many of which have artificial intelligence (AI) or machine learning technology. For example, the FDA cautioned that LVO CADt devices may not assess all intracranial vessels. Dr. Bansal said this is an important distinction with AI tools.
"While some AI tools are very good at looking at an M1 occlusion, which is the proximal part of the middle cerebral artery, the newer AI tools are capable of looking at M2 occlusions with proximal anterior cerebral artery (ACA) and posterior cerebral artery (PCA) occlusions. All of these things are important in terms of patient care," maintained Dr. Bansal, who is affiliated with the East Houston Pathology Group in Texas.
Dr. Bansal said the key is understanding the role of AI-enabled devices and their value in triaging cases.
"At any given moment, I might have 40 stat exams on my list. I'm cranking through them as fast as I can, but if AI tools are saying, 'Hey, look at this one next,' whether it is a potential large vessel occlusion or brain bleed, that is very helpful," suggested Dr. Bansal. "Where we are at right now, I think that the only way we can look at AI is to look at it as a triaging tool."
AI-Powered Farming is the Future of Food Production – Analytics Insight
Posted: at 11:39 pm
AI-powered farming has enhanced the food production process to a great extent
The world population will soon touch 8 billion, and that comes with farming challenges. Naturally, food production must grow with the human population, and old agricultural methods might not do the job. Thanks to artificial intelligence, we can now reinvent farming with AI-powered vertical farming.
AI-powered vertical farming opens up many opportunities for farms in urban and industrial areas. It can produce crops in all seasons, in cost-controlled ways, and in any type of space. Automation through artificial intelligence, machine learning, and robotics is a viable option to bring down the cost of labor and other operations. This is a boon for regions where the farming population is reaching old age.
Seeing these advantages, cities in China and Japan have been the first to adopt this technology, with the help of many indoor vertical farms. This method allows smaller farmers as well as big farming companies to grow yields according to customer demand. This is a lifestyle-changing, new-age farming method.
"We work on chips that are serving the AI and supercomputer markets" – CTech
Posted: at 11:39 pm
"Putting more and more performance capabilities on a semiconductor chip is a challenge," says Sandy Saper, head of HW, ASIC & FPGA development at Ready. "But the ability to put more on chips through time has brought us to today's technology," he shares. Developing a chip involves a variety of disciplines, including architecture, design, and verification. Ensuring the chips work properly is critical because bugs can be very costly and time consuming to fix, he says. In addition to reaching the performance requirements, an engineer also has to think about how to meet them more efficiently, Saper explains. Ready has teams working on hardware as well as software, and Saper's hardware team is now developing chips for artificial intelligence and supercomputers. He says that looking forward, engineers need to think about how to have more than one chip communicate as the demand for higher performance grows.
I fell in love with what's happening with chip design recently. I'd love to hear about your career journey and how you got into chip design.
I've been 30 years in the development of chips. Now I manage a hardware team that designs and develops chips for the semiconductor industry. I work at Ready, which is a private company in Israel. Half of the company is hardware, and half is software.
I was always interested in electricity as a teenager. When I signed up for university, I said, "Let's go into electricity." I had no idea what chips were then. But during my studies, I started to learn about all the amazing things electricity does. Then I started to learn about chip design. The two of us clicked, me and chip design. Ever since then, I've been developing chips from the start.
The interest in electricity and chip design, did it stem mainly from the academic side or the challenge side? What did you understand about where the world was going and the role chip design was going to play?
I was less into the academics and more into the hands-on and do it. You have an idea, get it done. I really didn't think back then, in my early 20s, where this was going. Back then, the technologies were in their infancy. When I'd finished university, there were already low-end microcontrollers for simple tasks, for computers. Just then, the first desktops were being produced.
Every year or two, there's another shrink where you can combine more and more in one chip. It's Moore's Law, and it's been going on ever since. We've been cramming more and more functions in every chip, which has enabled us to get where we are today. Cramming more and more performance in a single chip is more and more challenging for the design engineers, architects, and electronic engineers.
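Saper's point compounds dramatically over a career: a doubling every two years is a factor of roughly a thousand every twenty years. A rough, illustrative calculation (idealized; real chips deviate from the curve), starting from Intel's 4004 of 1971 with about 2,300 transistors:

```python
# Idealized Moore's Law: transistor counts double roughly every 2 years.
n0, years = 2300, 2021 - 1971          # Intel 4004 (1971): ~2,300 transistors
transistors = n0 * 2 ** (years / 2)    # 25 doublings over 50 years
print(f"~{transistors:,.0f} transistors")  # ~77 billion, the order of
                                           # magnitude of today's largest chips
```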
Why is chip design still relevant today? What advancements do we still have to make?
Artificial intelligence is just starting. Supercomputing is just starting. We work on chips that are serving the artificial intelligence and supercomputer markets. Without the supercomputers we have today, we could not have developed the vaccines that we are using for helping us cope with COVID-19. Supercomputers need super chips, and that's what we do.
Sandy Saper, head of HW, ASIC & FPGA Development at Ready
(Photo: Idan Canfi)
Tell me about Ready, about where you're at today.
Ready is a company that has hardware and software. When you want to develop a chip, you come with requirements. It starts off with understanding those requirements to meet the performance, the price, and the power that you need.
There are a lot of disciplines to make a chip that meets the requirements. It starts off with the architecture. How do I even put all these blocks together so that they will function with each other? What's going to be in hardware and what is going to be in software is very critical. We in Ready can decide for the customer what can be in hardware and what can be in software. The architecture team decides how we are going to put this chip together to meet the performance and can work on it for between a month and half a year to make sure that the end product is going to meet what you need.
Then there's a design team, which has the expertise to design each actual block to meet the performance. After they have finished the design, it needs to be fully verified. We have a high-quality verification team to make sure that what we have designed is going to fully work and there are no bugs. We cannot allow bugs. Then we have to take all the code that was written and translate it into the physical chip, and validate that the physical design meets the requirements.
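As a toy illustration of the self-checking idea behind verification (this is not Ready's actual flow or tooling), a testbench drives random stimulus into the design under test and compares every output against an independent golden reference model. A minimal sketch in Python:

```python
import random

# Toy self-checking testbench (illustrative only, not Ready's flow):
# drive random stimulus and compare the design under test against an
# independently written "golden" reference model on every case.

def golden_adder(a, b):
    """Reference model: what an 8-bit adder should compute."""
    return (a + b) & 0xFF

def design_adder(a, b):
    """Stand-in for the design under test (here, deliberately correct)."""
    return (a + b) % 256

random.seed(0)
for _ in range(10_000):
    a, b = random.randrange(256), random.randrange(256)
    assert design_adder(a, b) == golden_adder(a, b), f"mismatch at {a}+{b}"
print("10,000 random cases passed")
```

Real verification environments do the same thing at vastly larger scale, with constrained-random stimulus and coverage metrics, because, as Saper says, a bug found after fabrication costs months and millions.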
Every stage takes a few months, depending on the size of the chip. Then we have to validate that what we have done is exact. Then we ship it off to the factory. It costs millions of dollars to just prepare the chip for the factory, for the fab. The fabs are the highest technology factories in the world. Super clean, very high quality.
I cannot ship a chip that has a bug because it's going to cost me a million dollars to fix it. Fabricating a high-end chip can take three to four months, and time to market is critical. So if, God forbid, you have a bug in your chip, you will only detect it when you get the silicon back after four or five months. At Ready, we have an excellent team that does the verification, the design, the architecture, and the final product.
Then after it's gone through all the stages in the factory, it needs to be packaged. We receive the chip and we validate, before we ship it to the customer, that it meets all the performance that we designed it for. That can also take a few weeks to validate that everything is perfect and that you don't have a field failure.
Coming from the software world, I was taught that as long as the bug is not huge, ship it and then roll out an update once you get some feedback. But it sounds like the detail that you're dealing with is an incredible amount. So I'm imagining that if I thought my software was complicated and had bugs, designing chips is just a little bit more detailed, right?
The code is very complicated in order to meet the requirements. You need a high-quality team, and managers to make sure everything fits together, everything comes on time, and everything keeps working, until we ship it off to the factory.
What does a chip designer do when they get to the office in the morning?
After the first cup of coffee, interaction with your co-workers is very important: understanding the requirements and how to implement them so that the result is efficient, functions correctly, and has minimum chance of a bug. Then you go to your work and understand the functionality you have to implement in this block. Or, if you're a verification engineer, how you're going to write the code that verifies the design is really clean.
We have tools that we use, and a lot of automation is in the process. We think part of the engineer's job is also to see how we can do this more efficiently. Every generation gets more and more complicated because we can put in more and more. So we're always thinking, "How can I do this more efficiently next time? Or now?" Which is very important since the demand for resources is high. The more efficient you are, the better.
As we're transitioning to the future, where are semiconductors positioned with the younger generation as opposed to software?
One basic axiom is that "software has to run on hardware." I always tell my software colleagues, "Without us, you couldn't develop anything." I think we are going to have to think about supercomputing and how much we can really cram into one chip. How do I think of supercomputing? In other words, supporting multiple chips. Communication between the chips is very critical, and that is also being worked on. That's the way I see it going forward. We now have to think about how we have multiple chips in a system, and how they are going to communicate with one another so that we can meet the higher demand for performance.
Michael Matias
(Photo: Courtesy)
Michael Matias, Forbes 30 Under 30, is the author of Age is Only an Int: Lessons I Learned as a Young Entrepreneur. He studies Artificial Intelligence at Stanford University, is a Venture Partner at J-Ventures and was an engineer at Hippo Insurance. Matias previously served as an officer in the 8200 unit. 20MinuteLeaders is a tech entrepreneurship interview series featuring one-on-one interviews with fascinating founders, innovators and thought leaders sharing their journeys and experiences.
Contributing editors: Michael Matias, Megan Ryan
More here:
We work on chips that are serving the AI and supercomputer markets. - CTech
Posted in Ai
Comments Off on We work on chips that are serving the AI and supercomputer markets. – CTech
Boosting US Fighter Jets: NASA Research Applies Artificial Intelligence To Hypersonic Engine Simulations – EurAsian Times
Posted: at 11:39 pm
Researchers from the National Aeronautics and Space Administration (NASA) have teamed up with the US Department of Energy's Argonne National Laboratory (ANL) to develop artificial intelligence (AI) that speeds up simulations of the air surrounding supersonic and hypersonic aircraft engines.
Fighter jets such as the F-15 regularly exceed Mach 2, two times the speed of sound, in flight; this regime is known as supersonic. In hypersonic flight, Mach 5 and beyond, an aircraft flies faster than 3,000 miles per hour.
Hypersonic speeds have been achievable since the 1950s with the propulsion systems used for rockets. However, engineers and scientists are working on advanced jet engine designs to make hypersonic flight much less expensive than a rocket launch and more common, for purposes such as commercial flight, space exploration, and national defense.
A newly published paper by a team of researchers from NASA and ANL details machine learning techniques that reduce the memory and cost required to conduct computational fluid dynamics (CFD) simulations of fuel combustion at supersonic and hypersonic speeds.
The paper was previously presented at the American Institute of Aeronautics and Astronautics SciTech Forum in January.
Before building and testing any aircraft, CFD simulations are used to determine how the various forces surrounding an aircraft in flight will interact with it. CFD consists of numerical expressions representing the behavior of fluids such as air and water.
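To give a flavor of what those numerical expressions look like, here is a deliberately minimal sketch, far simpler than anything in a production aerospace code: the one-dimensional linear advection equation stepped with a first-order upwind finite-difference scheme.

```python
import numpy as np

# Minimal sketch of CFD-style numerics: the 1D linear advection equation
# du/dt + c * du/dx = 0, stepped with a first-order upwind scheme. Real
# aerospace codes solve coupled, turbulent, reacting 3D flows; this only
# illustrates "numerical expressions representing the behavior of fluids".
nx = 200
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
c = 1.0                                # advection speed
dt = 0.4 * dx / c                      # CFL number 0.4 < 1 keeps the scheme stable
u = np.exp(-200.0 * (x - 0.3) ** 2)    # initial Gaussian pulse

for _ in range(100):
    # upwind difference for c > 0: information travels left to right
    u[1:] -= c * dt / dx * (u[1:] - u[:-1])

print(f"pulse peak after 100 steps: {u.max():.3f}")
```

Production solvers apply the same time-stepping idea to the full compressible Navier-Stokes equations, on three-dimensional grids with millions of cells, which is what makes them so expensive.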
When an aircraft breaks the sound barrier, traveling at speeds surpassing that of sound, it generates a shock wave: a disturbance that makes the air around it hotter, denser, and higher in pressure, causing it to behave very violently.
At hypersonic speeds, the air friction created is so strong that it could melt parts of a conventional commercial plane.
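The severity of that heating can be roughed out with the standard stagnation-temperature relation for air. The ambient temperature and gas properties below are textbook assumptions for illustration, not figures from the article:

```python
# Rough estimate of aerodynamic heating via the stagnation-temperature
# relation for calorically perfect air: T0 = T * (1 + (gamma - 1)/2 * M^2).
# Ambient temperature and gamma are textbook assumptions, not article values.
gamma = 1.4          # ratio of specific heats for air
T_ambient = 220.0    # K, roughly the air temperature at cruise altitudes

for mach in (2, 5):
    T0 = T_ambient * (1 + (gamma - 1) / 2 * mach**2)
    print(f"Mach {mach}: stagnation temperature ~{T0:.0f} K ({T0 - 273.15:.0f} degC)")
```

At Mach 5 this gives roughly 1,300 K, around 1,000 degrees Celsius, well above the melting point of the aluminum alloys used in conventional airliners.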
Air-breathing jet engines draw in oxygen to burn fuel as they fly, so the CFD simulations have to account for major changes in the behavior of air, not only around the plane but also as it moves through the engine and interacts with fuel.
While a conventional plane has fan blades to push the air along, in planes flying at roughly Mach 3 and above, the aircraft's own motion compresses the air. These aircraft designs, known as scramjets, are important for attaining fuel efficiency levels that rocket propulsion cannot match.
So, when it comes to CFD simulations on an aircraft capable of breaking the sound barrier, all the above factors add new levels of complexity to an already computationally intense exercise.
"Because the chemistry and turbulence interactions are so complex in these engines, scientists have needed to develop advanced combustion models and CFD codes to accurately and efficiently describe the combustion physics," said Sibendu Som, a study co-author and interim center director of Argonne's Center for Advanced Propulsion and Power Research.
NASA has a hypersonic CFD code known as VULCAN-CFD, which is specially meant for simulating the behavior of combustion in such a volatile environment.
This code uses something called flamelet tables, where each flamelet is a small unit of flame within the entire combustion model. The data table gathers many snapshots of burning fuel into one huge collection, which takes up a large amount of computer memory to process.
Therefore, researchers at NASA and ANL are exploring the use of AI to simplify these CFD simulations by reducing their intensive memory requirements and computational costs, to increase the pace of development of barrier-breaking aircraft.
Computational scientists at ANL used a flamelet table generated by Argonne-developed software to train an artificial neural network that could be applied to NASA's VULCAN-CFD code. The AI used values from the flamelet table to learn shortcuts for determining combustion behavior in supersonic engine environments.
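The general idea, sketched below on synthetic data (this is not NASA's or Argonne's actual code, table, or network), is to replace an expensive lookup into a huge table with a compact neural network trained to reproduce the table's values:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Sketch of the surrogate-model idea (not NASA/Argonne's actual code):
# replace a large tabulated function with a small neural network that
# reproduces it from far fewer stored parameters. The "table" below is a
# synthetic stand-in for a flamelet table, mapping two inputs (e.g.,
# mixture fraction and progress variable) to one output (e.g., temperature).
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(20_000, 2))           # table lookup keys
y = np.sin(3 * X[:, 0]) * np.exp(-2 * X[:, 1])    # tabulated values

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
net.fit(X, y)

# The trained network now answers "table lookups" without storing the table.
print("max approximation error:", np.abs(net.predict(X) - y).max())
```

The memory saving comes from the network's weights being orders of magnitude smaller than the full table, at the cost of a small, controllable approximation error.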
"The partnership has enhanced the capability of our in-house VULCAN-CFD tool by leveraging the research efforts of Argonne, allowing us to analyze fuel combustion characteristics at a much-reduced cost," said Robert Baurle, a research scientist at NASA Langley Research Center.
Countries across the world are racing to achieve hypersonic flight capability, and an essential part of this race is simulation, where there is huge potential for applying emerging technologies such as AI and machine learning (ML).
Last month, according to a EurAsian Times report, Chinese researchers led by a top-level advisor to the Chinese military on hypersonic weapon technology claimed a significant breakthrough: an AI system that can design new hypersonic vehicles autonomously.
Moreover, in February a Chinese space company called Space Transportation announced plans for tests, beginning next year, of a hypersonic plane capable of flying at 7,000 miles per hour.
The company claimed that their plane could fly from Beijing to New York in an hour.
Read more:
Posted in Ai
Comments Off on Boosting US Fighter Jets: NASA Research Applies Artificial Intelligence To Hypersonic Engine Simulations – EurAsian Times