The Prometheus League
Breaking News and Updates
Category Archives: Ai
Researchers lay the groundwork for an AI hive mind – The Next Web
Posted: September 16, 2021 at 6:46 am
Intel's AI division is one of the unsung heroes of the modern machine-learning movement. Its talented researchers have advanced the state of AI chips, neuromorphic computing, and deep learning. And now they're turning their sights on the unholy grail of AI: the hive mind.
Okay, that might be a tad dramatic. But every great science fiction horror story has to start somewhere.
And Intel's amazing advances in the area of multiagent evolutionary reinforcement learning (MERL) could make a great origin story for the Borg, the sentient AI from Star Trek that assimilates organic species into its hive mind.
MERL, aside from being a great name for a fiddle player, is Intel's new method for teaching machines how to collaborate.
Per an Intel press release:
We've developed MERL, a scalable, data-efficient method for training a team of agents to jointly solve a coordination task. A set of agents is represented as a multi-headed neural network with a common trunk. We split the learning objective into two optimization processes that operate simultaneously.
The new system is complex and involves novel machine-learning techniques, but the basic ideas behind it are actually fairly intuitive.
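The press release's description of the architecture (one multi-headed network with a common trunk) can be sketched in a few lines. This toy version is purely illustrative: the layer sizes, random weights, and absence of any training loop are all assumptions for demonstration, not details from Intel's work.

```python
import random

random.seed(0)

def linear(x, w):
    # Dense layer: each output unit is a weighted sum of the inputs.
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

def relu(v):
    return [max(0.0, a) for a in v]

# Shared "trunk": one weight matrix used by every agent (4 inputs -> 3 features).
trunk_w = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]

# One "head" per agent on top of the common trunk (3 features -> 2 action scores).
num_agents = 3
heads = [[[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
         for _ in range(num_agents)]

def forward(obs):
    features = relu(linear(obs, trunk_w))        # computed once, shared by all agents
    return [linear(features, h) for h in heads]  # one output per agent

actions = forward([0.5, -0.2, 0.1, 0.9])
print(len(actions), len(actions[0]))  # 3 agents, 2 action scores each
```

In a real MERL setup the trunk and the heads would be optimized by the two simultaneous processes the quote mentions; here only the data flow is shown.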
AI systems don't have what the French call une raison d'exister. In order for a machine to do something, it needs to be told what to do.
But, often, we want AI systems to do things without being told what to do. The whole point of a machine learning paradigm is to get the machine to figure things out for itself.
However, you still need to make the AI learn the things you want it to learn and ignore everything else.
For example, if you're trying to teach a robot to walk, you want it to remember how to move its legs in tandem and forget about trying to solve the problem by hopping on one foot.
This is accomplished through reinforcement learning, the RL in MERL. Researchers tweak the AI's training paradigm to ensure it's rewarded whenever it accomplishes a goal, thus keeping the machine on task.
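A minimal tabular Q-learning loop makes the reward mechanism concrete: the agent is rewarded only when it reaches a goal, and that reward propagates backwards through the value estimates until the policy stays on task. The corridor environment, learning rate, and episode count below are illustrative assumptions, not anything from Intel's system.

```python
import random

random.seed(1)

# Toy corridor: states 0..4, start at state 0, reward only for reaching state 4.
# Actions: 0 = step left, 1 = step right.
N, GOAL = 5, 4
Q = [[0.0, 0.0] for _ in range(N)]
alpha, gamma, eps = 0.5, 0.9, 0.1

for _ in range(500):
    s = 0
    while s != GOAL:
        # Epsilon-greedy: mostly exploit the current value estimates.
        a = random.randrange(2) if random.random() < eps else (1 if Q[s][1] >= Q[s][0] else 0)
        s2 = max(0, min(GOAL, s + (1 if a == 1 else -1)))
        r = 1.0 if s2 == GOAL else 0.0   # reward only when the goal is accomplished
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# After training, "step right" dominates in every non-goal state.
print(all(Q[s][1] > Q[s][0] for s in range(GOAL)))  # True
```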
If you think about AI in the traditional sense, it works a lot like a single agent (basically, one robot brain) trying to solve a giant problem on its own.
So, for an AI brain responsible for making a robot walk, the AI has to figure out balance, kinetic energy, resistance, and what the exact limits of its physical parts are. This is not only time-consuming (often requiring hundreds of millions of iterative attempts) but also expensive.
Intel's MERL system allows multiple agents (more than one AI brain) to attack a larger problem by breaking it down into individual tasks that can then be handled by individual agents. The agents collaborate in order to speed up learning across each task. Once the individual agents train up on their tasks, a control agent uses the sum of their training to organize a method by which the entire goal is accomplished; in our example, making a robot walk.
If this system were people instead of AI, it'd be like the hit 1980s cartoon Voltron, where individual pilots fly individual vehicles but come together to form a giant robot that's more powerful than the sum of its parts.
But since we're talking about AI, it's probably more helpful to view it like the aforementioned Borg. Instead of a single AI brain controlling all the action, MERL gives AI the ability to form a sort of brain network.
One might even be tempted to call it a non-sentient hive mind.
Cerence to Present at the Evercore ISI Autotech & AI Forum – GlobeNewswire
Posted: at 6:46 am
BURLINGTON, Mass., Sept. 15, 2021 (GLOBE NEWSWIRE) -- Cerence Inc. (NASDAQ: CRNC), AI for a world in motion, announced today that it will be presenting at the Evercore ISI Autotech & AI Forum on Tuesday, September 21, 2021, at 2:00 p.m. Eastern Time. The format for the conference will be a fireside chat featuring Mark Gallenberger, Cerence CFO, and Rich Yerganian, Vice President of Investor Relations.
The event will be webcast and can be accessed in the Events tab under the Investors section of the Company's website at https://www.cerence.com/investors/events-and-resources
The webcast replay will be available on the Company's website at http://www.cerence.com.
About Cerence Inc.
Cerence (NASDAQ: CRNC) is the global industry leader in creating unique, moving experiences for the mobility world. As an innovation partner to the world's leading automakers and mobility OEMs, it is helping advance the future of connected mobility through intuitive, powerful interaction between humans and their cars, two-wheelers, and even elevators, connecting consumers' digital lives to their daily journeys no matter where they are. Cerence's track record is built on more than 20 years of knowledge and nearly 400 million cars shipped with Cerence technology. Whether it's connected cars, autonomous driving, e-vehicles, or buildings, Cerence is mapping the road ahead. For more information, visit http://www.cerence.com.
Investor Contact Information
Rich Yerganian
Vice President of Investor Relations
Cerence Inc.
Tel: 617-987-4799
Email: richard.yerganian@cerence.com
Artificial intelligence: a new portal to promote global cooperation launched with eight international organisations – Council of Europe
Posted: at 6:46 am
On 14 September 2021, eight international organisations joined forces to launch a new portal promoting global cooperation on artificial intelligence (AI). The portal is a one-stop shop for data, research findings and good practices in AI policy.
The objective of the portal is to help policymakers and the wider public navigate the international AI governance landscape. It provides access to the necessary tools and information, such as projects, research and reports to promote trustworthy and responsible AI that is aligned with human rights at the global, national and local level.
Key partners in this joint effort include the Council of Europe, the European Commission, the European Union Agency for Fundamental Rights, the Inter-American Development Bank, the Organisation for Economic Co-operation and Development (OECD), the United Nations (UN), the United Nations Educational, Scientific and Cultural Organization (UNESCO), and the World Bank Group.
Globalpolicy.AI website
AI trial for prostate cancer expanded across five hospitals – Digital Health
Posted: at 6:46 am
Five new hospitals have joined a trial of artificial intelligence aimed at spotting prostate cancer more quickly.
Each year more than 40,000 men are diagnosed with prostate cancer, making it the most common cancer among men. Health tech company Ibex Medical Analytics is hoping to speed up the process of getting diagnosed to enable faster access to treatment.
Its artificial intelligence (AI) Galen Prostate technology aims to reduce diagnostic errors by using clinical-grade solutions to help pathologists detect and grade cancer.
Imperial College Healthcare, University College London, University Hospital of Coventry and Warwickshire, Chelsea and Westminster Hospital and University Hospitals Southampton will be trialling the technology with the potential for it to be adopted more widely across the health system.
Clinicians will compare the results of the AI analysis to current diagnosis methods, where biopsies are reviewed by a pathologist.
Professor Hashim Ahmed, chair of urology at Imperial College London, said: "We strongly believe that AI has the potential to enhance both quality and efficiency, which is of paramount importance as we focus on putting every patient on the path to recovery.
"Ibex's technology has demonstrated its robustness in several studies abroad, and so we look forward to seeing its performance and utility first-hand in the NHS."
The trial is funded as part of the £140m NHSX AI Awards. The company won funding in phase three of the AI Awards, working with Imperial College London to trial the technology.
Now, researchers from across the hospitals will put the technology to the test in detecting and grading cancer in 600 prostate biopsies over 14 months.
Joseph Mossel, chief executive and co-founder of Ibex Medical Analytics, said: "This funding acknowledges the potential of AI in pathology practice and the scientific evidence and clinical utility we have demonstrated to date."
Matthew Gould, chief executive of NHSX, added: "We are currently caught between having too few pathologists and rising demand for biopsies. This technology could help, and give thousands of men with prostate cancer faster, more accurate diagnoses.
"It is a prime example of how AI can help clinicians improve care for patients as we recover from the pandemic."
New O’Reilly Report Reveals Nearly Two-Thirds of Data & AI Professionals Prioritize Job Training in Hopes of Salary Increase – Business Wire
Posted: at 6:46 am
BOSTON--(BUSINESS WIRE)--O'Reilly, the premier source for insight-driven learning on technology and business, today announced the results of its 2021 Data/AI Salary Survey, which revealed that 64% of respondents took part in training or obtained new certifications in the past year to build upon their professional skills. The survey also found that 61% of respondents participated in training or earned certifications to solicit a salary increase or promotion. Despite this, the average change in compensation over the last three years was $9,252, an increase of just 2.25% annually.
Overall, data and AI professionals have a clear desire to learn, with 91% of those surveyed reporting that they're interested in learning new skills or improving existing skills. The survey revealed that one-third of professionals have dedicated more than 100 hours to training and development, which ultimately led to an average salary increase of $11,000. However, data and AI professionals who participated in one to 19 hours of training only saw an average salary increase of $7,100.
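The survey's headline figure can be sanity-checked with simple compounding. The excerpt does not state the average base salary, so the $135,000 below is an illustrative assumption, chosen only to show how a $9,252 three-year raise annualizes to roughly the reported 2.25%.

```python
# Annualize a total raise over three years: (1 + total/base)^(1/3) - 1.
base = 135_000          # assumed base salary, not stated in the excerpt
total_raise = 9_252     # average three-year change reported by the survey
annualized = (1 + total_raise / base) ** (1 / 3) - 1
print(round(annualized * 100, 2))  # ~2.23%, close to the reported 2.25%
```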
"Data and AI professionals are among the most driven employees when it comes to upskilling. Given the shortage of qualified employees in fields like data science, machine learning, and AI, companies that are serious about building out their workforces must invest in learning and training to grow this talent internally," said Laura Baldwin, president of O'Reilly. "With such a wealth of knowledgeable talent and a recovering global economy hungry to fill tech roles in digital work environments, there's never been a better time to invest in employee learning and reskilling."
The 2021 Data/AI Salary Survey, which polled 3,136 respondents, gathered salary findings based on gender, education level, job title, and which tools and platforms they work on daily. Of note, women's salaries were significantly lower than men's salaries, equating to 84% of the average salary for men. This salary differential held regardless of education or job title. For example, at the executive level, the average salary for women was $163,000 versus $205,000 for men (a 20% difference).
Looking at salary by programming language, the survey found that professionals who use Rust have the highest average salary (over $180,000), followed by Go ($179,000), and Scala ($178,000). While Python was most dominantly put to work among survey respondents, professionals who reported using this language earned around $150,000. When comparing salary by tool and platform, respondents who used the most popular machine learning tools saw the following average salaries: PyTorch ($166,000), TensorFlow ($164,000), and scikit-learn ($157,000). The highest salaries were associated with H2O ($183,000), KNIME ($180,000), Spark NLP ($179,000), and Spark MLlib ($175,000).
Additional findings from the 2021 Data/AI Salary Survey include:
"Our survey reveals just how dedicated data and AI professionals are to advancing their careers through skill development and training. Getting L&D right is crucial for companies to retain and attract top talent in this hot job market," said Mike Loukides, report author and vice president of content at O'Reilly.
The full report and survey results are available here: https://get.oreilly.com/ind_2021-data-ai-salary-survey.html.
To learn more about O'Reilly's learning content, training courses, certifications, and virtual events, visit http://www.oreilly.com.
About O'Reilly
For 40 years, O'Reilly has provided technology and business training, knowledge, and insight to help companies succeed. Our unique network of experts and innovators share their knowledge and expertise through the company's SaaS-based training and learning platform. O'Reilly delivers highly topical and comprehensive technology and business learning solutions to millions of users across enterprise, consumer, and university channels. For more information, visit http://www.oreilly.com.
Artificial Intelligence & Autopilot | Tesla
Posted: September 12, 2021 at 9:29 am
Hardware
Build silicon chips that power our full self-driving software from the ground up, taking every small architectural and micro-architectural improvement into account while pushing hard to squeeze maximum silicon performance-per-watt. Perform floor-planning, timing and power analyses on the design. Write robust, randomized tests and scoreboards to verify functionality and performance. Implement compilers and drivers to program and communicate with the chip, with a strong focus on performance optimization and power savings. Finally, validate the silicon chip and bring it to mass production.
Apply cutting-edge research to train deep neural networks on problems ranging from perception to control. Our per-camera networks analyze raw images to perform semantic segmentation, object detection and monocular depth estimation. Our bird's-eye-view networks take video from all cameras to output the road layout, static infrastructure and 3D objects directly in the top-down view. Our networks learn from the most complicated and diverse scenarios in the world, iteratively sourced from our fleet of nearly 1M vehicles in real time. A full build of Autopilot neural networks involves 48 networks that take 70,000 GPU hours to train. Together, they output 1,000 distinct tensors (predictions) at each timestep.
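The data flow described above (per-camera feature extraction, then a fused top-down head) can be sketched with trivial stand-ins. Every function body below is an assumption made purely for illustration; only the pipeline shape mirrors the description, not Tesla's actual networks.

```python
# Illustrative stand-ins for the per-camera and bird's-eye-view stages.
NUM_CAMERAS, FEAT, CELLS = 8, 4, 4

def per_camera_features(image):
    # Placeholder for a real CNN backbone: here, just channel-wise means.
    return [sum(ch) / len(ch) for ch in image]

def fuse(feature_vectors):
    # Average-pool features across cameras into one shared representation.
    return [sum(f[i] for f in feature_vectors) / len(feature_vectors)
            for i in range(FEAT)]

def bev_head(fused):
    # Placeholder top-down head: one score per grid cell of a tiny 2x2 map.
    return [sum(fused) * (c + 1) / CELLS for c in range(CELLS)]

# Fake "images": one list of FEAT channels per camera.
images = [[[0.1 * c] * 3 for _ in range(FEAT)] for c in range(NUM_CAMERAS)]
scores = bev_head(fuse([per_camera_features(img) for img in images]))
print(len(scores))  # one score per bird's-eye-view cell
```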
Develop the core algorithms that drive the car by creating a high-fidelity representation of the world and planning trajectories in that space. In order to train the neural networks to predict such representations, algorithmically create accurate and large-scale ground truth data by combining information from the car's sensors across space and time. Use state-of-the-art techniques to build a robust planning and decision-making system that operates in complicated real-world situations under uncertainty. Evaluate your algorithms at the scale of the entire Tesla fleet.
Throughput, latency, correctness and determinism are the main metrics we optimize our code for. Build the Autopilot software foundations up from the lowest levels of the stack, tightly integrating with our custom hardware. Implement super-reliable bootloaders with support for over-the-air updates and bring up customized Linux kernels. Write fast, memory-efficient low-level code to capture high-frequency, high-volume data from our sensors, and to share it with multiple consumer processes without impacting central memory access latency or starving critical functional code from CPU cycles. Squeeze and pipeline compute across a variety of hardware processing units, distributed across multiple system-on-chips.
Build open- and closed-loop, hardware-in-the-loop evaluation tools and infrastructure at scale, to accelerate the pace of innovation, track performance improvements and prevent regressions. Leverage anonymized characteristic clips from our fleet and integrate them into large suites of test cases. Write code simulating our real-world environment, producing highly realistic graphics and other sensor data that feed our Autopilot software for live debugging or automated testing.
Develop the next generation of automation, including a general purpose, bi-pedal, humanoid robot capable of performing tasks that are unsafe, repetitive or boring. We're seeking mechanical, electrical, controls and software engineers to help us leverage our AI expertise beyond our vehicle fleet.
The Top Five Trends In AI: How To Prepare For AI Success – Forbes
Posted: at 9:29 am
The Top 5 Trends in AI
The strategic importance of AI is growing at an accelerating pace. Many companies are reaping the rewards of AI now and will increase their investments as a result.
Every board member and every senior executive must understand the key trends in AI that will impact their businesses.
The Top 5 Trends in AI are as follows:
1) Increasing investments
2) Rapid response
3) Risk management
4) Job changes
5) Organizational transformation
According to ResearchAndMarkets.com, the global artificial intelligence market is expected to grow from $40 billion in 2020 to $51 billion in 2021 at a compound annual growth rate (CAGR) of 28%. The market is expected to reach $171 billion in 2025 at a CAGR of 35%.
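Those market figures are internally consistent, as a quick check shows; rounding in the source accounts for the small gap to $171 billion.

```python
# All figures in $ billions, from the ResearchAndMarkets.com forecast quoted above.
start_2020, rate_2021 = 40, 0.28
value_2021 = start_2020 * (1 + rate_2021)      # one year at 28% CAGR
forecast_2025 = 51 * (1 + 0.35) ** 4           # four years at 35% CAGR from 2021

print(round(value_2021), round(forecast_2025))  # 51 169 (source rounds up to 171)
```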
Companies that are AI leaders are building an AI flywheel that will enable them to strengthen the lead they already have over their competitors. The flywheel effect comes primarily from AI systems that perform well and then produce more data, helping the system continually improve its performance. Eventually, a competitor will never be able to catch up. (See The AI Threat: Winner-Takes-All)
Another flywheel effect comes from the ability to attract AI talent by building an organization that can enable growth opportunities in AI for that talent.
The companies that have fully embraced AI are focused primarily on:
Creating better customer experiences
Improving decision-making and
Innovating on products and services
(See McKinsey: Winning Companies Are Increasing Their Investment In AI During Covid-19. What Do They Know That You Dont?)
COVID-19 prompted many companies to accelerate their investments in AI. According to a PWC survey, AI is used in strategic decisions around workforce planning, supply chain resilience, scenario planning, and demand projections.
Most companies engage in an annual scenario/strategic planning process. AI can make the strategic planning process an ongoing one. By creating AI models, the strategic plan can be continually updated based on changes in supply, demand, operations, competitive moves, and more.
AI can help sense new threats and opportunities and help a company move away from historical reporting to insightful forecasting.
Companies want to address AI risks but are slow to take action. The top issues include improving privacy, explainability, bias reduction, and improving defenses against cyber threats.
AI lives on data, and data privacy and consumer protection are paramount. (See Consumer Protection and AI7 Expert Tips To Stay Out Of Trouble)
AI is sometimes a black box. In some cases, you need to know how and why AI makes certain decisions. In some cases, it's not that important. However, you need to know when it's essential and when it's not.
AI can have many different types of bias risks. The board and senior management need to understand how to mitigate these risks and ensure that action is taken. (See How AI Can Go Terribly Wrong: 5 Biases That Create Failure)
Cyber threats can become more serious when state actors use AI. It's an arms race. How are you playing the cyber security game with AI? (See If Microsoft Can Be Hacked, What About Your Company? How AI Is Transforming Cybersecurity).
AI will replace some jobs. More important, however, is that AI will replace many tasks. Suppose a job consists mostly of tasks that AI can do more effectively or efficiently than a human. In that case, that job is likely to be replaced. (See Covid Has Changed How We Work. With The Rise Of AI, Is Your Job At Risk?)
If a job has some tasks that are better done by AI, and some that a human does better, then that human's work contribution can be augmented and improved by AI.
In some cases, new jobs will be created related to the development, management, and ongoing maintenance of AI-based systems. Creating these jobs may be challenging, and each company needs to determine the best approach and the type of people they will need to succeed.
Importantly, workers at all levels will need to understand the implications of AI on their jobs. Some will need to be trained in entirely new skills, and some will need to learn that AI is not a threat but an opportunity. All will be concerned about how AI will impact their future.
Management will need to over-communicate the impacts of AI on jobs and on the organization as a whole.
For a company to fully benefit from AI, it requires a cultural shift. The organization needs to become data-driven. It needs to learn how to share data, subject matter expertise, and AI models across the organization, breaking down traditional silos.
Automating routine tasks is important and is an excellent way to get a quick return on investment but isn't a top priority for companies that have adopted AI.
According to PWC, the top-ranked AI apps for 2021 include:
Managing risk, fraud, and cybersecurity threats
Improving AI ethics, explainability, and bias detection
Helping employees make better decisions
These applications are wide-reaching and strategic and will significantly benefit from organizational transformation as they are designed, built, and rolled out.
Whether you choose to buy or build your AI-based solutions, you'll need continuous collaboration between the board, senior management, and project leaders. Further collaboration will be required between line managers, data scientists, AI engineers, and the users of the solution.
Please let me know if you see additional trends that can impact your corporate success in AI. I'd love to hear from you.
AI in next-generation connected systems – Ericsson
Posted: at 9:29 am
We see that hybrid approaches will be useful in next-generation intelligent systems where robust learning of complex models is combined with symbolic logic that provides knowledge representation, reasoning, and explanation facilities. The knowledge could be, for example, universal laws of physics or the best-known methods in a specific domain.
Intelligent systems must be endowed with the ability to make decisions autonomously to fulfill given objectives, robustness to be able to solve a problem in several different ways, and flexibility in decision-making by utilizing various pieces of both prepopulated and learned knowledge.
In a large distributed system, decisions are made at different locations and levels. Some decisions are based on local data and governed by tight control loops with low latency. Other decisions are more strategic, affect the system globally, and are made based on data collected from many different sources. Decisions made at a higher global level may also require real-time responses in critical cases such as power-grid failures, cascading node failures, and so on. The intelligence that automates such large and complex systems must reflect their distributed nature and support the management topology.
Data generated at the edge, in a device or a network edge node, will at times need to be processed in place. It may not always be feasible to transfer data to a centralized cloud; there may be laws governing where data can reside as well as privacy or security implications for data transfer. The scale of decisions in these cases is restricted to a small domain, so the algorithms and computing power necessary are usually fast and light. However, local models could be based on incomplete and biased statistics, which may lead to loss of performance. There is a need to leverage the scale of distribution, make appropriate abstractions of local models and transfer the insights gained to other local models.
Learning about global data patterns from multiple networked devices or nodes without having access to the actual data is also possible. Federated learning has paved the way on this front and more distributed training patterns such as vertical federated learning or split learning have emerged. These new architectures allow machine-learning models to adapt their deployments to the requirements they are to fulfill in terms of data transfer or compute, as well as memory and network resource consumption while maintaining excellent performance guarantees. However, more research is needed, in particular, to cater to different kinds of models and model combinations and stronger privacy guarantees.
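Plain federated averaging, the baseline that the vertical and split variants mentioned above extend, is easy to sketch. The three hard-coded weight vectors below are assumptions standing in for models trained locally on each client's private data; only the weights, never the raw data, reach the server.

```python
def federated_average(client_weights):
    # Server-side step of federated averaging: element-wise mean of the
    # model weights uploaded by each client.
    n = len(client_weights)
    return [sum(w[i] for w in client_weights) / n
            for i in range(len(client_weights[0]))]

clients = [
    [0.9, 0.1, 0.5],   # weights learned on device A's private data
    [0.7, 0.3, 0.5],   # device B
    [0.8, 0.2, 0.5],   # device C
]
global_model = federated_average(clients)
print([round(v, 3) for v in global_model])  # [0.8, 0.2, 0.5]
```

In a real deployment this averaged model would be broadcast back to the clients for another round of local training.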
A common distributed and decentralized paradigm is required to make the best use of local and global data and models as well as determine how to distribute learning and reasoning across nodes to fulfill extreme latency requirements. Such paradigms themselves may be built using machine learning and other AI techniques to incorporate features of self-management, self-optimization, and self-evolution.
AI-based autonomous systems comprise complex models and algorithms; moreover, these models evolve over time with new data and knowledge without manual intervention. The dependence on data, the complexity of algorithms, and the possibility of unexpected emergent behavior of the AI-based systems requires new methodologies to guarantee transparency, explainability, technical robustness and safety, privacy and data governance, nondiscrimination and fairness, human agency and oversight, and societal and environmental wellbeing and accountability. These elements are crucial for ensuring that humans can understand and consequently establish calibrated trust in AI-based systems [5].
Explainable AI(XAI) is used to achieve transparency of AI-based systems that explain for the stakeholder why and how the AI algorithm arrived at a specific decision. The methods are applicable to multiple AI techniques like supervised learning, reinforcement learning (RL), machine reasoning, and so on [5]. XAI is acknowledged as being a crucial feature for the practical deployment of AI models in systems, for satisfying the fundamental rights of AI users related to AI decision-making, and is essential in telecommunications where standardization bodies such as ETSI and IEEE emphasize the need for XAI for the trustworthiness of intelligent communication systems.
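One widely used model-agnostic XAI technique, permutation feature importance, illustrates how such explanations can be produced: shuffle one input feature and measure how much the model's error grows. The toy linear model and data below are assumptions for demonstration; the article does not name a specific method.

```python
import random

random.seed(2)

# Toy model: the output depends strongly on feature 0 and not at all on feature 1.
def model(x):
    return 3.0 * x[0] + 0.0 * x[1]

data = [[random.random(), random.random()] for _ in range(200)]
targets = [model(x) for x in data]

def mse(preds):
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(targets)

def permutation_importance(feature):
    # Shuffle one feature's column and measure how much the error grows.
    shuffled = [row[feature] for row in data]
    random.shuffle(shuffled)
    preds = [model([s if i == feature else v for i, v in enumerate(row)])
             for row, s in zip(data, shuffled)]
    return mse(preds)

# Shuffling the influential feature hurts far more than shuffling the inert one.
print(permutation_importance(0) > permutation_importance(1))  # True
```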
The evolving nature of AI models requires either new approaches or extensions to the existing approaches to ensure the robustness and safety of AI models during both training and deployment in the real world. Along with statistical guarantees provided by adversarial robustness, formal verification techniques could be tailored to give deterministic guarantees for safety-critical AI-based systems. Security is one of the contributors to robustness, where both data and models are to be protected from malicious attacks. Privacy of the data, that is, the source and the intended use, must be preserved. The models themselves must not leak privacy information. Furthermore, data should be validated for fairness and domain expectations because of the bias it can introduce to AI decisions.
Since the stakeholders of AI systems are ultimately humans, methods such as those based on causal reasoning and data provenance need to be developed to provide accountability of decisions. The systems should be designed to continuously learn and refine the stakeholder requirements they are set to meet and escalate to a higher level of automated decision making or eventually to human level when they do not have sufficient confidence in certain decisions.
Connected, intelligent machines of varied types are becoming more present in our lives, ranging from virtual assistants to collaborative robots or cobots [6]. For a proper collaboration, it is essential that these machines can understand human needs and intents accurately. Furthermore, all data related to these machines should be available for situational awareness. AI is fundamental throughout this process to enhance the capabilities and collaboration of humans and machines.
Advances in natural language processing and computer vision have made it possible for machines to interpret human inputs more accurately. This interpretation is further enriched by considering nonverbal communication, such as body language and tone of voice. Emotion detection is now evolving to support the identification of more complex states, such as tiredness and distraction. In addition, progress in areas such as scene understanding and semantic-information extraction is crucial to building a complete knowledge representation of the environment (see Figure 3). All this perceptual information should be used by the machine to determine the optimal action that maximizes the collaboration. Reinforcement learning (RL), in which a policy is trained to take the best action given the current state and observation of the environment, is receiving increasing attention [6]. To avoid unsafe situations, strategies such as safe AI are under investigation to ensure safety throughout the RL model life cycle. Details of RL are provided in the next section.
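As a compact illustration of the RL loop described above, the following sketch trains a tabular Q-learning policy on a toy five-state corridor. The environment, reward, and hyperparameters are invented for the example; real collaborative-robot policies use far richer state spaces and function approximation.

```python
import random

# Minimal tabular Q-learning: the agent starts in state 0 and earns a
# reward of 1 for reaching state 4. All values here are illustrative.
N_STATES = 5
ACTIONS = (0, 1)                        # 0 = move left, 1 = move right
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    """Apply the action; the episode ends when the agent reaches state 4."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def greedy(state):
    """Pick the best-known action, breaking ties randomly."""
    best = max(Q[state])
    return random.choice([a for a in ACTIONS if Q[state][a] == best])

random.seed(0)
for _ in range(200):                    # training episodes
    s, done = 0, False
    while not done:
        a = random.choice(ACTIONS) if random.random() < epsilon else greedy(s)
        s2, r, done = step(s, a)
        # Q-learning update: move toward reward plus discounted future value
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

policy = [greedy(s) for s in range(N_STATES)]
print(policy)  # greedy action per state after training
```

Safe-RL strategies would add constraints on top of this loop, for example forbidding exploratory actions that could put a nearby human at risk.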
AI has also enabled a more complete understanding of how the machine operates with the aid of digital twins. Extended reality (XR) devices are becoming more present in mixed-reality setups to visualize detailed data of machines and interact with digital twins at the same time. This increases human understanding of how machines are operating and helps anticipate their actions. In combination with the XR interface, XAI can be applied to provide reasons for a certain decision taken by the machine.
To make collaboration happen, it is also important that the machines respond and interact with humans in a timely manner. As AI methods involved in the collaborative setup can have high computing complexity and machines might have constrained hardware resources, a distributed intelligence solution is required to achieve real-time responses. This means that the communication infrastructure plays a key role in the whole process by supporting ultra-reliable and low-latency communication networks.
See the original post here:
Posted in Ai
Comments Off on AI in next-generation connected systems – Ericsson
The term ‘AI’ overpromises: Here’s how to make it work for humans instead – Big Think
Posted: at 9:29 am
One of the popular memes in literature, movies and tech journalism is that man's creation will rise up and destroy its maker.
Lately, this has taken the form of a fear of AI becoming omnipotent, rising up and annihilating mankind.
The economy has jumped on the AI bandwagon; for a certain period, if you did not have "AI" in your investor pitch, you could forget about funding. (Tip: If you are just using a Google service to tag some images, you are not doing AI.)
However, is there actually anything deserving of the term AI? I would like to make the point that there isn't, and that our current thinking is too focused on working on systems without thinking much about the humans using them, robbing us of the true benefits.
What companies currently employ in the wild are nearly exclusively statistical pattern recognition and replication engines. Basically, all those systems follow the "monkey see, monkey do" pattern: They get fed a certain amount of data and try to mimic some known (or fabricated) output as closely as possible.
When used to provide value, you give them some real-life input and read off the predicted output. What if they encounter things never seen before? Well, you had better hope that those "new" things are sufficiently similar to previous things, or your "intelligent" system will give quite stupid responses.
But there is not the slightest shred of understanding, reasoning and context in there, just simple re-creation of things seen before. An image recognition system trained to detect sheep in a picture does not have the slightest idea what "sheep" actually means. However, those systems have become so good at recreating the output that they sometimes look like they know what they are doing.
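The point can be made concrete with a toy 1-nearest-neighbour "model", which does nothing but replay the label of the most similar training example (the tiny dataset is invented for illustration). Far from its training data it still produces a confident-looking answer, with no notion that it is out of its depth.

```python
# A 1-nearest-neighbour "model": it simply replays the label of the most
# similar training example, with no understanding of what the labels mean.
train = [
    ((1.0, 1.0), "sheep"),
    ((1.2, 0.9), "sheep"),
    ((5.0, 5.0), "dog"),
    ((5.2, 4.8), "dog"),
]

def predict(x):
    def sq_dist(example):
        point, _label = example
        return (point[0] - x[0]) ** 2 + (point[1] - x[1]) ** 2
    return min(train, key=sq_dist)[1]

print(predict((1.1, 1.0)))     # close to training data: a sensible answer
print(predict((100.0, -50.0))) # far from anything seen: still answers, no doubt expressed
```

Nothing in the mechanism distinguishes the two cases; the second answer merely replays whichever training point happens to be least far away.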
Isn't that good enough, you may ask? Well, for some limited cases, it is. But it is not "intelligent", as it lacks any ability to reason and needs informed users to identify less obvious outliers with possibly harmful downstream effects.
The ladder of thinking has three rungs, pictured in the graph below:
Image: Notger Heinz
Imitation: You imitate what you have been shown. For this, you do not need any understanding, just correlations. You are able to remember and replicate the past. Lab mice or current AI systems are on this rung.
Intervention: You understand causal connections and are able to figure out what would happen if you were to do this now, based on what you learned about the world in the past. This requires a mental model of the part of the world you want to influence and the most relevant of its downstream dependencies. You are able to imagine a different future. You meet dogs and small children on that rung, so it is not a bad place to be.
Counterfactual reasoning: The highest rung, where you wonder what would have happened, had you done this or that in the past. This requires a full world model and a way to simulate the world in your head. You are able to imagine multiple pasts and futures. You meet crows, dolphins and adult humans here.
In order to ascend from one rung to the next, you need to develop a completely new set of skills. You can't just make an imitation system larger and expect it to suddenly be able to reason. Yet this is what we are currently doing with our ever-increasing deep learning models: We think that by giving them more power to imitate, they will at some point magically develop the ability to think. Apart from self-delusional hope and selling nice stories to investors and newspapers, there is little reason to believe that.
And we haven't even touched the topic of computational complexity and the economic and ecological impact of ever-growing models. We might simply not be able to grow our models to the size needed, even if the method worked (which, so far, it doesn't).
Whatever those systems create is the mere semblance of intelligence and in pursuing the goal of generating artificial intelligence by imitation, we are following a cargo cult.
Instead, we should get comfortable with the fact that the current ways will not achieve real AI, and we should stop calling it that. Machine learning (ML) is a perfectly fitting term for a tool with awesome capabilities in the narrow fields where it can be applied. And with any tool, you should not try to make the entire world your nail, but instead find out where to use it and where not.
Machines are strong when it comes to quickly and repeatedly performing a task with minimal uncertainty. They are the ruling class of the first rung.
Humans are strong when it comes to context, understanding and making sense with very little data at hand and high uncertainties. They are the ruling class of the second and third rung.
So what if we shifted our efforts away from the current obsession with removing the human element from everything and instead thought about combining both strengths? There is enormous potential in giving machine learning systems the optimal, human-centric shape, in finding the right human-machine interface, so that both can shine. The ML system prepares the data, performs some automatable tasks and then hands the results to the human, who handles them further according to context.
ML can become something like good staff to a CEO, a workhorse to a farmer or a good user interface to an app user: empowering, saving time, reducing mistakes.
Building an ML system for a given task is rather easy and will become ever easier. But finding a robust, working integration of the data, and of its pre-processed results, with the decision-maker (i.e., the human) is a hard task. There is a reason why most ML projects fail at the stage of adoption and integration with the organization seeking to use them.
Solving this is a creative task: It is about domain understanding, product design and communication. Instead of going ever bigger to serve, say, more targeted ads, the true prize is in connecting data and humans in clever ways to make better decisions and solve tougher and more important problems.
Republished with permission of the World Economic Forum. Read the original article.
See the rest here:
The term 'AI' overpromises: Here's how to make it work for humans instead - Big Think
Posted in Ai
Comments Off on The term ‘AI’ overpromises: Here’s how to make it work for humans instead – Big Think
How AI and 5G will power the next wave of innovation – ZDNet
Posted: at 9:29 am
In the next 10 years, artificial intelligence is expected to transform every industry, and the catalyst for this transformation is 5G. Together, the two technologies will enable fast, secure, and cost-effective deployment of internet of things devices and smart networks.
AI-powered 5G networks will accelerate the "fourth industrial revolution and create unprecedented opportunities in business and society," Ronnie Vasishta, senior vice president of telecom at graphics chipmaker and software platform developer NVIDIA, said in a special address at the 2021 Mobile World Congress in Barcelona several weeks ago.
"Billions of things are located throughout the network and data centers. A ubiquitous 5G network will connect these data centers and intelligent things at the rate, latency, cost, and power required by the application," Vasishta said. "As this network morphs to adapt to 5G, not only will AI drive innovation, but it will also be required to manage, organize, and increase the efficiency of the network itself."
Unlike previous wireless tech generations, 5G was born in the cloud era and designed specifically for IoT. 5G can connect billions of sensors, such as video cameras, to edge data centers for AI processing.
Here are four real-world examples of where the combination of AI and 5G connectivity is reshaping industries:
Thousands of cameras monitoring automated vehicle assembly. Visual inspection software with deep learning algorithms is used to recognize defects in vehicles. This allows car manufacturers to analyze and identify quality issues on the assembly line.
Urban planning and traffic management for smart cities. In an environment where massive amounts of people and things interact with each other, AI-powered visual inspection software monitors all moving and non-moving elements to improve city safety, space management, and traffic.
Conversational AI and natural language processing enabling future services. Chatbots, voice assistants, and other messaging services are helping various industries automate customer support. Conversational AI is evolving to include new ways of communicating with humans using facial expression and contextual awareness.
Powerful edge computing for extended reality. Virtual reality and augmented reality are no longer tethered by cables to workstations. Thanks to advanced wireless technologies such as 5G, industry professionals can make real-time design changes in AR or be virtually present anywhere in VR.
NVIDIA has been developing AI solutions for more than a decade, working with an extensive ecosystem of independent software vendors and startups on the NVIDIA platform. The company recently partnered with Google Cloud to establish an AI-on-5G Innovation Lab, which network infrastructure and AI software providers will use to develop, test, and launch 5G/AI apps.
NVIDIA's AI-on-5G portfolio includes a unified platform, servers, software-defined 5G virtual radio access networks, enterprise AI apps, and software development kits such as Isaac and Metropolis. A commercial version of NVIDIA AI-on-5G will become available in the second half of this calendar year.
Back in April, NVIDIA launched Aerial A100, which, according to Vasishta, is a "new type of computing platform designed for the (network) edge, combining AI and 5G into EGX for the enterprise." NVIDIA EGX is an accelerated computing platform that allows continuous streaming of data between 5G base stations, warehouses, stores, and other locations. When implementing EGX with Aerial A100, organizations get a complete AI suite of capabilities.
5G and AI infrastructures today are inefficient because they're deployed and managed separately. For enterprises, running AI and 5G on the same computing platform reduces equipment, power, and space costs, while providing greater security for AI apps. For telcos, deploying AI apps over 5G opens up new use cases and revenue streams. They can convert every 5G base station to an edge data center to support both 5G workloads and AI services.
Telcos and enterprises can greatly benefit from converged platforms like NVIDIA's AI-on-5G, where 5G serves as a secure, ultra-reliable, and cost-effective communication fabric between sensors and AI apps.
See the rest here:
How AI and 5G will power the next wave of innovation - ZDNet
Posted in Ai
Comments Off on How AI and 5G will power the next wave of innovation – ZDNet