Monthly Archives: August 2022

USM School of Mathematics and Natural Science adds a Physics Masters Degree to Online Learners – The University of Southern Mississippi

Posted: August 30, 2022 at 11:24 pm

Fri, 08/26/2022 - 10:53am | By: Josh Stricklin

The University of Southern Mississippi (USM) continues to expand its online catalog with the addition of the Physics Master of Science degree. The addition of this program means USM offers a practical and accessible opportunity for students to continue their education in physics.

This program offers a unique opportunity to explore a variety of research topics. Dr. Michael Vera, Associate Professor of Physics and Astronomy, says the Physics Master's program consists of courses in Classical Mechanics, Quantum Mechanics, Statistical Physics, and Electromagnetism, as well as research opportunities in a variety of fields, including computation, materials science, nuclear theory, optics, and wave propagation.

These four core classes give students a shared foundation in physics while allowing them to pursue their own interests within the field. And USM's online delivery gives students a unique chance to conduct their work at a distance while having regular access to the professors.

"With its synchronous format," says Dr. Vera, "the online option for the Physics Master's degree provides both the accessibility of remote delivery and the personal interaction of a traditional classroom. Students are able to ask questions, and benefit from the questions of other students, during live class sessions."

With a full spectrum of classes and a strong computational focus, the Master's in Physics program at USM sets students up for a wide range of potential careers. While education and research labs are clear employers, graduates of the Physics program can find themselves in myriad fields ranging from modeling and software development to engineering or even financial modeling.

"The online Physics Master's degree is an incredible opportunity for students," says Dr. Tom Hutchinson, Dean of Online Learning and Enrollment for the Office of Online Learning. "Students can continue their education where they live, while taking advantage of USM's amazing faculty in real time. And because USM offers a wide range of research possibilities, students can do what they love in a field in which they have a genuine interest."

The online Physics MS is designed to help students achieve everything they can in their careers. This program admits students during the fall, spring, and summer semesters, and with USM's online delivery, students can finish school with a world of potential at their fingertips. Students looking to grow their knowledge in physics should visit the online Physics MS page.

See the original post here:

USM School of Mathematics and Natural Science adds a Physics Masters Degree to Online Learners - The University of Southern Mississippi

Posted in Quantum Physics

Quantum computing is an even bigger threat than artificial intelligence – here’s why – WRAL TechWire

Posted: at 11:24 pm

Compounding the danger is the lack of any AI regulation. Instead, unaccountable technology conglomerates, such as Google and Meta, have assumed the roles of judge and jury in all things AI. They are silencing dissenting voices, including their own engineers who warn of the dangers.

The world's failure to rein in the demon of AI (or rather, the crude technologies masquerading as such) should serve as a profound warning. There is an even more powerful emerging technology with the potential to wreak havoc, especially if it is combined with AI: quantum computing. We urgently need to understand this technology's potential impact, regulate it, and prevent it from getting into the wrong hands before it is too late. The world must not repeat the mistakes it made by refusing to regulate AI.

Although still in its infancy, quantum computing operates on a very different basis from today's semiconductor-based computers. If the various projects being pursued around the world succeed, these machines will be immensely powerful, performing tasks in seconds that would take conventional computers millions of years to complete.

Semiconductors represent information as a series of 1s and 0s; that's why we call it digital technology. Quantum computers, on the other hand, use a unit of computing called a qubit. A qubit can hold values of 1 and 0 simultaneously by incorporating a counterintuitive property in quantum physics called superposition. (If you find this confusing, you're in good company; it can be hard to grasp even for experienced engineers.) Thus, two qubits could represent the sequences 1-0, 1-1, 0-1, and 0-0, all in parallel and all at the same instant. That allows a vast increase in computing power, which grows exponentially with each additional qubit.
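To make the exponential scaling concrete, here is a minimal sketch (not from the article, and a purely classical simulation, assuming NumPy is available) showing that describing n qubits requires tracking 2^n amplitudes at once:

```python
# Minimal sketch (assumption, not the article's content): the joint state of
# n qubits is described by 2**n complex amplitudes, which is why simulating
# quantum computers classically becomes intractable as qubits are added.
import numpy as np

def uniform_superposition(n_qubits: int) -> np.ndarray:
    """Return the state vector of n qubits in an equal superposition of all bit strings."""
    dim = 2 ** n_qubits                      # one amplitude per classical bit string
    return np.full(dim, 1 / np.sqrt(dim))    # n=2 covers 00, 01, 10, 11 simultaneously

for n in (2, 10, 30):
    print(f"{n} qubits -> {2 ** n:,} amplitudes tracked in parallel")
```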

If quantum physics leaves the experimental stage and makes it into everyday applications, it will find many uses and change many aspects of life. With their power to quickly crunch immense amounts of data that would overwhelm any of today's systems, quantum computers could potentially enable better weather forecasting, financial analysis, logistics planning, space research, and drug discovery. Some actors will very likely use them for nefarious purposes, compromising bank records, private communications, and passwords on every digital computer in the world. Today's cryptography encodes data in large combinations of numbers that are impossible to crack within a reasonable time using classic digital technology. But quantum computers, taking advantage of quantum mechanical phenomena such as superposition, entanglement, and uncertainty, may potentially be able to try out combinations so rapidly that they could crack encryptions by brute force almost instantaneously.

To be clear, quantum computing is still in an embryonic stage, though where, exactly, we can only guess. Because of the technology's immense potential power and revolutionary applications, quantum computing projects are likely part of defense and other government research already. This kind of research is shrouded in secrecy, and there are a lot of claims and speculation about milestones being reached. China, France, Russia, Germany, the Netherlands, Britain, Canada, and India are known to be pursuing projects. In the United States, contenders include IBM, Google, Intel, and Microsoft as well as various start-ups, defense contractors, and universities.

Despite the lack of publicity, there have been credible demonstrations of some basic applications, including quantum sensors able to detect and measure electromagnetic signals. One such sensor was used to precisely measure Earth's magnetic field from the International Space Station.

In another experiment, Dutch researchers teleported quantum information across a rudimentary quantum communication network. Instead of using conventional optical fibers, the scientists used three small quantum processors to transfer quantum bits from a sender to a receiver. These experiments haven't shown practical applications yet, but they could lay the groundwork for a future quantum internet, where quantum data can be securely transported across a network of quantum computers. So far, that's only been possible in the realm of science fiction.

The Biden administration considers the risk of losing the quantum computing race imminent and dire enough that it issued two presidential directives in May: one to place the National Quantum Initiative advisory committee directly under the authority of the White House and another to direct government agencies to ensure U.S. leadership in quantum computing while mitigating the potential security risks quantum computing poses to cryptographic systems.

Experiments are also working to combine quantum computing with AI to transcend traditional computers' limits. Today, large machine-learning models take months to train on digital computers because of the vast number of calculations that must be performed; OpenAI's GPT-3, for example, has 175 billion parameters. When these models grow into the trillions of parameters (a requirement for today's dumb AI to become smart), they will take even longer to train. Quantum computers could greatly accelerate this process while also using less energy and space. In March 2020, Google launched TensorFlow Quantum, one of the first quantum-AI hybrid platforms that takes the search for patterns and anomalies in huge amounts of data to the next level. Combined with quantum computing, AI could, in theory, lead to even more revolutionary outcomes than the AI sentience that critics have been warning about.

Given the potential scope and capabilities of quantum technology, it is absolutely crucial not to repeat the mistakes made with AI, where regulatory failure has given the world algorithmic bias that hypercharges human prejudices, social media that favors conspiracy theories, and attacks on the institutions of democracy fueled by AI-generated fake news and social media posts. The dangers lie in the machines' ability to make decisions autonomously, with flaws in the computer code resulting in unanticipated, often detrimental, outcomes. In 2021, the quantum community issued a call for action to urgently address these concerns. In addition, critical public and private intellectual property on quantum-enabling technologies must be protected from theft and abuse by the United States' adversaries.

There are national defense issues involved as well. In security technology circles, the holy grail is what's called a cryptanalytically relevant quantum computer: a system capable of breaking much of the public-key cryptography that digital systems around the world use, which would enable blockchain cracking, for example. That's a very dangerous capability to have in the hands of an adversarial regime.

Experts warn that China appears to have a lead in various areas of quantum technology, such as quantum networks and quantum processors. Two of the world's most powerful quantum computers were built in China, and as far back as 2017, scientists at the University of Science and Technology of China in Hefei built the world's first quantum communication network using advanced satellites. To be sure, these publicly disclosed projects are scientific machines built to prove the concept, with relatively little bearing on the future viability of quantum computing. However, knowing that all governments are pursuing the technology simply to prevent an adversary from being first, these Chinese successes could well indicate an advantage over the United States and the rest of the West.

Beyond accelerating research, targeted controls on developers, users, and exports should therefore be implemented without delay. Patents, trade secrets, and related intellectual property rights should be tightly secured, a return to the kind of technology control that was a major element of security policy during the Cold War. The revolutionary potential of quantum computing raises the risks associated with intellectual property theft by China and other countries to a new level.

Finally, to avoid the ethical problems that went so horribly wrong with AI and machine learning, democratic nations need to institute controls that both correspond to the power of the technology and respect democratic values, human rights, and fundamental freedoms. Governments must urgently begin to think about regulations, standards, and responsible uses, and learn from the way countries handled or mishandled other revolutionary technologies, including AI, nanotechnology, biotechnology, semiconductors, and nuclear fission. The United States and other democratic nations must not make the same mistake they made with AI, and must prepare for tomorrow's quantum era today.

About the authors

Vivek Wadhwa is a columnist at Foreign Policy, an entrepreneur, and the co-author of From Incremental to Exponential: How Large Companies Can See the Future and Rethink Innovation. Twitter: @wadhwa

Mauritz Kop is a fellow and visiting scholar at Stanford University. Twitter: @MauritzKop

View original post here:

Quantum computing is an even bigger threat than artificial intelligence - here's why - WRAL TechWire

Posted in Quantum Physics

Gravity Has Stayed Constant For The Entire Age of The Universe, Study Finds – ScienceAlert

Posted: at 11:24 pm

For over a century, astronomers have known that the Universe has been expanding since the Big Bang. For the first 8 billion years, the expansion rate was relatively consistent since it was held back by the force of gravitation.

However, thanks to missions like the Hubble Space Telescope, astronomers have since learned that the rate of expansion began accelerating roughly 5 billion years ago.

This led to the widely-accepted theory that a mysterious force is behind the expansion (known as Dark Energy), while some insist that the force of gravity may have changed over time.

This is a contentious hypothesis since it means that Einstein's General Theory of Relativity (which has been validated nine ways from Sunday) is wrong.

But according to a new study by the international Dark Energy Survey (DES) Collaboration, the nature of gravity has remained the same throughout the entire history of the Universe.

These findings come shortly before two next-generation space telescopes (Nancy Grace Roman and Euclid) are sent to space to conduct even more precise measurements of gravity and its role in cosmic evolution.

The DES Collaboration comprises researchers from universities and institutes in the US, UK, Canada, Chile, Spain, Brazil, Germany, Japan, Italy, Australia, Norway, and Switzerland.

Their third-year findings were presented at the International Conference on Particle Physics and Cosmology (COSMO'22), which took place in Rio de Janeiro from August 22nd to 26th.

They were also shared in a paper titled "Dark Energy Survey Year 3 Results: Constraints on extensions to Lambda CDM with weak lensing and galaxy clustering" that appeared in the American Physical Society journal Physical Review D.

Einstein's General Theory of Relativity, which he finalized in 1915, describes how the curvature of spacetime is altered in the presence of gravity.

For over a century, this theory has accurately predicted almost everything in our Universe, from Mercury's orbit and gravitational lensing to the existence of black holes.

But between the 1960s and 1990s, two discrepancies were discovered that led astronomers to wonder if Einstein's theory was correct. First, astronomers noted that the gravitational effects of massive structures (like galaxies and galaxy clusters) could not be accounted for by their observed mass.

This gave rise to the theory that space is filled with an invisible mass that interacts with 'normal' (aka. 'luminous' or visible) matter via gravity. Meanwhile, the observed expansion of the cosmos (and how it is subject to acceleration) gave rise to the theory of Dark Energy and the Lambda Cold Dark Matter (Lambda CDM) cosmological model.

Cold Dark Matter is an interpretation where this mass is composed of large, slow-moving particles while Lambda represents Dark Energy. In theory, these two forces constitute 95 percent of the total mass-energy content of the Universe, yet all attempts to find direct evidence of them have failed.

The only possible alternative is that Relativity needs to be modified to account for these discrepancies. To find out if that's the case, members of the DES used the Victor M. Blanco 4-meter Telescope at the Cerro Tololo Inter-American Observatory in Chile to observe galaxies up to 5 billion light-years away.

They hoped to determine if gravity has varied over the past 5 billion years (since the acceleration began) or over cosmic distances. They also consulted data from other telescopes, including the ESA's Planck satellite, which has been mapping the Cosmic Microwave Background (CMB) since 2009.

They paid close attention to how the images they saw contained subtle distortions due to dark matter (gravitational lenses). As the first image released from the James Webb Space Telescope (JWST) illustrated, scientists can infer the strength of gravity by analyzing the extent to which a gravitational lens distorts spacetime.

So far, the DES Collaboration has measured the shapes of over 100 million galaxies, and the observations all match what General Relativity predicts. The good news is that Einstein's theory still holds, but this also means that the mystery of Dark Energy persists for the time being.

Luckily, astronomers will not have to wait long before new and more detailed data is available. First, there's the ESA's Euclid mission, slated for launch by 2023 at the latest. This mission will map the geometry of the Universe, looking 8 billion years into the past to measure the effects of Dark Matter and Dark Energy.

By May 2027, it will be joined by NASA's Nancy Grace Roman Space Telescope, which will look back over 11 billion years. These will be the most detailed cosmological surveys ever conducted and are expected to provide the most compelling evidence for (or against) the Lambda-CDM model.

As study co-author Agnès Ferté, who conducted the research as a postdoctoral researcher at JPL, said in a recent NASA press release:

"There is still room to challenge Einstein's theory of gravity, as measurements get more and more precise. But we still have so much to do before we're ready for Euclid and Roman. So it's essential we continue to collaborate with scientists around the world on this problem as we've done with the Dark Energy Survey."

In addition, observations provided by Webb of the earliest stars and galaxies in the Universe will allow astronomers to chart the evolution of the cosmos from its earliest periods. These efforts have the potential to answer some of the most pressing mysteries in the Universe.

These include how Relativity and the observed mass and expansion of the Universe coincide, but they could also provide insight into how gravity and the other fundamental forces of the Universe (as described by quantum mechanics) interact, pointing the way toward a Theory of Everything (ToE).

If there's one thing that characterizes the current era of astronomy, it is the way that long-term surveys and next-generation instruments are coming together to test what has been the stuff of theory until now.

The potential breakthroughs that these could lead to are sure to both delight and confound us. But ultimately, they will revolutionize the way we look at the Universe.

This article was originally published by Universe Today. Read the original article.

Go here to read the rest:

Gravity Has Stayed Constant For The Entire Age of The Universe, Study Finds - ScienceAlert

Posted in Quantum Physics

Artificial Intelligence, Critical Systems, and the Control Problem – HS Today – HSToday

Posted: at 11:21 pm

Artificial Intelligence (AI) is transforming our way of life, from new forms of social organization and scientific discovery to defense and intelligence. This explosive progress is especially apparent in the subfield of machine learning (ML), where AI systems learn autonomously by identifying patterns in large volumes of data.[1] Indeed, over the last five years, the fields of AI and ML have witnessed stunning advancements in computer vision (e.g., object recognition), speech recognition, and scientific discovery.[2], [3], [4], [5] However, these advances are not without risk, as transformative technologies are generally accompanied by a significant risk profile, with notable examples including the discovery of nuclear energy, the Internet, and synthetic biology. Experts are increasingly voicing concerns over AI risk from misuse by state and non-state actors, principally in the areas of cybersecurity and disinformation propagation. However, issues of control (for example, how advanced AI decision-making aligns with human goals) are not as prominent in the discussion of risk and could ultimately be equally or more dangerous than threats from nefarious actors. Modern ML systems are not programmed (as programming is typically understood) but rather independently develop strategies to complete objectives, and those strategies can be mis-specified, learned incorrectly, or executed in unexpected ways. This issue becomes more pronounced as AI becomes more ubiquitous and we become more reliant on AI decision-making. Thus, as AI is increasingly entwined with tightly coupled critical systems, the focus must expand beyond accidents and misuse to the autonomous decision processes themselves.

The principal mid- to long-term risks from AI systems fall into three broad categories: risks of misuse or accidents, structural risks, and misaligned objectives. The misuse or accident category includes things such as AI-enabled cyber-attacks with increased speed and effectiveness or the generation and distribution of disinformation at scale.[6] In critical infrastructures, AI accidents could manifest as system failures with potential secondary and tertiary effects across connected networks. A contemporary example of an AI accident is the New York Stock Exchange (NYSE) Flash Crash of 2010, which drove the market down 600 points in 5 minutes.[7] Such rapid and unexpected operations from algorithmic trading platforms will only increase in destructive potential as systems increase in complexity, interconnectedness, and autonomy.

The structural risks category is concerned with how AI technologies shape the social and geopolitical environment in which they are deployed. Important contemporary examples include the impact of social media content selection algorithms on political polarization or uncertainty in nuclear deterrence and the offense-to-defense balance.[8],[9] For example, the integration of AI into critical systems, including peripheral processes (e.g., command and control, targeting, supply chain, and logistics), can degrade multilateral trust in deterrence.[10] Indeed, increasing autonomy in all links of the national defense chain, from decision support to offensive weapons deployment, compounds the uncertainty already under discussion with autonomous weapons.[11]

Misaligned objectives are another important failure mode. Since ML systems develop independent strategies, a concern is that AI systems will misinterpret the correct objectives, develop destructive subgoals, or complete them in an unpredictable way. While typically grouped together, it is important to clarify the differences between a system crash and actions executed by a misaligned AI system so that appropriate risk mitigation measures can be evaluated. Understanding the range of potential failures may help in the allocation of resources for research on system robustness, interpretability, or AI alignment.

At its most basic level, AI alignment involves teaching AI systems to accurately capture what we want and complete it in a safe and ethical manner. Misalignment of AI systems poses the highest downside risk of catastrophic failures. While system failures by themselves could be immensely damaging, alignment failures could include unexpected and surprising actions outside the system's intent or window of probability. However, ensuring the safe and accurate interpretation of human objectives is deceptively complex in AI systems. On the surface, this seems straightforward, but the problem is far from obvious, with unimaginably complex subtleties that could lead to dangerous consequences.

In contrast with nuclear weapons or cyber threats, where the risks are more obvious, risks from AI misalignment can be less clear. These complexities have led to misinterpretation and confusion, with some attributing the concerns to disobedient or malicious AI systems.[12] However, the concern is not that AI will defy its programming but rather that it will follow the programming exactly and develop novel, unanticipated solutions. In effect, the AI will pursue the objective accurately but may yield an unintended, even harmful, consequence. Google's AlphaGo program, which defeated the world champion Go[13] player in 2016, provides an illustrative example of the potential for unexpected solutions. Trained on millions of games, AlphaGo's neural network learned completely unexpected actions outside of the human frame of reference.[14] As Chris Anderson explains, what took the human brain thousands of years to optimize, Google's AlphaGo completed in three years, executing better, almost alien solutions that we hadn't even considered.[15] This novelty illustrates how unpredictable AI systems can be when permitted to develop their own strategies to accomplish a defined objective.

To appreciate how AI systems pose these risks by default, it is important to understand how and why AI systems pursue objectives. As described, ML is designed not to program distinct instructions but to allow the AI to determine the most efficient means to an objective. As learning progresses, the training parameters are adjusted to minimize the difference between the pursued objective and the actual value by incentivizing positive behavior (known as reinforcement learning, or RL).[16],[17] Just as humans pursue positive reinforcement, AI agents are goal-directed entities, designed to pursue objectives, whether the goal aligns with the original intent or not.
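As a concrete illustration of this reward-driven loop, here is a minimal sketch (my own toy example, not from the article): a tabular Q-learning agent that is never given step-by-step instructions, only a reward signal whose maximization shapes its behavior.

```python
# Minimal sketch (assumption: toy example for illustration): a Q-learning agent
# on a 5-state corridor. It receives reward only at the goal and adjusts its
# value estimates to maximize that reward, which is the sense in which RL
# agents are "goal-directed".
import random

N_STATES, GOAL = 5, 4
Q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[state][action]; actions: 0 = left, 1 = right
alpha, gamma, epsilon = 0.1, 0.9, 0.2       # learning rate, discount, exploration rate

for episode in range(500):
    s = 0
    while s != GOAL:
        # epsilon-greedy action selection: mostly exploit, sometimes explore
        a = random.choice([0, 1]) if random.random() < epsilon else Q[s].index(max(Q[s]))
        s_next = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
        r = 1.0 if s_next == GOAL else 0.0  # reward signal defines the objective
        # temporal-difference update: nudge Q toward reward + discounted future value
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

print("Learned policy:", ["right" if q[1] > q[0] else "left" for q in Q])
```

Even in this toy setting, nothing in the code says "walk right"; the behavior emerges solely from whatever the reward happens to incentivize, which is why a mis-specified reward can produce unintended strategies.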

Computer science professor Steve Omohundro describes a series of innate AI drives that systems will pursue unless explicitly counteracted.[18] According to Omohundro, distinct from their programming, AI agents will strive to self-improve, seek to acquire resources, and be self-protective.[19] These innate drives were recently demonstrated experimentally, where AI agents tended to seek power over the environment to achieve objectives most efficiently.[20] Thus, AI agents are naturally incentivized to seek out useful resources to accomplish an objective. This power-seeking behavior was reported by OpenAI, where two teams of agents, instructed to play hide-and-seek in a simulated environment, proceeded to hoard objects from the competition in what OpenAI described as tool use distinct from the actual objective.[21] The AI teams learned that the objects were instrumental in completing the objective.[22] Thus, a significant concern for AI researchers is the undefined instrumental sub-goals that are pursued to complete the final objective. This tendency to instantiate sub-goals was termed the instrumental convergence thesis by Oxford philosopher Nick Bostrom, who postulated that intermediate sub-goals are likely to be pursued by an intelligent agent to complete the final objective more efficiently.[23] Consider an advanced AI system optimized to ensure adequate power between several cities. The agent could develop a sub-goal of capturing and redirecting bulk power from other locations to ensure power grid stability. Another example is an autonomous weapons system, designed to identify targets, that develops a unique set of intermediate indicators to determine the identity and location of the enemy. Instrumental sub-goals could be as simple as locking a computer-controlled access door or breaking traffic laws in an autonomous car, or as severe as destabilizing a regional power grid or nuclear power control system. These hypothetical and novel AI decision processes raise troubling questions in the context of conflict or the safety of critical systems. The range of possible AI solutions is too large to consider and can only become more consequential as systems grow more capable and complex. The effect of AI misalignment could be disastrous if the AI discovers an unanticipated optimal solution to a problem that results in a critical system becoming inoperable or yielding a catastrophic result.

While the control problem is troubling by itself, the integration of multiagent systems could be far more dangerous and could lead to other (as of now unanticipated) failure modes between systems. Just like complex societies, complex agent communities could manifest new capabilities and emergent failure modes unique to the complex system. Indeed, AI failures are unlikely to happen in isolation and the roadmap for multiagent AI environments is currently underway in both the public and private sectors.

Several U.S. government initiatives for next-generation intelligent networks include adaptive learning agents for autonomous processes. The Army's Joint All-Domain Command and Control (JADC2) concept for networked operations and the Resilient and Intelligent Next-Generation Systems (RINGS) program, put forth by the National Institute of Standards and Technology (NIST), are two notable ongoing initiatives.[24], [25] Literature on the cognitive Internet of Things (IoT) points to the extent of autonomy planned for self-configuring, adaptive AI communities and societies to steer networks through managing user intent, supervision of autonomy, and control.[26] A recent report from the world's largest technical professional organization, IEEE, outlines the benefits of deep reinforcement learning (RL) agents for cyber security, proposing that, since RL agents are highly capable of solving complex, dynamic, and especially high-dimensional problems, they are optimal for cyber defense.[27] Researchers propose that RL agents be designed and released autonomously to configure the network, prevent cyber exploits, detect and counter jamming attacks, and offensively target distributed denial-of-service attacks.[28] Other researchers have proposed automated penetration testing and the ability of RL agents to self-replicate, while still others propose autonomous cyber red-teaming agents for cyber defense.[29], [30], [31]

Considering the host of problems discussed, from AI alignment and unexpected side effects to the issue of control, jumping headfirst into efforts that give AI meaningful control over critical systems (such as the examples described above) without careful consideration of the potential unexpected (or potentially catastrophic) outcomes does not appear to be the appropriate course of action. Proposing the use of one autonomous system in warfare is concerning, but releasing millions into critical networks is another matter entirely. Researcher David Manheim explains that multiagent systems are vulnerable to entirely novel risks, such as over-optimization failures, where optimization pressure allows individual agents to circumvent designed limits.[32] As Manheim describes, in many-agent systems, even relatively simple systems can become complex adaptive systems due to agent behavior.[33] At the same time, research demonstrates that multiagent environments lead to greater agent generalization, thus reducing the capability gap that separates human intelligence from machine intelligence.[34] In contrast, some authors present multiagent systems as a viable solution to the control problem, with stable, bounded capabilities, while others note the broad uncertainty and potential for self-adaptation and mutation.[35] Yet the author admits that there are risks and that the multiplicative growth of RL agents could potentially lead to unexpected failures, with the potential for the manifestation of malignant agential behaviors.[36],[37] AI researcher Trent McConaghy highlights the risk from adaptive AI systems, specifically decentralized autonomous organizations (DAOs) in blockchain networks. McConaghy suggests that rather than a powerful AI system taking control of resources, as is typically discussed, the situation may be far more subtle: we could simply hand over global resources to self-replicating communities of adaptive AI systems (e.g., Bitcoin's increasing energy expenditures show no sign of slowing).[38]

Advanced AI capabilities in next-generation networks that dynamically reconfigure and reorganize network operations pose undeniable risks to security and stability.[39],[40] A complex landscape of AI agents, designed to autonomously protect critical networks or conduct offensive operations, would invariably need to develop subgoals to manage the diversity of objectives. Thus, whether for individual systems or autonomous collectives, the web of potential failures and subtle side effects could unleash unpredictable dangers leading to catastrophic second- and third-order effects. As AI systems are currently designed, understanding the impact of the subgoals (or even their existence) could be extremely difficult or impossible. The AI examples above illustrate critical infrastructure and national security cases that are currently in discussion, but the reality could be far more complex, unexpected, and dangerous. While most AI researchers expect that safety will develop concurrently with system autonomy and complexity, there is no certainty in this proposition. Indeed, if there is even a minute chance of misalignment in a deployed AI system (or systems) in critical infrastructure or national defense, it is important that researchers dedicate a portion of resources to evaluating the risks. Decision makers in government and industry must consider these risks and potential means to mitigate them before generalized AI systems are integrated into critical and national security infrastructure, because to do otherwise could lead to catastrophic failure modes that we may not be able to fully anticipate, endure, or overcome.

Disclaimer: The authors are responsible for the content of this article. The views expressed do not reflect the official policy or position of the National Intelligence University, the National Geospatial Intelligence Agency, the Department of Defense, the Office of the Director of National Intelligence, the U.S. Intelligence Community, or the U.S. Government.

Anderson, Chris. Life. In Possible Minds: Twenty-Five Ways of Looking at AI, by John Brockman, 150. New York: Penguin Books, 2019.

Avatrade Staff. The Flash Crash of 2010. Avatrade. August 26, 2021. https://www.avatrade.com/blog/trading-history/the-flash-crash-of-2010 (accessed August 24, 2022).

Baker, Bowen, et al. Emergent Tool Use From Multi-Agent Autocurricula. arXiv:1909.07528v2, 2020.

Berggren, Viktor, et al. Artificial intelligence in next-generation connected systems. Ericsson. September 2021. https://www.ericsson.com/en/reports-and-papers/white-papers/artificial-intelligence-in-next-generation-connected-systems (accessed May 3, 2022).

Bostrom, Nick. The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents. Minds and Machines 22, no. 2 (2012): 71-85.

Brown, Tom B., et al. Language Models are Few-Shot Learners. arXiv:2005.14165, 2020.

Buchanan, Ben, John Bansemer, Dakota Cary, Jack Lucas, and Micah Musser. Georgetown University Center for Security and Emerging Technology. Automating Cyber Attacks: Hype and Reality. November 2020. https://cset.georgetown.edu/publication/automating-cyber-attacks/.

Byford, Sam. AlphaGo's battle with Lee Se-dol is something I'll never forget. The Verge. March 15, 2016. https://www.theverge.com/2016/3/15/11234816/alphago-vs-lee-sedol-go-game-recap (accessed August 19, 2022).

Drexler, K Eric. Reframing Superintelligence: Comprehensive AI Services as General Intelligence. Future of Humanity Institute. 2019. https://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf (accessed August 19, 2022).

Duettmann, Allison. WELCOME NEW PLAYERS | Gaming the Future. Foresight Institute. February 14, 2022. https://foresightinstitute.substack.com/p/new-players?s=r (accessed August 19, 2022).

Edison, Bill. Creating an AI red team to protect critical infrastructure. MITRE Corporation. September 2019. https://www.mitre.org/publications/project-stories/creating-an-ai-red-team-to-protect-critical-infrastructure (accessed August 19, 2022).

Etzioni, Oren. No, the Experts Don't Think Superintelligent AI is a Threat to Humanity. MIT Technology Review. September 20, 2016. https://www.technologyreview.com/2016/09/20/70131/no-the-experts-dont-think-superintelligent-ai-is-a-threat-to-humanity/ (accessed August 19, 2022).

Marcus, Gary, Ernest Davis, and Scott Aaronson. A very preliminary analysis of DALL-E 2. arXiv:2204.13807, 2022.

GCN Staff. NSF, NIST, DOD team up on resilient next-gen networking. GCN. April 30, 2021. https://gcn.com/cybersecurity/2021/04/nsf-nist-dod-team-up-on-resilient-next-gen-networking/315337/ (accessed May 1, 2022).

Jumper, John, et al. Highly accurate protein structure prediction with AlphaFold. Nature 596 (August 2021): 583589.

Kallenborn, Zachary. Swords and Shields: Autonomy, AI, and the Offense-Defense Balance. Georgetown Journal of International Affairs. November 22, 2021. https://gjia.georgetown.edu/2021/11/22/swords-and-shields-autonomy-ai-and-the-offense-defense-balance/ (accessed August 19, 2022).

Kegel, Helene. Understanding Gradient Descent in Machine Learning. Medium. November 17, 2021. https://medium.com/mlearning-ai/understanding-gradient-descent-in-machine-learning-f48c211c391a (accessed August 19, 2022).

Krakovna, Victoria. Specification gaming: the flip side of AI ingenuity. Medium. April 11, 2020. https://deepmindsafetyresearch.medium.com/specification-gaming-the-flip-side-of-ai-ingenuity-c85bdb0deeb4 (accessed August 19, 2022).

Littman, Michael L, et al. Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) Study Panel Report. Stanford University. September 2021. http://ai100.stanford.edu/2021-report (accessed August 19, 2022).

Manheim, David. Overoptimization Failures and Specification Gaming in Multi-agent Systems. Deep AI. October 16, 2018. https://deepai.org/publication/overoptimization-failures-and-specification-gaming-in-multi-agent-systems (accessed August 19, 2022).

Nguyen, Thanh Thi, and Vijay Janapa Reddi. Deep Reinforcement Learning for Cyber Security. IEEE Transactions on Neural Networks and Learning Systems. IEEE, 2021. 1-17.

Omohundro, Stephen M. The Basic AI Drives. Proceedings of the 2008 conference on Artificial General Intelligence 2008: Proceedings of the First AGI Conference. Amsterdam: IOS Press, 2008. 483492.

Panfili, Martina, Alessandro Giuseppi, Andrea Fiaschetti, Homoud B. Al-Jibreen, Antonio Pietrabissa, and Francesco Delli Priscoli. A Game-Theoretical Approach to Cyber-Security of Critical Infrastructures Based on Multi-Agent Reinforcement Learning. 2018 26th Mediterranean Conference on Control and Automation (MED). IEEE, 2018. 460-465.

Pico-Valencia, Pablo, and Juan A Holgado-Terriza. Agentification of the Internet of Things: A Systematic Literature Review. International Journal of Distributed Sensor Networks 14, no. 10 (2018).

Pomerleau, Mark. US Army network modernization sets the stage for JADC2. C4ISRNet. February 9, 2022. https://www.c4isrnet.com/it-networks/2022/02/09/us-army-network-modernization-sets-the-stage-for-jadc2/ (accessed August 19, 2022).

Russell, Stuart. Human Compatible: Artificial Intelligence and the Problem of Control. New York: Viking, 2019.

Shah, Rohin. Reframing Superintelligence: Comprehensive AI Services as General Intelligence. AI Alignment Forum. January 8, 2019. https://www.alignmentforum.org/posts/x3fNwSe5aWZb5yXEG/reframing-superintelligence-comprehensive-ai-services-as (accessed August 19, 2022).

Avin, Shahar, and S. M. Amadae. Autonomy and machine learning at the interface of nuclear weapons, computers and people. In The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk, by Vincent Boulanin, 105-118. Stockholm: Stockholm International Peace Research Institute, 2019.

Trevino, Marty. Cyber Physical Systems: The Coming Singularity. Prism 8, no. 3 (2019): 4.

Turner, Alexander Matt, Logan Smith, Rohin Shah, Andrew Critch, and Prasad Tadepalli. Optimal Policies Tend to Seek Power. arXiv:1912.01683, 2021: 8-9.

Winder, Phil. Automating Cyber-Security With Reinforcement Learning. Winder.AI. n.d. https://winder.ai/automating-cyber-security-with-reinforcement-learning/ (accessed August 19, 2022).

Zeng, Andy, et al. Socratic Models: Composing Zero-Shot Multimodal Reasoning with Language. arXiv:2204.00598 (arXiv), April 2022.

Zewe, Adam. Does this artificial intelligence think like a human? April 6, 2022. https://news.mit.edu/2022/does-this-artificial-intelligence-think-human-0406 (accessed August 19, 2022).

Zwetsloot, Remco, and Allan Dafoe. Lawfare. Thinking About Risks From AI: Accidents, Misuse and Structure. February 11, 2019. https://www.lawfareblog.com/thinking-about-risks-ai-accidents-misuse-and-structure (accessed August 19, 2022).

[1] (Zewe 2022)

[2] (Littman, et al. 2021)

[3] (Jumper, et al. 2021)

[4] (Brown, et al. 2020)

[5] (Marcus, Davis and Aaronson 2022)

[6] (Buchanan, et al. 2020)

[7] (Avatrade Staff 2021)

[8] (Russell 2019, 9-10)

[9] (Zwetsloot and Dafoe 2019)

[12] (Etzioni 2016)

[13] Go is an ancient Chinese strategy board game.

[14] (Byford 2016)

[15] (Anderson 2019, 150)

[16] (Kegel 2021)

[17] (Krakovna 2020)

[18] (Omohundro 2008, 483-492)

[19] Ibid., 484.

[20] (Turner, et al. 2021, 8-9)

[21] (Baker, et al. 2020)

[22] Ibid.

[23] (Bostrom 2012, 71-85)

[24] (GCN Staff 2021)

[25] (Pomerleau 2022)

[26] (Berggren, et al. 2021)

[27] (Nguyen and Reddi 2021)

[28] Ibid.

[29] (Edison 2019)

[30] (Panfili, et al. 2018)

[31] (Winder n.d.)

[32] (Manheim 2018)

[33] Ibid.

[34] (Zeng, et al. 2022)

[35] (Drexler 2019, 18)

[36] Ibid.

[37] (Shah 2019)

[38] (Duettmann 2022)

[39] (Trevino 2019)

[40] (Pico-Valencia and Holgado-Terriza 2018)

Continue reading here:

Artificial Intelligence, Critical Systems, and the Control Problem - HS Today - HSToday

Posted in Artificial Intelligence

UW-Stevens Point to offer series on the future of artificial intelligence – Point/Plover Metro Wire

Posted: at 11:21 pm

A series of free community lectures and film screenings at the University of Wisconsin-Stevens Point will look at what may happen "When Robots Rule the World."

Presented by the College of Letters and Science, the series will explore the futuristic portrayal of robots in film, the daily use of artificial intelligence (A.I.) in mundane tasks and the latest advances in the field of human-centered A.I. and its implications.

The series begins Sept. 13 and continues throughout the academic year, featuring lectures by UW-Stevens Point faculty and other experts as well as film screenings and a panel discussion. Events will take place on campus or at the Portage County Public Library and are free and open to the public. The lectures will also be available via live stream on the website, http://www.uwsp.edu/whenrobotsrule.

A lecture, "Dare to be Human," kicks off the series at 7 p.m., Tuesday, Sept. 13, at The Encore in the UW-Stevens Point Dreyfus University Center (DUC). Associate Professor Vera Klekovkina, world languages and literatures, will discuss how robots could become pets, friends, confidants, and even romantic partners, and the similarities and differences between robotic and human relationships. Cro Crga Studio will also offer a creative performance.

Additional fall events include:

Human-centered A.I. is an emerging discipline that seeks to empower humans but brings up issues in privacy, equity, security, and transparency.

The series is sponsored by the University Personnel Development Committee Research and Creative Activities Grant.

Read the original post:

UW-Stevens Point to offer series on the future of artificial intelligence - Point/Plover Metro Wire

Posted in Artificial Intelligence

Artificial Intelligence-powered (AI) Spatial Biology Market Market to Record an Exponential CAGR by 2030 – Exclusive Report by InsightAce Analytic -…

Posted: at 11:21 pm

JERSEY CITY, N.J., Aug. 30, 2022 /PRNewswire/ -- InsightAce Analytic Pvt. Ltd. announces the release of market assessment report on "Global Artificial Intelligence-powered (AI) Spatial Biology Market By Data Analyzed (DNA, RNA, and Protein) By Application (Translation Research, Drug Discovery and Development, Single Cell Analysis, Cell Biology, Clinical Diagnostics, and Other Applications) Technology Trends, Industry Competition Analysis, Revenue and Forecast Till 2030"

According to the latest research by InsightAce Analytic, the global artificial intelligence-powered (AI) spatial biology market is expected to record a promising CAGR of 16.4% during the period of 2022-2030. By region, North America dominates the global market with the major contribution in terms of revenue.

Request for Sample Pages: https://www.insightaceanalytic.com/request-sample/1358

In recent years, enormous advances in biological research and automated molecular biology have been made using artificial intelligence (AI). AI can effectively assist in specific areas of biology and may enable novel biotechnology-derived medicines, facilitating the deployment of precision medicine approaches. It is predicted that applying AI to cell-by-cell maps of gene or protein activity will lead to major inventions in spatial biology. The next significant step in the comprehension of biology might be achieved by incorporating spatially resolved data. When applied to gene expression, spatial transcriptomics (spRNA-Seq) combines the strengths of conventional histopathology with those of single-cell gene expression profiling. Mapping specific disease pathologies is made possible by linking the spatial arrangement of molecules in cells and tissues with their gene expression state. Machine learning has the ability to generate images of gene transcripts at sub-cellular resolution and decipher molecular proximities from sequencing data.

Artificial intelligence in spatial biology has seen faster development in sequencing and analysis, drug discovery, and disease diagnosis. Increased interest in AI in spatial biology can be attributed to the widespread use of similar technologies in other sectors and the growing popularity of artificial intelligence more broadly. Market expansion can also be attributed to government spending on research around the world. The increasing demand for novel analytical tools and subsequent funding has resulted in the market launch of high-throughput technology. However, despite the availability of new high-complexity spatial imaging methods, it is still challenging and labour-intensive to extract, analyze, and interpret biological information from these images.

In 2021, the market was led by North America. Technological developments, the existence of a well-established research infrastructure and key players, and increased spending in drug discovery R&D are all factors contributing to the expansion of the regional market. Due to the region's large and growing demand from research and the pharmaceutical industry, North America is currently the largest market for artificial intelligence applications in spatial omics.

The major players operating in the artificial intelligence-powered (AI) spatial biology market are Nucleai, Inc., Reveal Biosciences, Inc., Alpenglow Biosciences, SpIntellx, Inc., ONCOHOST, Pathr.ai, Phenomic AI, BioTuring Inc., Indica Labs, Rebus Biosystems, Inc., Genoskin, Algorithmic Biologics, Castle Biosciences, Inc. (TissueCypher), and other prominent players. The leading spatial omics solution providers are focusing on strategies like investments in innovation, partnerships, collaborations, mergers, and agreements with AI-based service providers.

Curious about the full report? Get Report Details @ https://www.insightaceanalytic.com/enquiry-before-buying/1358

Key Developments In The Market

For More Customization @ https://www.insightaceanalytic.com/customisation/1358

Market Segments

Global Artificial Intelligence-powered (AI) Spatial Biology Market, by Data Analyzed, 2022-2030 (Value US$ Mn)

Global Artificial Intelligence-powered (AI) Spatial Biology Market, by Application, 2022-2030 (Value US$ Mn)

Global Artificial Intelligence-powered (AI) Spatial Biology Market, by Region, 2022-2030 (Value US$ Mn)

North America Artificial Intelligence-powered (AI) Spatial Biology Market, by Country, 2022-2030 (Value US$ Mn)

Europe Artificial Intelligence-powered (AI) Spatial Biology Market, by Country, 2022-2030 (Value US$ Mn)

Asia Pacific Artificial Intelligence-powered (AI) Spatial Biology Market, by Country, 2022-2030 (Value US$ Mn)

Latin America Artificial Intelligence-powered (AI) Spatial Biology Market, by Country, 2022-2030 (Value US$ Mn)

Middle East & Africa Artificial Intelligence-powered (AI) Spatial Biology Market, by Country, 2022-2030 (Value US$ Mn)

Why you should buy this report:

Other Related Reports Published by InsightAce Analytic:

Global Spatial Omics Solutions Market

Global Proteome Profiling Services Market

Global Single-Cell Bioinformatics Software and Services Market

Global Oligonucleotide Synthesis, Modification, and Purification Services Market

Global Circulating Cell-Free DNA (ccfDNA) Diagnostics Market

About Us:

InsightAce Analytic is a market research and consulting firm that enables clients to make strategic decisions. Our qualitative and quantitative market intelligence solutions inform the need for market and competitive intelligence to expand businesses. We help clients gain competitive advantage by identifying untapped markets, exploring new and competing technologies, segmenting potential markets, and repositioning products. Our expertise is in providing syndicated and custom market intelligence reports with in-depth analysis and key market insights in a timely and cost-effective manner.

Contact Us:

InsightAce Analytic Pvt. Ltd. | Tel.: +1 551 226 6109 | Email: info@insightaceanalytic.com | Website: www.insightaceanalytic.com | Follow us on LinkedIn: bit.ly/2tBXsgS | Follow us on Facebook: bit.ly/2H9jnDZ

Logo: https://mma.prnewswire.com/media/1729637/InsightAce_Analytic_Logo.jpg

SOURCE InsightAce Analytic Pvt. Ltd.

Read the original:

Artificial Intelligence-powered (AI) Spatial Biology Market Market to Record an Exponential CAGR by 2030 - Exclusive Report by InsightAce Analytic -...

Posted in Artificial Intelligence

Chips-Plus Artificial Intelligence in the CHIPS Act of 2022 – JD Supra

Posted: at 11:21 pm

On August 9, 2022, President Biden signed the CHIPS Act of 2022 (the Act), legislation to fund domestic semiconductor manufacturing and boost federal scientific research and development (see our previous alert for additional background). As part of its science-backed provisions, the Act includes many of the U.S. Innovation and Competition Act's (USICA) original priorities, such as promoting standards and research and development in the field of artificial intelligence (AI) and supporting existing AI initiatives.

The Act directs the National Institute of Standards and Technology (NIST) Director to continue supporting the development of AI and data science and to carry out the National AI Initiative Act of 2020 (see our previous alert for additional background), which created a coordinated program across the federal government to accelerate AI research and application to support economic prosperity and national security and to advance AI leadership in the United States. The Director will further the goals of the National AI Initiative Act of 2020 by:

Furthermore, the Act provides that the Director may establish testbeds, including in virtual environments, in collaboration with other federal agencies, the private sector and colleges and universities, to support the development of robust and trustworthy AI and machine learning systems.

A new National Science Foundation (NSF) Directorate for Technology, Innovation and Partnerships (the Directorate) is established under the Act to address societal, national and geostrategic challenges for the betterment of all Americans through research and development, technology development and related solutions. Over the next five years, the new Directorate will receive $20 billion in funding. Moreover, the Directorate will focus on 10 key technology focus areas, including AI, machine learning, autonomy, related advances, robotics, automation, advanced manufacturing and quantum computing, among other areas.

Within the Department of Energy (DOE), the Act authorizes $11.2 billion for research, development and demonstration activities and to address energy-related supply chain activities in the ten key technology focus areas prioritized by the new NSF Directorate. Further, the Act authorizes $200 million for the DOE's Office of Environmental Management to conduct research, development and demonstration activities, including the fields of AI and information technology.

The Act directs the NSF Director to submit to the relevant House and Senate congressional committees a report outlining the need, feasibility and plans for implementing a program for recruiting and training the next generation of AI professionals. The report will evaluate the feasibility of establishing a federal AI scholarship-for-service program to recruit and train the next generation of AI professionals.

The Akin Gump cross-practice AI team continues to actively monitor forthcoming congressional and administrative initiatives related to AI.

Follow this link:

Chips-Plus Artificial Intelligence in the CHIPS Act of 2022 - JD Supra

Posted in Artificial Intelligence

Putting the ‘Art’ in Artificial Intelligence! Sify – Sify

Posted: at 11:21 pm

Ramji finds out how the aesthetics of our age are being revolutionized by the algorithmic influence of artificial intelligence

Look at this painting. Doesn't it look like an unknown work of Rembrandt? Would you believe it if I said that the painting was generated by an engine driven by artificial intelligence (AI)? And what if I say that painting was created just by carefully chosen words? Yes, words. Here is what I typed in:

Portrait of a beautiful young woman, magnificent palace, Rembrandt style lighting, hyper realistic, cinematic

They say a picture is worth a thousand words. Well, in this case, several words make up a picture. Yes, this is the new trend in AI which takes in your text inputs and generates beautiful images. Not only this, but it also gives you 4 options first and you can mix and match elements between them. You can even upscale any of the 4 options and get a bigger picture. Now look at this

Midjourney, an AI engine that lets you create such beautiful pictures with just words, is one of many platforms ushering in the era of AI artistry.

So how do these AI engines work? Their algorithms do not follow a fixed set of instructions or rules; instead, they learn to create a specific aesthetic by trawling through thousands of images and picking up the elements they think match the set of words you entered. Fascinating, isn't it?

The engine is trained to analyze the set of images that matches each word in the text prompt and then put together a combined image. Now that is remarkable. And soon, you could create any image with great accuracy.

It all started in 2009 when Google, in association with Mannheim University, developed an artificial neural network, an AI system that was modelled after the human brain. This computer vision program was aimed at identifying and enhancing patterns based on an existing set of data that has been fed to the system and processed. And many artists started using this to create abstract artwork using this system instead of traditional way of drawing or painting. In a way, Deep Dream paved the way for the other systems that we are talking about now.

According to an article published by Ahmed Elgammal, a professor of computer science and founder of the Art and Artificial Intelligence Laboratory at Rutgers University, these AI-based engines use Generative Adversarial Networks (GANs), which were introduced by the scientist Ian Goodfellow in 2014.

In this system, the algorithm consists of two neural networks. One, aptly called the Generator, produces candidate images; the other, the Discriminator, is trained on inputs fed by the developers. These inputs are nothing but a series of images, thousands of them, supplied without any labels or context. The Discriminator learns from these examples what real images look like and judges whether the Generator's output passes for one, so that when it is time to generate a new image, the system can decide on its own what best fits the requirement.
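To make that Generator-versus-Discriminator idea concrete, here is a minimal, self-contained sketch in PyTorch. It is a toy example, not the code behind any of the engines discussed here; the network sizes, learning rates and the flattened 32x32 greyscale "images" are assumptions chosen purely for brevity.

```python
# A minimal GAN sketch in PyTorch illustrating the Generator/Discriminator idea.
# Toy example only: sizes, learning rates and the 32x32 image shape are assumptions.
import torch
import torch.nn as nn

IMG_SIZE = 32 * 32      # flattened 32x32 greyscale image (assumption)
LATENT_DIM = 64         # size of the random noise vector fed to the Generator

# The Generator turns random noise into a candidate image.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_SIZE), nn.Tanh(),
)

# The Discriminator scores an image: closer to 1 means "looks real".
discriminator = nn.Sequential(
    nn.Linear(IMG_SIZE, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

def training_step(real_images: torch.Tensor) -> None:
    """One adversarial update: the Discriminator learns to tell real from fake,
    then the Generator learns to fool the Discriminator."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Discriminator step: reward correct real/fake judgements.
    noise = torch.randn(batch, LATENT_DIM)
    fake_images = generator(noise).detach()
    d_loss = loss_fn(discriminator(real_images), real_labels) + \
             loss_fn(discriminator(fake_images), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: reward fakes that the Discriminator accepts as real.
    noise = torch.randn(batch, LATENT_DIM)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Stand-in "training data": in a real system this would be thousands of
# unlabelled images, as described above.
training_step(torch.rand(16, IMG_SIZE) * 2 - 1)
```

In a real text-to-image engine the two networks are far larger and the training set far bigger, but the adversarial loop is the same.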

There is more. Prof. Elgammal's team at the Art and Artificial Intelligence Laboratory has created something called the Artificial Intelligence Creative Adversarial Network, AICAN for short. So, what does this do? It is an AI system that can create artwork on its own, with little or almost no human involvement. The artworks produced by this system are almost indistinguishable from those of human artists and have been exhibited worldwide. One such artwork was even sold for USD 16,000 (Rs 12,77,536) at an auction!

When I began to draft this article, I had heard only about DALL-E, another AI engine, created by OpenAI, that lets you create such images from text inputs. Look at the examples provided on their website.

But the problem was a long waiting list to test it. While I was reading more about it, I came across DALL-E mini, now known as Craiyon. It is not as accurate or detailed as DALL-E, but it still gives you an idea of how these systems work.

As I started to learn more about such engines, I came across several more with various names: Stable Diffusion, Deep Dream, DreamStudio and so on.

These engines all create artwork through artificial intelligence. However, most of them are still experimental, and it will be interesting to see how they develop in the immediate future. So go ahead and try any of these. Bring out the artist in you.
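If you prefer to experiment from code rather than a web interface, here is a minimal sketch that runs Stable Diffusion locally with the open-source Hugging Face diffusers library. The checkpoint name, the GPU requirement and the choice of four candidate images are assumptions for illustration; none of the services mentioned above expose this exact interface.

```python
# Minimal text-to-image sketch using Hugging Face diffusers (assumptions noted).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # assumed publicly hosted checkpoint; use any SD weights you have access to
    torch_dtype=torch.float16,
).to("cuda")                            # assumes a CUDA-capable GPU

prompt = ("Portrait of a beautiful young woman, magnificent palace, "
          "Rembrandt style lighting, hyper realistic, cinematic")

# Ask for four candidates, mirroring the "pick one of four, then upscale" workflow.
result = pipe(prompt, num_images_per_prompt=4, num_inference_steps=30)
for i, image in enumerate(result.images):
    image.save(f"option_{i}.png")
```

Generating four images at once is simply a convenience here; each candidate can then be refined or upscaled with a separate tool of your choice.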

So, what does this mean for the future of art? These algorithms can produce new artwork for as long as they are given sufficient inputs. Someday, artists might use these algorithms to create original art, or the algorithms themselves might create it. Though this technology is still in its nascent stages, the possibilities are endless.

See the original post here:

Putting the 'Art' in Artificial Intelligence! Sify - Sify

Posted in Artificial Intelligence | Comments Off on Putting the ‘Art’ in Artificial Intelligence! Sify – Sify

Companies increasingly rely on technology-based solutions such as artificial intelligence, robots or mobile applications to fill workforce shortage -…

Posted: at 11:21 pm

The staff policies of companies around the world increasingly rely on technology to fill the workforce shortage, with almost 60% of them estimating an increase in the use of artificial intelligence (AI), robots or chatbots, while 37% foresee a more intensive collaboration with mobile app developers and providers over the next two years, according to the study Orchestrating Workforce Ecosystems, conducted by Deloitte and MIT Sloan Management Review.

Moreover, most companies consider it beneficial to organize their workforce as an ecosystem, defined as a structure relying on both internal and external collaborators, between whom multiple relationships of interdependence and complementarity are established, in order to generate added value for the organization.

Almost all the companies participating in the study (93%) claim that so-called external employees, such as service providers, management consultants or communication agencies, and fixed-term or project-based employees, including developers and technology solution providers, are already part of the organization. However, only 30% of companies are ready to manage a mixed workforce structure.

The main reasons behind the decision to turn to external labour resources are the desire to reduce costs (62%), the intention to migrate to an on-demand work model based on a variable staffing scheme (41%) or the need to attract more employees with basic skills (40%).

The results of the study indicate that the workforce can no longer be defined strictly in terms of permanent, full-time employees. The need for flexibility, increasingly evident lately, amid events that have disrupted the global economy, such as the COVID-19 pandemic or the war in Ukraine, has led companies to look for ways to add to the workforce other solutions, especially in markets where it is deficient. But employers who want to go further in this direction need to make sure that they comply with the labour laws applicable in their jurisdiction, which, from case to case, may be more permissive or more restrictive. In the particular case of Europe, attention and consideration to the new trends in the field of workforce orchestration within a company are still required as the legal framework has yet to catch up with the challenges such new practices bring, said Raluca Bontas, Partner, Global Employer Services, Deloitte Romania.

Almost half of the companies (49%) consider that the optimal staffing structure should include both internal and external collaborators, provided that the first category is dominant. At the same time, 74% of the surveyed directors believe that the effective management of external collaborators is essential for the success of their organization.

In addition, 89% are convinced that it is important for the external workforce to be integrated into the internal one in order to create high-performing teams. On the other hand, 83% consider that the two categories have different expectations that require distinct offers in terms of benefits, rewards or flexibility in the way of working.

The responsibility for the workforce strategy lies with the entire top management team, mainly with the CEO (45% of respondents) and the human resources director (41%), but also with the COO, the CFO, and the strategy and legal directors, according to the study.

The Orchestrating Workforce Ecosystems study was conducted by Deloitte and the MIT Sloan Management Review among more than 4,000 respondents, executives working in 29 industries, from 129 countries across all continents.

View original post here:

Companies increasingly rely on technology-based solutions such as artificial intelligence, robots or mobile applications to fill workforce shortage -...

Posted in Artificial Intelligence | Comments Off on Companies increasingly rely on technology-based solutions such as artificial intelligence, robots or mobile applications to fill workforce shortage -…

Indica Labs Announces Collaboration with The Industrial Centre for Artificial Intelligence Research in Digital Diagnostics (iCAIRD) for the…

Posted: at 11:21 pm

ALBUQUERQUE, N.M., and GLASGOW, Scotland, Aug. 30, 2022 /PRNewswire/ -- Indica Labs, an industry leader in quantitative digital pathology and image management solutions, and The Industrial Centre for Artificial Intelligence Research in Digital Diagnostics (iCAIRD), announced today an agreement to collaborate on the development of an AI-based digital pathology solution for the detection of cancer within lymph nodes from colorectal surgery cases. The primary aim of the innovative research project is to develop a tool that may in the future improve the efficiency with which pathology teams within the National Health Service Greater Glasgow and Clyde (NHSGGC) report colorectal cancer cases and detect metastatic cancer in lymph nodes.

Funded by a combination of Innovate UK and industrial partners, based in Scotland, and supported by the West of Scotland Innovation Hub, iCAIRD is one of the largest healthcare AI research portfolios in the UK. A collaboration of 30 partners from across the NHS, industry, academia and technology, the program is currently delivering 35 ground-breaking AI projects across radiology and pathology, having grown from just 10 projects at the outset in 2019. The mission of iCAIRD is to establish a world-class center of excellence for the implementation of artificial intelligence in digital diagnostics.

Anonymized H&E slides from NHS Greater Glasgow and Clyde's digital pathology archive will be used to train, validate and test the algorithm, which is being developed collaboratively by iCAIRD and Indica Labs. The resulting algorithm will report negative and positive lymph node status and will be compared to pathologist reports. Furthermore, positively involved lymph nodes will be categorized into metastases, micro-metastases, and individual tumor cells.
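To illustrate the kind of aggregation such an algorithm performs, and only as an illustration rather than a description of the iCAIRD or Indica Labs method, here is a small Python sketch in which tile-level predictions from a hypothetical deep-learning classifier are rolled up into a node-level status. The probability threshold and the 0.2 mm / 2.0 mm size cut-offs reflect commonly used pathology conventions and are assumptions, not the project's published criteria.

```python
# Illustrative sketch only: not the iCAIRD/Indica Labs algorithm.
# Tile-level tumour predictions are aggregated into a node-level call.
from dataclasses import dataclass
from typing import List

@dataclass
class TileResult:
    tumour_probability: float   # output of a hypothetical patch-level classifier
    tumour_extent_mm: float     # estimated size of the tumour deposit in the tile

def classify_lymph_node(tiles: List[TileResult],
                        prob_threshold: float = 0.5) -> str:
    """Aggregate tile-level predictions into a node-level status.

    The 0.2 mm and 2.0 mm cut-offs follow commonly used pathology conventions
    for individual tumour cells / micro- / macro-metastases; they are stated
    here as assumptions, not as the project's chosen criteria.
    """
    positive = [t for t in tiles if t.tumour_probability >= prob_threshold]
    if not positive:
        return "negative"
    largest = max(t.tumour_extent_mm for t in positive)
    if largest <= 0.2:
        return "individual tumour cells"
    if largest <= 2.0:
        return "micro-metastasis"
    return "metastasis"

# Example: one suspicious tile with a 1.1 mm deposit -> micro-metastasis
print(classify_lymph_node([TileResult(0.92, 1.1), TileResult(0.10, 0.0)]))
```

In practice the tile classifier itself is the hard part; the point here is simply that positive tiles are grouped and measured before the node receives a single reportable category, which is then compared against the pathologist's report.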

Dr. Gareth Bryson, Consultant Pathologist at NHSGGC and Clinical Director for Laboratory Medicine of iCAIRD commented on the potential value this tool will bring to the NHS: "Our belief is that AI powered decision support tools, such as the one we are working on, may help to support pathologists by improving the process' efficiency, while simultaneously increasing sensitivity in detecting small metastasis which will direct patient therapy. Colorectal cancer resections are one of the most common cancer resection specimens and a disproportionate amount of pathologist's time is utilized in screening lymph nodes."

Indica Labs, based in Albuquerque, New Mexico, offers a suite of digital pathology image analysis solutions including HALO AI and HALO AP, both of which will be utilized by Indica Labs and iCAIRD partners for the development of AI-based pathology solutions and their evaluation in an NHS digital pathology workflow.

HALO AI uses deep learning neural networks to classify and quantify clinically significant tissue patterns and cell populations. HALO AP is a CE-IVD certified software platform for digital anatomic pathology labs that can operate as a standalone case and image management system or can be fully integrated within LIS or HIS solutions. HALO AP supports a full range of tissue-based workflows, including AI-assisted assays, quantitative analytics, synoptic reporting, tumor boards, and secondary consults. In addition to HALO AI and HALO AP, Indica Labs recently received a CE-IVD mark for HALO Prostate AI, a deep learning-based screening tool designed to assist pathologists in identifying and grading prostate cancer in core needle biopsies that is deployed using HALO AP.

"The team at Indica Labs is excited to collaborate with iCAIRD on the development and deployment of a state-of-the-art AI tool that aims to improve diagnostic accuracy, turnaround times, and laboratory efficiency for the benefit of both pathologists and colorectal cancer patients," commented Steven Hashagen, CEO Indica Labs.

HALO AP will be evaluated within simulated digital workflows at the pathology department in NHS GGC, using iCAIRD's research environment to demonstrate interoperability with clinical systems. HALO AP will be used as a platform to deliver the new colorectal cancer algorithm. Through this collaboration, diagnostic accuracy and efficiency will be compared between existing fully digital workflows and one that applies AI through HALO AP.

About Indica Labs

Indica Labs is the world's leading provider of computational pathology software and image analysis services. Our flagship HALO and HALO AI platform facilitates quantitative evaluation of digital pathology images. HALO Link facilitates research-focused image management and collaboration while HALO AP enables collaborative clinical case review. Through a combination of precision, performance, scalability, and usability our software solutions enable pharmaceutical companies, diagnostic labs, research organizations, and Indica's own contract pharma services team to advance tissue-based research, clinical trials, and diagnostics.

About iCAIRD

iCAIRD aims to bring clinicians, health planners and industry together, facilitating collaboration between research-active clinicians and innovative SMEs to better inform clinical questions, and ultimately to solve healthcare challenges more quickly and efficiently using AI. iCAIRD is funded by Innovate UK, under the UK Research and Innovation (UKRI) Industrial Strategy Challenge Fund (ISCF) "From Data to Early Diagnosis in Precision Medicine" challenge. For more information, visit https://icaird.com/ or email info@icaird.com.

Media Contact:

Kate Lillard Tunstall
Indica Labs, Inc.
kate@indicalab.com

View original content to download multimedia:https://www.prnewswire.com/news-releases/indica-labs-announces-collaboration-with-the-industrial-centre-for-artificial-intelligence-research-in-digital-diagnostics-icaird-for-the-development-of-an-ai-based-algorithm-for-the-automated-reporting-of-lymph-node-status-in-c-301614147.html

SOURCE Indica Labs

Read more:

Indica Labs Announces Collaboration with The Industrial Centre for Artificial Intelligence Research in Digital Diagnostics (iCAIRD) for the...

Posted in Artificial Intelligence | Comments Off on Indica Labs Announces Collaboration with The Industrial Centre for Artificial Intelligence Research in Digital Diagnostics (iCAIRD) for the…