Trump Returns to Rally Team MAGA – The Atlantic

WILKES-BARRE, Pa. – Donald Trump's rally on Saturday night was his first major public appearance since the FBI searched his Florida home, and you could tell. A kind of manic, vengeful energy circulated among the throngs of supporters in the blue stadium seats at the Mohegan Sun Arena. Fans wore T-shirts reading "YOU RAIDED THE WRONG PRESIDENT" and "THREAT TO DEMOCRACY," a reference to President Joe Biden's speech last week in Philadelphia. The audience of thousands screamed in agreement when Representative Marjorie Taylor Greene, who's become a regular warm-up act at these rallies, declared that the FBI had "violated our president's rights." And later on, the crowd exploded into one resounding, ricocheting jeer when Trump, finally on stage, addressed the matter himself.

"There can be no more real example of the very clear threats to American freedom than just a few weeks ago," the former president said, "when we witnessed one of the most shocking abuses of power we have witnessed from any administration in American history!"

Trump is back at the forefront of American politics, just two months ahead of the midterm elections. This time, the former president is in a strange new position: He's backed into a corner by legal trouble. And his ever-loyal fans have joined him in a defensive crouch. "We came because of the Mar-a-Lago raid," Mike Rutherford, a truck driver from East Stroudsburg, told me. He sat near the stage in a folding chair alongside his wife, Pat. "We're here to support him," Pat said, nodding. "I can't believe how brave that man is."

Pennsylvania found itself smack-dab in the eye of the midterms hurricane this week. Trump's rally was intended to give a boost to the flagging campaigns of the gubernatorial candidate and State Senator Doug Mastriano and the Senate candidate Mehmet Oz, both of whom have endorsed Trump's election lies and received his endorsement in exchange. Just two days ago, Biden spoke 100 miles to the south before an eerily lit Independence Hall, and was more direct in his warnings than he's been in previous addresses: "Donald Trump and the MAGA Republicans represent an extremism that threatens the very foundations of our Republic." The Darth Vader optics of his speech may have interfered with its intended effect, but Trump and the candidates he's endorsed are a threat to democracy because they appear to believe in only two kinds of election outcomes: Either they win or the system is rigged.

Pennsylvania has become a hub for Stop the Steal candidates thanks, in part, to Mastriano, who spoke ahead of Trump on Saturday night. The Republican state senator and former Army colonel was outside the Capitol when rioters broke in on January 6; he helped lead the state's efforts to overturn the presidential election in 2020; and he's been subpoenaed by the January 6 committee for his alleged involvement in organizing an alternate set of Electoral College electors for Trump. (Last week, Mastriano sued the panel to avoid testifying.)

Both he and Oz offered versions of their stump speeches and declared solidarity with their party leader in his moment of need on Saturday. Other headliners included Greene, the Georgia representative, who'd descended the arena steps earlier in the afternoon as "She's a Beauty" by the Tubes played over the loudspeakers, and the Pennsylvania congressional candidate Jim Bognet, who quipped that America should hire "87,000 more border patrol agents, not IRS agents!"

When Trump emerged shortly after 7 p.m., backed by the usual Lee Greenwood soundtrack, he meandered through his standard repertoire: the Russia investigation "hoax," Biden's failures, the death penalty for drug dealers. He even managed to encourage a mass heckling of the press seated in the back of the stadium on at least five occasions. But it was Trump's FBI comments that got the crowd most riled up. "The FBI and the Justice Department have become vicious monsters, controlled by radical-left scoundrels, lawyers, and the media, who tell them what to do," he told them. Audience members whooped, and a few shouted out "Defund the FBI!"

The Trump fans I'd spoken with earlier, standing near the Dippin' Dots ice-cream stall and in line for Chickie's & Pete's chicken cutlets, all had his back. "It's politically motivated," Jim Shaw, a barber from New Milford, told me when I asked what he made of the search at Mar-a-Lago. "If Donald Trump wasn't looking like he was the [leading] Republican candidate for president, I don't think it would have happened." Every one of the dozen or so people I talked with offered some defense of the former president: The search was a setup; the evidence was planted; Biden's DOJ was trampling on Trump's constitutional rights to keep him from running for office again.

I detected a touch of desperation in many people's responses, a sense that, if Trump-endorsed candidates don't win in November, America as they know it will cease to exist. Here in northeast Pennsylvania, just 20 miles down the road from Biden's hometown, was a gathering of people not just pessimistic about the future of the country under his leadership, but deeply fearful too. "At this point right now, I'm worried about being targeted by the FBI because I'm a Christian, I'm conservative," Pat Rutherford said. "I know they won't find anything, but I am going to need a lawyer to prove I am innocent." "The DOJ is like a militia for the Democrats," Linda Hess, from Selinsgrove, told me. "I think our First Amendment rights are basically gone as conservatives. I really do."

Trump and his loyalists are eager to fan these fears. "Your president called all of you extremists!" Greene told the rally when she was on stage. "Joe Biden has declared that half of this country are enemies of the state!" (The president, in fact, made a clear distinction: "Not every Republican, not even the majority of Republicans, are MAGA Republicans.") "Save us, Trump!" one woman yelled from the crowd during his speech.

Fear can be a winning political tactic. It helped candidates like Mastriano sail to victory in the Republican primary. But general elections are different. The president's party usually fares poorly in the midterm cycle, and just a few weeks ago, the fundamentals would have indicated that Republicans were about to have an excellent November. Recently, though, the numbers have shifted in the Democrats' favor. Inflation is down, and so are gas prices; new job numbers are high, and unemployment is still low; and Democrats are already seeing signs that their voters are highly motivated by the overturning of Roe v. Wade. In the latest polls, both Mastriano and Oz are trailing their respective Democratic opponents, Josh Shapiro and John Fetterman.

Still, 10 weeks is a long time in American politics. Republicans could gain back an edge between now and then. Some experts predict that both races will probably end up much closer than they are now. The risks of electing an election denier such as Mastriano are clear: As governor, he'd have the power to appoint the secretary of state, and together, the two officials could muddy the waters after a close election or, allied with the Republican-dominated state legislature, even change election rules to benefit their party.

That danger extends far beyond the Keystone State. Other Stop the Steal candidates are running all over the country. In 2020 battleground states, candidates who've endorsed Trump's lies about election fraud have won nearly two-thirds of GOP nominations for state and federal offices with election-oversight powers, according to a Washington Post analysis.

Whether these specific candidates win or lose, election denial has become the most important litmus test for the MAGA base. Stop the Steal is an expression of a deepening distrust in government and institutions, a mantra to remind its adherents that they, not their political opponents, are the rightful inheritors of America. The phrase is a metaphor, the sociologist Theda Skocpol told me last month, "for the country being taken away from the people who think they should rightfully be setting the tone."

When their candidates lose, it can be only through trickery. When their leader is investigated for squirreling away cartons of national secrets at his country club, it's a targeted attack by "the Regime," to use Florida Republican Governor Ron DeSantis's word, and his capitalization.

After Mastriano had finished speaking, and before Trump took to the stage, an elderly white man stood up behind me and shouted, "Whose country is this?" The people nearby in the bleachers joined him in response: "It's our country!" Later, Trump affirmed the sentiment. "No matter how big or powerful these corrupt radicals may be, you must never forget that this nation does not belong to them," he told his supporters. "This nation belongs to you!" The people in the stadium roared their approval.

Deplatforming Andrew Tate is the best way to deal with him – Los Angeles Loyolan

For the last week, the internet has been teeming with conversation about the sudden decision by major social media platforms like Instagram, Twitter, TikTok and YouTube to ban Andrew Tate, a former kickboxer and TV star turned internet personality. Having blown up on TikTok because of his absurdly inflammatory comments on women, masculinity and sexuality, Tate quickly ran afoul of the terms of service and safe-use policies of these social media platforms, prompting a rare act of consistent enforcement across so many sites.

Tate has been affectionately dubbed "The King of Toxic Masculinity" on his journey to his latest bout of stardom: riding a wave of misogyny and alt-right talking points, subverting ongoing investigations into him for sex trafficking and rape by living in Romania, and topping it off with a clever scam that he pushed onto his doting fans.

Of course, a cursory Google search could've filled in those blanks, so the real debate is not the validity of his ban. Any one of his flagrantly insane comments justifies removing him from a platform (like saying women couldn't drive his car because they "have no innate responsibility or honor"). The real debate is the effectiveness of banning him. Does widespread deplatforming of a toxic ideology effectively cull its reach, or would it be more effective to allow for contention with those beliefs in the public sphere? Should we let people like Andrew Tate stay on social media so other people can prove him wrong in front of his impressionable audience?

"He tries to seem like an overly masculine man ... he feels like men should conform to this idea of being strong, of being emotionally distant and unavailable," said sophomore environmental studies major Melissa Johnson, who said that despite heavily disagreeing with Tate's ideas, he appeared all over her TikTok For You page for days. "It was mostly duets of people supporting him and debunking every reason, every positive thing people have to say [about Tate]," she recalled, showing how Tate's messages spread like wildfire, even among groups who disagreed with him.

Kaila Uyemura, a freshman studio arts major, recounted a similar experience when initially encountering Tate's ideologies. She said that the people who disagreed with him were helping him grow because his supporters would flood the comments of people going against him, "[using] their big platforms, showing up on everyone's For You Pages." She further noted that she thought deplatforming would help stop this unintentional amplification of his voice.

When asked about his thoughts on Tate, freshman economics major Cole Dudley said, "He's not a positive impact on a lot of people's lives. I know that he's a big influence on young people that view him on social media ... I know a lot of people from my high school that follow him. He kind of has a cult following of young guys." In Dudley's view, Tate appeals to a certain group of young men who lack confidence in themselves by mixing misogyny with more innocuous, basic advice for self-confidence.

The Guardian conducted an investigation into the method by which Tate blew up and his strategies for growth, finding that Tate's content exploited common insecurities among teenage boys, especially in their romantic pursuits. It concluded that Tate found loyal followers through algorithmic assistance, especially on TikTok, which was primarily pointing young men, Tate's primary demographic, in his direction by filling their feeds with his content.

Those followers were then directed to intentionally stoke further controversy by reposting his most controversial clips and commenting actively about him, trying to bait a reaction out of the side that disagreed with their rampant misogyny in order to get more eyes on the topic. This discovery of intentional algorithmic manipulation lines up exactly with the experiences of Uyemura and Johnson when initially encountering Tate's content.

I've even had this experience while browsing YouTube Shorts: I've scrolled across videos of Tate and people like him discussing masculinity or advising me that women would like me if I maintained emotional distance and kept myself traditionally masculine. The bombardment of young men at the mouth of this pipeline is viscerally effective. Like Dudley, I've seen friends and acquaintances fall down this hole, making this sort of open discussion even more important in my view.

In the end, how effective will this deplatforming initiative actually be? The experiences of these students serve as perfect microcosms of the brand strategy outlined by the Guardian investigation, since they directly experienced the fallout from the intentional algorithmic manipulation by Tate's fanbase. His obviously toxic ideology was spurred on by the very people who were trying to contend with his ideas in the public sphere. By trying to limit the number of people drawn in by his cohort, good people were unintentionally extending Tate's reach.

This ban is designed to cull his sphere of influence, because he was using inflammatory interactions to force himself into the conversation, naturally snowballing his presence through sheer engagement. After all, when his quotes about women being objects are so frequently shared, each clip generates a ton of commentary, content and viewership. By trying to argue publicly against this kind of blatant bigotry, we only give figures like Tate the shred of credibility and attention that they desire.

This is the opinion of Arsh Goyal, a freshman economics major from Dublin, Calif. Email comments to editor@theloyolan.com. Follow and tweet comments to @LALoyolan on Twitter, and like the Loyolan on Facebook.

Artificial Intelligence in Diagnostics Market: Increasing Utilization of AI in Different Medical Care Fields to Drive the Market – BioSpace

Wilmington, Delaware, United States, Transparency Market Research Inc. – The increasing need to improve patient care and reduce treatment costs is a primary factor augmenting the growth of the global artificial intelligence in diagnostics market. The growing adoption and popularity of artificial intelligence in clinical imaging brings quicker judgments and fewer errors compared with conventional examination of images produced by X-rays and MRIs. AI also brings new capabilities to most diagnostics, including cancer screening and chest CT tests aimed at detecting COVID-19.

The global artificial intelligence in diagnostics market is classified based on component, diagnosis type, and region. In terms of component, the market is divided into three parts, namely services, hardware, and software. Based on classification by diagnosis type, the market is grouped into neurology, chest and lung, radiology, pathology, oncology, cardiology, and others.

The report provides an in-depth analysis of the global artificial intelligence in diagnostics market and emphasizes the prime growth trajectories. Besides this, the report also highlights the impact of the COVID-19 pandemic on this market and how revenues can be drawn from this market in the coming years. The report also discusses the segmentation in detail and lists the names of the leading segments and players functioning in this market. The report is available for sale on the company website.

Artificial Intelligence in Diagnostics Market: Company Profile

Companies operating in the global artificial intelligence in diagnostics market are engaging in joint ventures and collaborative efforts to gain an upper hand in the overall market competition. Apart from this, players are also investing in the research and development of better therapeutics via artificial intelligence and machine learning.

Some of the prominent players of the global artificial intelligence in the diagnostics market include:

Artificial Intelligence in Diagnostics Market: Notable Developments

In November 2020, Clalit Health Services and Zebra Medical Vision entered into a strategic partnership to develop cloud-based imaging AI to serve large-scale HMOs.

Artificial Intelligence in Diagnostics Market: Trends and Opportunities

Increasing utilization of AI in different medical care fields, including diagnostics, and the rising prevalence of chronic diseases are some of the key factors driving the adoption of artificial intelligence in diagnostics and bolstering growth. Likewise, the growing shortage of the public health workforce is further supporting the development and adoption of technology-based solutions for better patient management and diagnosis.

The rising demand for reducing the cost of diagnosis, reducing machine downtime, and enhancing patient care is among the key factors propelling the utilization of AI-based diagnostic solutions. Also, the increasing need for cost-effective diagnostic technologies and methods, quick generation and consolidation of diagnostic data, and efficient report analysis are a few other factors expected to drive the growth of this market. This has prompted the development of AI solutions that cater to these growing needs.

Artificial Intelligence in Diagnostics Market: Regional Analysis

Geographically, North America holds the largest share of the global artificial intelligence in diagnostics market on account of its established healthcare infrastructure and access to the latest medical technology. The presence of innovative diagnostic software and the rising adoption of healthcare IT solutions are also augmenting the growth of this region. Besides this, the growing popularity of AI-driven surgeries and minimally invasive operations is expected to help the region continue dominating the market in the coming years.

More Trending Reports by Transparency Market Research

Respiratory Panel Testing Market: The Europe and Middle East & Africa respiratory panel testing market is expected to surpass the value of US$ 3.1 Bn by the end of 2031.

Direct-to-Consumer Laboratory Testing Market: The direct-to-consumer laboratory testing market in North America is projected to expand at a CAGR of 22.6% during the forecast period.

Biobanking Market: The biobanking market is projected to expand at a CAGR of 5.9% during the forecast period from 2022 to 2031.

Cervical Cancer Diagnostic Tests Market: The global cervical cancer diagnostic tests market is expected to exceed the value of US$ 13.6 Bn by the end of 2031.

Central Lab Market: The global central lab market is expected to reach the value of US$ 4 Bn by the end of 2031.

In Vitro Diagnostics Market: The global in vitro diagnostics market is expected to reach the value of US$ 115.43 Bn by the end of 2028.

ASRs and RUOs Market: The U.S. ASRs and RUOs market is expected to surpass the value of US$ 5.3 Bn by the end of 2031.

Companion Diagnostics Market: The global companion diagnostics market is expected to reach the value of US$ 9.30 Bn by the end of 2028.

About Us

Transparency Market Research, a global market research company registered in Wilmington, Delaware, United States, provides custom research and consulting services. Our exclusive blend of quantitative forecasting and trends analysis provides forward-looking insights for thousands of decision makers. Our experienced team of Analysts, Researchers, and Consultants use proprietary data sources and various tools & techniques to gather and analyze information.

Our data repository is continuously updated and revised by a team of research experts, so that it always reflects the latest trends and information. With a broad research and analysis capability, Transparency Market Research employs rigorous primary and secondary research techniques in developing distinctive data sets and research material for business reports.

For more research insights on leading industries, visit our YouTube channel and subscribe for future updates: https://www.youtube.com/channel/UC8e-z-g23-TdDMuODiL8BKQ

Contact

Rohit Bhisey
Transparency Market Research Inc.
CORPORATE HEADQUARTER DOWNTOWN,
1000 N. West Street, Suite 1200,
Wilmington, Delaware 19801 USA
Tel: +1-518-618-1030
USA – Canada Toll Free: 866-552-3453
Website: https://www.transparencymarketresearch.com

Blog: https://tmrblog.com

Email: sales@transparencymarketresearch.com

The air force industry found it harder to fill artificial intelligence vacancies in Q2 2022 – Airforce Technology

Artificial intelligence related jobs that were closed during Q2 2022 had been online for an average of 30 days when they were taken offline.

This was an increase compared to the equivalent figure a year earlier, indicating that the required skillset for these roles has become harder to find in the past year.

Artificial intelligence is one of the topics that GlobalData, our parent company and the source of the data for this article, has identified as a key disruptive technology force facing companies in the coming years. Companies that excel and invest in these areas now are thought to be better prepared for the future business landscape and better equipped to survive unforeseen challenges.

On a regional level, these roles were hardest to fill in the Middle East and Africa, with related jobs that were taken offline in Q2 2022 having been online for an average of 31 days.

The next most difficult place to fill these roles was found to be North America, while Europe was in third place.

At the opposite end of the scale, jobs were filled fastest in Asia-Pacific, with adverts taken offline after ten days on average.

While the air force industry found it harder to fill these roles in the latest quarter, these companies actually found it easier to recruit for artificial intelligence roles than the wider market, with ads online for 25% less time on average compared to similar jobs across the entire jobs market.
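
The "25% less time" comparison implies a wider-market figure that is easy to back out. A short script, using only the averages quoted in this article, makes the arithmetic explicit:

```python
# Figures quoted in the article: AI job ads in the air force industry that
# closed in Q2 2022 had been online for 30 days on average, which the article
# says is 25% less time than similar ads across the entire jobs market.
air_force_avg_days = 30

# "25% less time" means air_force_avg_days = wider_market_avg * (1 - 0.25),
# so the implied wider-market average is:
wider_market_avg = air_force_avg_days / (1 - 0.25)

print(round(wider_market_avg))  # 40
```

In other words, comparable AI roles across the whole jobs market were implied to stay online for roughly 40 days before being filled.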

GlobalData's job analytics database tracks the daily hiring patterns of thousands of companies across the world, drawing in jobs as they're posted and tagging them with additional layers of data on everything from the seniority of each position to whether a job is linked to wider industry trends.

You can keep track of the latest data from this database as it emerges by visiting our live dashboard here.

Artificial Intelligence, Critical Systems, and the Control Problem – HS Today – HSToday

Artificial Intelligence (AI) is transforming our way of life, from new forms of social organization and scientific discovery to defense and intelligence. This explosive progress is especially apparent in the subfield of machine learning (ML), where AI systems learn autonomously by identifying patterns in large volumes of data.[1] Indeed, over the last five years, the fields of AI and ML have witnessed stunning advancements in computer vision (e.g., object recognition), speech recognition, and scientific discovery.[2], [3], [4], [5] However, these advances are not without risk, as transformative technologies are generally accompanied by a significant risk profile, with notable examples including the discovery of nuclear energy, the Internet, and synthetic biology. Experts are increasingly voicing concerns over AI risk from misuse by state and non-state actors, principally in the areas of cybersecurity and disinformation propagation. However, issues of control (for example, how advanced AI decision-making aligns with human goals) are not as prominent in the discussion of risk and could ultimately be equally or more dangerous than threats from nefarious actors. Modern ML systems are not programmed (as programming is typically understood) but rather independently develop strategies to complete objectives, which can be mis-specified, learned incorrectly, or executed in unexpected ways. This issue becomes more pronounced as AI becomes more ubiquitous and we become more reliant on AI decision-making. Thus, as AI is increasingly entwined through tightly coupled critical systems, the focus must expand beyond accidents and misuse to the autonomous decision processes themselves.

The principal mid- to long-term risks from AI systems fall into three broad categories: risks of misuse or accidents, structural risks, and misaligned objectives. The misuse or accident category includes things such as AI-enabled cyber-attacks with increased speed and effectiveness or the generation and distribution of disinformation at scale.[6] In critical infrastructures, AI accidents could manifest as system failures with potential secondary and tertiary effects across connected networks. A contemporary example of an AI accident is the New York Stock Exchange (NYSE) Flash Crash of 2010, which drove the market down 600 points in 5 minutes.[7] Such rapid and unexpected operations from algorithmic trading platforms will only increase in destructive potential as systems increase in complexity, interconnectedness, and autonomy.

The structural risks category is concerned with how AI technologies shape the social and geopolitical environment in which they are deployed. Important contemporary examples include the impact of social media content selection algorithms on political polarization or uncertainty in nuclear deterrence and the offense-to-defense balance.[8],[9] For example, the integration of AI into critical systems, including peripheral processes (e.g., command and control, targeting, supply chain, and logistics), can degrade multilateral trust in deterrence.[10] Indeed, increasing autonomy in all links of the national defense chain, from decision support to offensive weapons deployment, compounds the uncertainty already under discussion with autonomous weapons.[11]

Misaligned objectives are another important failure mode. Since ML systems develop independent strategies, a concern is that AI systems will misinterpret the correct objectives, develop destructive subgoals, or complete their objectives in an unpredictable way. While typically grouped together, it is important to clarify the differences between a system crash and actions executed by a misaligned AI system so that appropriate risk mitigation measures can be evaluated. Understanding the range of potential failures may help in the allocation of resources for research on system robustness, interpretability, or AI alignment.

At its most basic level, AI alignment involves teaching AI systems to accurately capture what we want and complete it in a safe and ethical manner. Misalignment of AI systems poses the highest downside risk of catastrophic failures. While system failures by themselves could be immensely damaging, alignment failures could include unexpected and surprising actions outside the system's intent or window of probability. However, ensuring the safe and accurate interpretation of human objectives is deceptively complex in AI systems. On the surface, this seems straightforward, but the problem is far from obvious, with unimaginably complex subtleties that could lead to dangerous consequences.

In contrast with nuclear weapons or cyber threats, where the risks are more obvious, risks from AI misalignment can be less clear. These complexities have led to misinterpretation and confusion, with some attributing the concerns to disobedient or malicious AI systems.[12] However, the concern is not that AI will defy its programming but rather that it will follow the programming exactly and develop novel, unanticipated solutions. In effect, the AI will pursue the objective accurately but may yield an unintended, even harmful, consequence. Google's AlphaGo program, which defeated the world champion Go[13] player in 2016, provides an illustrative example of the potential for unexpected solutions. Trained on millions of games, AlphaGo's neural network learned completely unexpected actions outside of the human frame of reference.[14] As Chris Anderson explains, what took the human brain thousands of years to optimize AlphaGo completed in three years, executing "better, almost alien solutions that we hadn't even considered."[15] This novelty illustrates how unpredictable AI systems can be when permitted to develop their own strategies to accomplish a defined objective.

To appreciate how AI systems pose these risks by default, it is important to understand how and why AI systems pursue objectives. As described, ML is designed not to program distinct instructions but to allow the AI to determine the most efficient means. As learning progresses, the training parameters are adjusted to minimize the difference between the predicted and actual value, incentivizing behavior that earns reward (known as reinforcement learning, or RL).[16],[17] Just as humans pursue positive reinforcement, AI agents are goal-directed entities, designed to pursue objectives, whether the goal aligns with the original intent or not.
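
The reward-driven update loop described above can be sketched in a few lines. This is a generic, hypothetical illustration (the actions and reward values are invented), not a depiction of any system discussed here: an agent tries actions and nudges its value estimates toward the rewards it observes.

```python
import random

# Minimal sketch of the reinforcement-learning loop described above: the agent
# repeatedly tries actions, and its value estimates are adjusted to shrink the
# difference between predicted and observed reward. (Illustrative only; real
# systems use far richer state representations and function approximation.)

ACTIONS = ["a", "b", "c"]
TRUE_REWARD = {"a": 0.2, "b": 0.9, "c": 0.5}   # hidden from the agent

q = {a: 0.0 for a in ACTIONS}   # the agent's learned value estimates
alpha, epsilon = 0.1, 0.1       # learning rate, exploration rate

random.seed(0)
for _ in range(2000):
    # Explore occasionally; otherwise exploit the current best estimate.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(q, key=q.get)
    reward = TRUE_REWARD[action] + random.gauss(0, 0.05)  # noisy feedback
    # Core update: move the estimate toward the observed reward.
    q[action] += alpha * (reward - q[action])

print(max(q, key=q.get))  # the action the agent has learned to prefer
```

Nothing in the loop names a goal explicitly: the agent simply converges on whatever behavior the reward signal reinforces, which is exactly why a mis-specified reward can produce confidently pursued but unintended behavior.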

Computer science professor Steve Omohundro describes a series of innate AI drives that systems will pursue unless explicitly counteracted.[18] According to Omohundro, distinct from their programming, AI agents will strive to self-improve, seek to acquire resources, and be self-protective.[19] These innate drives were recently demonstrated experimentally, where AI agents tended to seek power over their environment to achieve objectives most efficiently.[20] Thus, AI agents are naturally incentivized to seek out useful resources to accomplish an objective. This power-seeking behavior was reported by OpenAI, where two teams of agents, instructed to play hide-and-seek in a simulated environment, proceeded to hoard objects from the competition in what OpenAI described as "tool use" distinct from the actual objective.[21] The AI teams learned that the objects were instrumental in completing the objective.[22] Thus, a significant concern for AI researchers is the undefined instrumental sub-goals that are pursued to complete the final objective. This tendency to instantiate sub-goals is termed the "instrumental convergence thesis" by Oxford philosopher Nick Bostrom, who postulated that intermediate sub-goals are likely to be pursued by an intelligent agent to complete the final objective more efficiently.[23] Consider an advanced AI system optimized to ensure adequate power between several cities. The agent could develop a sub-goal of capturing and redirecting bulk power from other locations to ensure power grid stability. Another example is an autonomous weapons system designed to identify targets that develops a unique set of intermediate indicators to determine the identity and location of the enemy. Instrumental sub-goals could be as simple as locking a computer-controlled access door or breaking traffic laws in an autonomous car, or as severe as destabilizing a regional power grid or nuclear power control system.
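
The emergence of an unspecified sub-goal can be reproduced in a deliberately tiny, hypothetical experiment (the gridworld, rewards, and hyperparameters below are all invented for illustration, not drawn from any cited study): a tabular Q-learning agent is rewarded only for reaching a goal cell, yet it learns to first move away from the goal to fetch a key, because a locked door otherwise blocks it.

```python
import random

# Five cells in a corridor: key at 0, agent starts at 2, goal at 4.
# The goal cell sits behind a locked door; only reaching the goal is rewarded.
KEY_CELL, START, GOAL = 0, 2, 4

def step(pos, has_key, action):            # action: -1 = left, +1 = right
    new = min(max(pos + action, 0), GOAL)
    if new == GOAL and not has_key:
        new = pos                          # the locked door blocks entry
    if new == KEY_CELL:
        has_key = True                     # picking up the key is never rewarded
    return new, has_key, (1.0 if new == GOAL else 0.0)

q = {}                                     # Q-values keyed by ((pos, has_key), action)

def get_q(s, a):
    return q.get((s, a), 0.0)

def greedy(s):                             # best known action, ties broken randomly
    vals = {a: get_q(s, a) for a in (-1, 1)}
    best = max(vals.values())
    return random.choice([a for a, v in vals.items() if v == best])

random.seed(1)
alpha, gamma, epsilon = 0.5, 0.9, 0.3
for _ in range(2000):                      # tabular Q-learning episodes
    pos, has_key = START, False
    for _ in range(40):
        s = (pos, has_key)
        a = random.choice((-1, 1)) if random.random() < epsilon else greedy(s)
        pos, has_key, r = step(pos, has_key, a)
        target = r + gamma * max(get_q((pos, has_key), -1), get_q((pos, has_key), 1))
        q[(s, a)] = get_q(s, a) + alpha * (target - get_q(s, a))
        if r > 0:
            break

# The learned first move from the start is LEFT, away from the goal and toward
# the key: an instrumental sub-goal the reward function never specified.
print(get_q((START, False), -1) > get_q((START, False), 1))
```

The reward function here says nothing about keys; "get the key" emerges purely because it is instrumentally useful, which is the pattern Bostrom's thesis warns about at far larger scales.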
These hypothetical and novel AI decision processes raise troubling questions in the context of conflict or the safety of critical systems. The range of possible AI solutions is too large to enumerate in advance and will only become more consequential as systems grow more capable and complex. The effect of AI misalignment could be disastrous if the AI discovers an unanticipated optimal solution to a problem that renders a critical system inoperable or yields a catastrophic result.

While the control problem is troubling by itself, the integration of multiagent systems could be far more dangerous, leading to other, as yet unanticipated, failure modes between systems. Just like complex societies, complex agent communities could manifest new capabilities and emergent failure modes unique to the complex system. Indeed, AI failures are unlikely to happen in isolation, and work toward multiagent AI environments is already underway in both the public and private sectors.

Several U.S. government initiatives for next-generation intelligent networks include adaptive learning agents for autonomous processes. The Army's Joint All-Domain Command and Control (JADC2) concept for networked operations and the Resilient and Intelligent Next-Generation Systems (RINGS) program, put forth by the National Institute of Standards and Technology (NIST), are two notable ongoing initiatives.[24],[25] Literature on the cognitive Internet of Things (IoT) points to the extent of autonomy planned for self-configuring, adaptive AI communities and societies to steer networks through managing user intent, supervision of autonomy, and control.[26] A recent report from the world's largest technical professional organization, IEEE, outlines the benefits of deep reinforcement learning (RL) agents for cybersecurity, proposing that, since RL agents are highly capable of solving complex, dynamic, and especially high-dimensional problems, they are optimal for cyber defense.[27] Researchers propose that RL agents be designed and released to autonomously configure the network, prevent cyber exploits, detect and counter jamming attacks, and offensively target distributed denial-of-service attacks.[28] Others have proposed automated penetration testing, self-replicating RL agents, and autonomous red-teaming agents for cyber defense.[29],[30],[31]
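To give a flavor of what such an RL cyber-defense agent does at its core, here is a minimal anti-jamming sketch. The scenario is a toy assumption of ours (a single static jammer on one of four channels), not a setup from the cited report: the agent learns which channel to transmit on from success/failure feedback alone.

```python
import random

random.seed(0)
N_CHANNELS = 4
JAMMED = 2                    # the adversary jams this channel (toy assumption)

value = [0.0] * N_CHANNELS    # estimated transmission success rate per channel
counts = [0] * N_CHANNELS

for t in range(2000):
    # epsilon-greedy channel selection
    if random.random() < 0.1:
        ch = random.randrange(N_CHANNELS)
    else:
        ch = max(range(N_CHANNELS), key=lambda c: value[c])
    reward = 0.0 if ch == JAMMED else 1.0   # transmission fails when jammed
    counts[ch] += 1
    value[ch] += (reward - value[ch]) / counts[ch]  # incremental mean update

print(max(range(N_CHANNELS), key=lambda c: value[c]))  # a channel the jammer does not cover
```

A real defender faces an adaptive jammer and a vastly larger state space, which is precisely where the "high-dimensional problems" claim, and the attendant control concerns, come in.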

Considering the problems already discussed, from AI alignment and unexpected side effects to the issue of control, jumping headfirst into efforts that give AI meaningful control over critical systems (such as the examples described above) without careful consideration of the unexpected, and potentially catastrophic, outcomes does not appear to be an appropriate course of action. Proposing the use of one autonomous system in warfare is concerning; releasing millions into critical networks is another matter entirely. Researcher David Manheim explains that multiagent systems are vulnerable to entirely novel risks, such as over-optimization failures, in which optimization pressure allows individual agents to circumvent designed limits.[32] As Manheim describes, "In many-agent systems, even relatively simple systems can become complex adaptive systems due to agent behavior."[33] At the same time, research demonstrates that multiagent environments lead to greater agent generalization, reducing the capability gap that separates human intelligence from machine intelligence.[34] In contrast, some authors present multiagent systems as a viable solution to the control problem, with stable, bounded capabilities, while acknowledging broad uncertainty and the potential for self-adaptation and mutation.[35] Even so, they admit that there are risks: the multiplicative growth of RL agents could lead to unexpected failures and the manifestation of malignant agential behaviors.[36],[37] AI researcher Trent McConaghy highlights the risk from adaptive AI systems, specifically decentralized autonomous organizations (DAOs) in blockchain networks.
McConaghy suggests that rather than a powerful AI system seizing control of resources, as is typically discussed, the situation may be far more subtle: we could simply hand over global resources to self-replicating communities of adaptive AI systems (consider Bitcoin's increasing energy expenditures, which show no sign of slowing).[38]
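Manheim's over-optimization point can be illustrated with a deliberately stylized toy model (our own construction, not his): several agents each take the locally optimal action against a shared, slowly regenerating resource, and the designed limit on extraction is all that separates a stable system from a collapsed one.

```python
def simulate(n_agents, greed, steps=50, stock=100.0, regen=1.05, cap=120.0):
    """Each step every agent harvests `greed` of the remaining stock,
    then the stock regenerates by 5%, up to a carrying capacity."""
    for _ in range(steps):
        for _ in range(n_agents):
            stock -= stock * greed        # each agent's locally optimal move
        stock = min(stock * regen, cap)   # regeneration up to the cap
    return stock

bounded = simulate(n_agents=5, greed=0.005)  # agents respect a designed limit
greedy = simulate(n_agents=5, greed=0.10)    # optimization pressure unchecked

print(round(bounded, 1), round(greedy, 4))   # prints: 120.0 0.0
```

Individually, each greedy harvest is rational; collectively, the resource collapses. It is a many-agent failure mode that no single agent's objective ever encoded.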

Advanced AI capabilities in next-generation networks that dynamically reconfigure and reorganize network operations hold undeniable risks to security and stability.[39],[40] A complex landscape of AI agents, designed to autonomously protect critical networks or conduct offensive operations, would invariably need to develop sub-goals to manage the diversity of objectives. Thus, whether for individual systems or autonomous collectives, the web of potential failures and subtle side effects could unleash unpredictable dangers leading to catastrophic second- and third-order effects. As AI systems are currently designed, understanding the impact of these sub-goals (or even detecting their existence) could be extremely difficult or impossible. The AI examples above illustrate critical infrastructure and national security cases that are currently under discussion, but the reality could be far more complex, unexpected, and dangerous. While most AI researchers expect that safety will develop concurrently with system autonomy and complexity, there is no certainty in this proposition. Indeed, if there is even a minute chance of misalignment in a deployed AI system (or systems) in critical infrastructure or national defense, researchers must dedicate a portion of their resources to evaluating the risks. Decision makers in government and industry must consider these risks, and potential means to mitigate them, before generalized AI systems are integrated into critical and national security infrastructure. To do otherwise could lead to catastrophic failure modes that we may not be able to fully anticipate, endure, or overcome.

Disclaimer: The authors are responsible for the content of this article. The views expressed do not reflect the official policy or position of the National Intelligence University, the National Geospatial-Intelligence Agency, the Department of Defense, the Office of the Director of National Intelligence, the U.S. Intelligence Community, or the U.S. Government.

Anderson, Chris. Life. In Possible Minds: Twenty-Five Ways of Looking at AI, edited by John Brockman, 150. New York: Penguin Books, 2019.

Avatrade Staff. The Flash Crash of 2010. Avatrade. August 26, 2021. https://www.avatrade.com/blog/trading-history/the-flash-crash-of-2010 (accessed August 24, 2022).

Baker, Bowen, et al. Emergent Tool Use From Multi-Agent Autocurricula. arXiv:1909.07528v2, 2020.

Berggren, Viktor, et al. Artificial intelligence in next-generation connected systems. Ericsson. September 2021. https://www.ericsson.com/en/reports-and-papers/white-papers/artificial-intelligence-in-next-generation-connected-systems (accessed May 3, 2022).

Bostrom, Nick. The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents. Minds and Machines 22, no. 2 (2012): 71-85.

Brown, Tom B., et al. Language Models are Few-Shot Learners. arXiv:2005.14165, 2020.

Buchanan, Ben, John Bansemer, Dakota Cary, Jack Lucas, and Micah Musser. Automating Cyber Attacks: Hype and Reality. Georgetown University Center for Security and Emerging Technology. November 2020. https://cset.georgetown.edu/publication/automating-cyber-attacks/.

Byford, Sam. AlphaGo's battle with Lee Se-dol is something I'll never forget. The Verge. March 15, 2016. https://www.theverge.com/2016/3/15/11234816/alphago-vs-lee-sedol-go-game-recap (accessed August 19, 2022).

Drexler, K Eric. Reframing Superintelligence: Comprehensive AI Services as General Intelligence. Future of Humanity Institute. 2019. https://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf (accessed August 19, 2022).

Duettmann, Allison. WELCOME NEW PLAYERS | Gaming the Future. Foresight Institute. February 14, 2022. https://foresightinstitute.substack.com/p/new-players?s=r (accessed August 19, 2022).

Edison, Bill. Creating an AI red team to protect critical infrastructure. MITRE Corporation. September 2019. https://www.mitre.org/publications/project-stories/creating-an-ai-red-team-to-protect-critical-infrastructure (accessed August 19, 2022).

Etzioni, Oren. No, the Experts Don't Think Superintelligent AI is a Threat to Humanity. MIT Technology Review. September 20, 2016. https://www.technologyreview.com/2016/09/20/70131/no-the-experts-dont-think-superintelligent-ai-is-a-threat-to-humanity/ (accessed August 19, 2022).

Gary, Marcus, Ernest Davis, and Scott Aaronson. A very preliminary analysis of DALL-E 2. arXiv:2204.13807, 2022.

GCN Staff. NSF, NIST, DOD team up on resilient next-gen networking. GCN. April 30, 2021. https://gcn.com/cybersecurity/2021/04/nsf-nist-dod-team-up-on-resilient-next-gen-networking/315337/ (accessed May 1, 2022).

Jumper, John, et al. Highly accurate protein structure prediction with AlphaFold. Nature 596 (August 2021): 583-589.

Kallenborn, Zachary. Swords and Shields: Autonomy, AI, and the Offense-Defense Balance. Georgetown Journal of International Affairs. November 22, 2021. https://gjia.georgetown.edu/2021/11/22/swords-and-shields-autonomy-ai-and-the-offense-defense-balance/ (accessed August 19, 2022).

Kegel, Helene. Understanding Gradient Descent in Machine Learning. Medium. November 17, 2021. https://medium.com/mlearning-ai/understanding-gradient-descent-in-machine-learning-f48c211c391a (accessed August 19, 2022).

Krakovna, Victoria. Specification gaming: the flip side of AI ingenuity. Medium. April 11, 2020. https://deepmindsafetyresearch.medium.com/specification-gaming-the-flip-side-of-ai-ingenuity-c85bdb0deeb4 (accessed August 19, 2022).

Littman, Michael L, et al. Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) Study Panel Report. Stanford University. September 2021. http://ai100.stanford.edu/2021-report (accessed August 19, 2022).

Manheim, David. Overoptimization Failures and Specification Gaming in Multi-agent Systems. Deep AI. October 16, 2018. https://deepai.org/publication/overoptimization-failures-and-specification-gaming-in-multi-agent-systems (accessed August 19, 2022).

Nguyen, Thanh Thi, and Vijay Janapa Reddi. Deep Reinforcement Learning for Cyber Security. IEEE Transactions on Neural Networks and Learning Systems. IEEE, 2021. 1-17.

Omohundro, Stephen M. The Basic AI Drives. In Artificial General Intelligence 2008: Proceedings of the First AGI Conference, 483-492. Amsterdam: IOS Press, 2008.

Panfili, Martina, Alessandro Giuseppi, Andrea Fiaschetti, Homoud B. Al-Jibreen, Antonio Pietrabissa, and Francesco Delli Priscoli. A Game-Theoretical Approach to Cyber-Security of Critical Infrastructures Based on Multi-Agent Reinforcement Learning. 2018 26th Mediterranean Conference on Control and Automation (MED). IEEE, 2018. 460-465.

Pico-Valencia, Pablo, and Juan A Holgado-Terriza. Agentification of the Internet of Things: A Systematic Literature Review. International Journal of Distributed Sensor Networks 14, no. 10 (2018).

Pomerleu, Mark. US Army network modernization sets the stage for JADC2. C4ISRNet. February 9, 2022. https://www.c4isrnet.com/it-networks/2022/02/09/us-army-network-modernization-sets-the-stage-for-jadc2/ (accessed August 19, 2022).

Russell, Stuart. Human Compatible: Artificial Intelligence and the Problem of Control. New York: Viking, 2019.

Shah, Rohin. Reframing Superintelligence: Comprehensive AI Services as General Intelligence. AI Alignment Forum. January 8, 2019. https://www.alignmentforum.org/posts/x3fNwSe5aWZb5yXEG/reframing-superintelligence-comprehensive-ai-services-as (accessed August 19, 2022).

Shahar, Avin, and SM Amadae. Autonomy and machine learning at the interface of nuclear weapons, computers and people. In The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk, by Vincent Boulanin, 105-118. Stockholm: Stockholm International Peace Research Institute, 2019.

Trevino, Marty. Cyber Physical Systems: The Coming Singularity. Prism 8, no. 3 (2019): 4.

Turner, Alexander Matt, Logan Smith, Rohin Shah, Andrew Critch, and Prasad Tadepalli. Optimal Policies Tend to Seek Power. arXiv:1912.01683, 2021: 8-9.

Winder, Phil. Automating Cyber-Security With Reinforcement Learning. Winder.AI. n.d. https://winder.ai/automating-cyber-security-with-reinforcement-learning/ (accessed August 19, 2022).

Zeng, Andy, et al. Socratic Models: Composing Zero-Shot Multimodal Reasoning with Language. arXiv:2204.00598 (arXiv), April 2022.

Zewe, Adam. Does this artificial intelligence think like a human? April 6, 2022. https://news.mit.edu/2022/does-this-artificial-intelligence-think-human-0406 (accessed August 19, 2022).

Zwetsloot, Remco, and Allan Dafoe. Thinking About Risks From AI: Accidents, Misuse and Structure. Lawfare. February 11, 2019. https://www.lawfareblog.com/thinking-about-risks-ai-accidents-misuse-and-structure (accessed August 19, 2022).

[1] (Zewe 2022)

[2] (Littman, et al. 2021)

[3] (Jumper, et al. 2021)

[4] (Brown, et al. 2020)

[5] (Gary, Davis and Aaronson 2022)

[6] (Buchanan, et al. 2020)

[7] (Avatrade Staff 2021)

[8] (Russell 2019, 9-10)

[9] (Zwetsloot and Dafoe 2019)

[12] (Etzioni 2016)

[13] Go is an ancient Chinese strategy board game.

[14] (Byford 2016)

[15] (Anderson 2019, 150)

[16] (Kegel 2021)

[17] (Krakovna 2020)

[18] (Omohundro 2008, 483-492)

[19] Ibid., 484.

[20] (Turner, et al. 2021, 8-9)

[21] (Baker, et al. 2020)

[22] Ibid.

[23] (Bostrom 2012, 71-85)

[24] (GCN Staff 2021)

[25] (Pomerleu 2022)

[26] (Berggren, et al. 2021)

[27] (Nguyen and Reddi 2021)

[28] Ibid.

[29] (Edison 2019)

[30] (Panfili, et al. 2018)

[31] (Winder n.d.)

[32] (Manheim 2018)

[33] Ibid.

[34] (Zeng, et al. 2022)

[35] (Drexler 2019, 18)

[36] Ibid.

[37] (Shah 2019)

[38] (Duettmann 2022)

[39] (Trevino 2019)

[40] (Pico-Valencia and Holgado-Terriza 2018)

Visit link:
Artificial Intelligence, Critical Systems, and the Control Problem - HS Today - HSToday

New artificial intelligence software has worrisome implications – The Ticker

Art produced by artificial intelligence is popping up more and more on people's feeds without them knowing.

This art can range from simple etchings to surrealist imagery. It can look like a bowl of soup or a monster or cats playing chess on a beach.

While a boom in AI that has the capacity to create art has been electrifying the high tech world, these new developments in AI have many worrisome implications.

Despite positive uses, newer AI systems have the potential to serve as tools of misinformation, create bias and undervalue artists' skills.

In the beginning of 2021, advances in AI created deep-learning models that could generate images simply by being fed a description of what the user was imagining.

This includes OpenAI's DALL-E 2, Midjourney, Hugging Face's Craiyon, Meta's Make-A-Scene, Google's Imagen and many others.

With the help of skillful language and creative ideation, these tools marked a huge cultural shift and reduced the need for technical human labor.

A San Francisco-based AI company launched DALL-E (paying homage to WALL-E, the 2008 animated movie, and Salvador Dalí, the surrealist painter) last year, a system that can create digital images simply by being fed a description of what the user wants to see.

However, it didn't immediately capture the public interest.

It was only when OpenAI introduced DALL-E 2, an improved version of DALL-E, that the technology began to gain traction.

DALL-E 2 was marketed as a tool for graphic artists, allowing them shortcuts to creating and editing digital images.

Additionally, restrictive measures were added to the software to prevent its misuse.

The tool is not yet available to everyone. It currently has 100,000 users globally, and the company hopes to make it accessible to at least 1 million in the near future.

"We hope people love the tool and find it useful. For me, it's the most delightful thing to play with we've created so far. I find it to be creativity-enhancing, helpful for many different situations, and fun in a way I haven't felt from technology in a while," OpenAI CEO Sam Altman wrote.

However, the new technology has many alarming implications. Experts say that if this sort of technology were to improve, it could be used to spread misinformation, as well as generate pornography or hate speech.

Similarly, AI systems might show bias against women and people of color because the data is being pulled from image pools and online text that exhibit a similar bias.

"You could use it for good things, but certainly you could use it for all sorts of other crazy, worrying applications, and that includes deep fakes," Professor Subbarao Kambhampati told The New York Times. Kambhampati teaches computer science at Arizona State University.

The company's content policy prohibits harassment, bullying, violence and generating sexual and political content. However, users who have access can still create any sort of imagery from the data set.

"It's going to be very hard to ensure that people don't use them to make images that people find offensive," AI researcher Toby Walsh told The Guardian.

Walsh warned that the public should generally be more wary of the things they see and read online, as fake or misleading images are currently flooding the internet.

The developers of DALL-E are actively trying to fight against the misuse of their technology.

For instance, researchers are attempting to mitigate potentially dangerous content in the training dataset, particularly imagery that might be harmful toward women.

However, this cleansing process also results in the generation of fewer images of women, contributing to an erasure of women from the model's output.

"Bias is a huge industry-wide problem that no one has a great, foolproof answer to," said Miles Brundage, head of policy research at OpenAI. "So a lot of the work right now is just being transparent and upfront with users about the remaining limitations."

However, OpenAI is not the only company with the potential to wreak havoc in cyberspace.

While OpenAI did not disclose its code for DALL-E 2, a London technology startup, Stability AI, shared the code for a similar image-generating model for anyone to use, rebuilding the program with fewer restrictions.

The company's founder and CEO, Emad Mostaque, told The Washington Post he believes making this sort of technology available to the public is necessary, regardless of the potential dangers. "I believe control of these models should not be determined by a bunch of self-appointed people in Palo Alto," he said. "I believe they should be open."

Mostaque is displaying an innately reckless strain of logic. Allowing these powerful AI tools to fall into the hands of just anyone will undoubtedly result in drastic, wide-scale consequences.

Technology, particularly software like DALL-E 2, can easily be misused as a tool to spread hate and misinformation, and therefore needs to be regulated before it's too late.


Three Keys to Implementing Artificial Intelligence in Drug Discovery – Pharmacy Times

AI-based technologies are increasingly being used for things such as virtual screening, physics-based biological activity assessment, and drug crystal-structure prediction.

Despite the buzz around artificial intelligence (AI), most industry insiders know that the use of machine learning (ML) in drug discovery is nothing new. For more than a decade, researchers have used computational techniques for many purposes, such as finding hits, modeling drug-protein interactions, and predicting reaction rates.

What is new is the hype. As AI has taken off in other industries, countless start-ups have emerged promising to transform drug discovery and design with AI-based technologies for things such as virtual screening, physics-based biological activity assessment, and drug crystal-structure prediction.

Investors have made huge bets that these start-ups will succeed. Investment reached $13.8 billion in 2020, and more than one-third of large-pharma executives report using AI technologies.

Although a few AI-native candidates are in clinical trials, around 90% remain in discovery or preclinical development, so it will take years to see if the bets pay off.

Artificial Expectations

Along with big investments come high expectations: drug the undruggable, drastically shorten timelines, virtually eliminate wet-lab work. Insider Intelligence projects that discovery costs could be reduced by as much as 70% with AI.

Unfortunately, it's just not that easy. The complexity of human biology precludes AI from becoming a magic bullet. On top of this, data must be plentiful and clean enough to use.

Models must be reliable, prospective compounds need to be synthesizable, and drugs have to pass real-life safety and efficacy tests. Although this harsh reality hasn't slowed investment, it has led to fewer companies receiving funding, to devaluations, and to the discontinuation of some more lofty programs, such as IBM's Watson AI for drug discovery.

This begs the question: Is AI for drug discovery more hype than hope? Absolutely not.

Do we need to adjust our expectations and position for success? Absolutely, yes. But how?

Three Keys to Implementing AI in Drug Discovery

Implementing AI in drug discovery requires reasonable expectations, clean data, and collaboration. Let's take a closer look.

1. Reasonable Expectations

AI can be a valuable part of a company's larger drug discovery program. But, for now, it's best thought of as one option in a box of tools. Clarifying when, why, and how AI is used is crucial, albeit challenging.

Interestingly, investment has largely fallen to companies developing small molecules, which lend themselves to AI because they're relatively simple compared to biologics, and also because there are decades of data upon which to build models. There is also great variance in the ease of applying AI across discovery, with models for early screening and physical-property prediction seemingly easier to implement than those for target prediction and toxicity assessment.

Although the potential impact of AI is incredible, we should remember that good things take time. Pharmaceutical Technology recently asked its readers to project how long it might take for AI to reach its peak in drug discovery, and by far the most common answer was more than nine years.

2. Clean Data

"The main challenge to creating accurate and applicable AI models is that the available experimental data is heterogeneous, noisy, and sparse, so appropriate data curation and data collection is of the utmost importance."

This quote from a 2021 Expert Opinion on Drug Discovery article speaks wonderfully to the importance of collecting clean data. While it refers to ADMET and activity-prediction models, the assertion also holds true in general: AI requires good data, and lots of it.

But good data are hard to come by. Publicly available data can be inadequate, forcing companies to rely on their own experimental data and domain knowledge.

Unfortunately, many companies struggle to capture, federate, mine, and prepare their data, perhaps due to skyrocketing data volumes, outdated software, incompatible lab systems, or disconnected research teams. Success with AI will likely elude these companies until they implement technology and workflow processes that let them do exactly that.

3. Collaboration

Companies hoping to leverage AI need a full view of all their data, not just bits and pieces. This demands a research infrastructure that lets computational and experimental teams collaborate, uniting workflows and sharing data across domains and locations. Careful process and methodology standardization is also needed to ensure that results obtained with the help of AI are repeatable.

Beyond collaboration within organizations, key industry players are also collaborating to help AI reach its full potential, making security and confidentiality key concerns. For example, many large pharma companies have partnered with start-ups to help drive their AI efforts.

Collaborative initiatives, such as the MELLODDY Project, have formed to help companies leverage pooled data to improve AI models, and vendors such as Dotmatics are building AI models using customers' collective experimental data.

About the Author

Haydn Boehm is Director of Product Marketing at Dotmatics, a leader in R&D scientific software connecting science, data, and decision-making. Its enterprise R&D platform and scientists' favorite applications drive efficiency and accelerate innovation.


Artificial Intelligence (AI) Robots Market Projected to Reach worth $35.3 billion by 2026 Exclusive Report by MarketsandMarkets – GlobeNewswire

Chicago, Sept. 01, 2022 (GLOBE NEWSWIRE) -- "Artificial Intelligence (AI) Robots Market by Robot Type (Service and Industrial), Technology (Machine Learning, Computer Vision, Context Awareness, and NLP), Offering, Application, and Geography (2021-2026)". Players profiled in this report are SoftBank (Japan), NVIDIA (US), Intel (US), Microsoft (US), IBM (US), Hanson Robotics (China), Alphabet (US), Xilinx (US), ABB (Switzerland), Fanuc (Japan), Harman International (US), Kuka (Germany), and Blue Frog Robotics (France).


Browse in-depth TOC on the Artificial Intelligence (AI) Robots Market: 178 Tables, 81 Figures, 253 Pages.

NVIDIA develops GPUs and delivers value to its consumers through PC, mobile, and cloud architectures. From an initial focus on PC graphics, the company now emphasizes machine learning and various other AI technologies. NVIDIA addresses four large markets: gaming, visualization, data center, and automotive. NVIDIA has two reportable segments: Graphics and Compute & Networking. The Graphics segment includes GeForce GPUs for gaming and PCs, the GeForce NOW game-streaming service and related infrastructure, and solutions for gaming platforms; Quadro/NVIDIA RTX GPUs for enterprise design; GRID software for cloud-based visual and virtual computing; and automotive platforms for infotainment systems.

Intel provides computing, networking, data storage, and communication solutions worldwide. The company designs and develops key products and technologies that power the cloud and smart, connected world. Intel delivers computer, networking, and communication platforms to a broad set of customers, including OEMs, original design manufacturers (ODMs), cloud and communications service providers, and industrial, communications, and automotive equipment manufacturers. The company manufactures semiconductor chips, supplies the computing and communications industries with chips, boards, systems, and software that are integral in computers, servers, and networking and communications products.


This research report categorizes the AI Robots market based on offering, robot type, technology, deployment mode, application and region.

AI Robots Market, by offering

AI Robots Market, by Robot Type

AI Robots Market, by Technology

AI Robots Market, by Deployment mode

AI Robots Market, by Application

Implementing automation technology and installing industrial robots throughout the production processes has helped industrial businesses enable human employees to dedicate more time to other demanding projects. This has improved quality, reduced risks for associates with dangerous tasks, and lowered the overall operational costs. As labor costs rise, automation technologies come as alternate options. Robots help complete monotonous tasks more quickly and consistently than humans.

With the adoption of technologies such as cloud computing, robots are now becoming networked. For instance, Ozobot & Evollve (US) offers Evo, which is equipped with OzoChat software for worldwide messaging between Evo robots. These networked robots can potentially be hacked, and their abilities can be adversely used. Also, the global military & defense sector has started considering AI-based robots as a vital part of any military fleet.

AI-integrated robots are gaining traction with the increasing requirement of social robots to interact with humans and for assistance, among others. Assistant robots need to perform various tasks involving home security, patient care, companionship, and elderly assistance. Companies are now increasingly focusing on developing robots that are suitable for the entire family and excel in performing the abovementioned tasks.

Related Reports:

Artificial Intelligence in Manufacturing Market by Offering (Hardware, Software, and Services), Industry, Application, Technology (Machine Learning, Natural Language Processing, Context-aware Computing, Computer Vision), & Region (2022-2027)


Intel ups its game in Artificial Intelligence; takes it to Indian schools – The Financial Express

As Artificial Intelligence (AI) becomes more mainstream, technology companies such as Intel seem to have latched on to the trend. Intel plans to launch several initiatives, such as AI for future workforce and AI for current workforce, by the end of this year with an aim to build a skill-ready workforce, Shweta Khurana, senior director, Asia Pacific and Japan (APJ), government partnerships and initiatives, global government affairs, Intel, told FE Education Online. AI for future workforce will cater to those aged 18 years and above, and AI for current workforce is for professionals, with a primary focus on women-driven small and medium enterprises (SMEs), Khurana said. The programme will be delivered virtually by an Intel-certified coach.

As per the company, the curriculum designed for AI for future workforce is technical; however, students do not require any prior domain knowledge. Furthermore, projects under the programme are focused on industrial impacts such as common trade applications, predictive maintenance, viral post protection and insurance fraud protection, among others. Through virtual training in a real-world environment for three months, learners will be exposed to the challenges and how to build solutions for them, Khurana added.

Earlier initiatives from Intel included Intel AI for Youth, Responsible AI for Youth and AI for All, based on the Intel AI for Citizens program. Under these three initiatives, over 3,50,000 students have been trained in AI skills since 2019.

Under the Intel AI for Youth programme, learners acquire technical skills in data science, computer vision and natural language processing, as well as social skills focused on AI ethics and biases and AI solution-building. For this, Intel has collaborated with the Central Board of Secondary Education (CBSE) and the Ministry of Education (MoE) on an AI curriculum for students, setting up focused AI skill labs, and creating AI readiness by skilling facilitators in CBSE schools.

The Responsible AI for Youth initiative empowers government school students in grades eight through 12 with a new-age technology mindset, relevant skill sets, and access to required tools. Intel has collaborated with the Ministry of Electronics and Information Technology (MeitY), Government of India (GoI), and the National e-Governance Division (NeGD) to launch this programme.

In July 2021, Intel launched the AI For All initiative in collaboration with the MoE with the purpose of creating a basic understanding of AI for everyone in India. AI For All is a four-hour, self-paced learning program that demystifies AI in an inclusive manner. It is applicable to a student, a stay-at-home parent, a professional in any field, or even a senior citizen. The programme aims to demystify AI for one million Indian citizens in its first year.

Furthermore, in March 2022, Intel joined hands with the Department of Science and Technology (DST), GoI, for its programme Building AI Readiness among Young Innovators, which targets students in grades 6 to 10 enrolled under DST's INSPIRE Awards MANAK scheme. The programme aims to build an AI-ready generation by empowering students with the knowledge and skills to leverage AI in an inclusive way.

Intel ups its game in Artificial Intelligence; takes it to Indian schools - The Financial Express

The future of AI in music is now. Artificial Intelligence was in the music industry long before FN Meka. – Grid

Music has forever been moved by technology: from the invention of the phonograph, to Bob Dylan pivoting from acoustic to electric guitar, to the ubiquity of streaming platforms and, most recently, an ambitious attempt at crossing AI with commercial music.

FN Meka, introduced in 2021 as a virtual rapper whose lyrics and beats were constructed with proprietary AI technology, had a promising rise.

But just days after he signed on with Capitol Records (the label that carried The Beatles, Nat King Cole and The Beach Boys) and released his debut track "Florida Water," the record company dropped him. His pink slip was a response in part to fans and activists widely criticizing his image (a digital avatar with face tattoos, green braids and a golden grill) and decrying his blend of stereotypes and slur-infused lyrics.

The AI artist, voiced by a real person and created by a company called Factory New, was not, technologically, a groundbreaking experiment. But it was a needle-mover for a discussion that is imminent within the industry: how AI will continue to shape the way we experience music.

In 1984, classical trombonist George Lewis used three Apple II computers to program Yamaha digital synthesizers to improvise along with a live quartet. The resulting record, a syrupy and spacey co-creation of computer and human musicians, was titled "Rainbow Family" and is considered by many to be the first instance of artificially intelligent music.

In the years since, advances in mixing boards popularized the practice of sampling and interpolation, igniting debates about remixing old songs to make new ones (art form or cheap trick?), and Auto-Tune became a central tool in singers' recorded and onstage performances.

FN Meka isn't the only AI artist out there. Some have been introduced, and lasted, with less commercial backing. YONA, a virtual singer-songwriter and AI poet made by Ash Koosha, has performed live at music festivals around the globe, including MUTEK in Montreal, Rewire in the Netherlands and the Barbican in the U.K.

In fact, the most crucial and successful partnerships between AI and music have been "under the hood," said Patricia Alessandrini, a composer, sound artist and researcher at Stanford University's Center for Computer Research in Music and Acoustics.

During the pandemic, the music world leaned heavily on digital tools to overcome challenges of sharing and playing music while remote, Alessandrini said. JackTrip Virtual Studio, for example, was an online platform used to teach university music lessons while students were remote. It minimized time delay, making audiovisual synchronicity much easier, and was born from machine learning sound research.

And for producers who deal with large music files and digital compression, AI can play a role in signal processing, Alessandrini said. This is important for sound engineers and musicians alike, saving time and helping them more smoothly create, or export, big records.

There are beneficial applications for technology and music to intersect when it comes to accessibility, she said. Instruments have been made using AI that require less strength or pressure to generate sound, for example, allowing those with injuries or disabilities to play with eye movements alone.

Alessandrini's own projects include the Piano Machine, which uses computers and voltages as "fingers" to create new sounds, and Harp Fingers, a technology that allows users to play a harp without physically touching it.

On a meta level, algorithms are the ubiquitous drivers of online streaming platforms: Spotify, Apple Music, SoundCloud, YouTube and others are constantly using machine learning, in less transparent ways, to personalize playlists, releases, lists of nearby concerts and music recommendations.

Less agreed upon is the concept of an AI artist itself. Reactions have been split: some remain loyal to the humanity of art; some argued that if certain artists were indistinguishable from AI, then they deserved to be replaced; others welcomed the newness; and many have feelings that fall somewhere in between.

"With any cultural form, part of what you're dealing with are people's expectations for what things sound like or what an artist looks like," Oliver Wang, a music writer and sociology professor at California State University, Long Beach, told Grid.

Some experts argue that those questions leave out a critical point: Whatever the technology, there is always a human behind the work, and that should count.

"Sometimes people don't know or see how much human work is behind artificial intelligence," said Adriana Amaral, a professor at UNISINOS in Brazil and an expert in pop culture, influencers and fan studies. "It's a team of people: developers, programmers, designers, people from production and marketing."

But this misunderstanding isn't always the fault of the public, said Alessandrini. It often comes down to marketing. "It's more exciting to say that something's made entirely by AI," Alessandrini said. This was how FN Meka was marketed and promoted online: as an AI artist. But while his lyrics, sound and beats were AI-generated, they were then performed by a human and animated, cartoon-style.

If it sounds strange that one would become a dedicated fan of a virtual persona, it shouldn't, Amaral said. The world of competitive video gaming, which is nothing without its on-screen characters, is a multibillion-dollar industry that sells out arenas worldwide.

Still, music purists and audiophiles (and any person who appreciates music as an experience rather than just entertainment) may very well resist AI musicians. In particular, Alessandrini said, AI is better at generating content quickly and copying genres than at innovating new ones, a result of training its computing models largely on the music that already exists.

"When a rap artist has these different influences and their own specific cultural experience, then that's the kind of magical thing that they use to create," Alessandrini said. "You can say that Bobby Shmurda is one of the first Brooklyn drill artists because of a particular song. So that's a [distinctly] human capacity, compared to AI."

Alessandrini likens this artistic experience to the advancements of AI in medicine: the robotic technologies used during surgeries that are more efficient and mitigate the risk of human error. But, she said, there are some things that humans do better: caring for a patient, understanding their suffering.

It's hard to imagine AI vocals ever reaching the emotional and beautifully human depths of, say, a Nina Simone or an Ann Peebles, or channeling the authentic camaraderie and bounce of a group like OutKast.

In 2017, the French government commissioned mathematician and politician Cédric Villani to lay ambitious groundwork for the country's artificially intelligent (AI) future.

His strategy, one that considered economics, ethics and education, foremost straddled the thinning line between creation and consumption.

"The division between the noncreative machine and the creative human is ever less clear-cut," he wrote. Creativity, he went on to say, was no longer just an artist's skill; it was a necessary tool for a world of co-inhabitance, machine and human together.

Is that what is happening?

One can't talk about music on grand scales without also talking about money. Though FN Meka was a failure, AI has ties to the music sphere too strong to be broken because one AI rapper got cut from a label. And it feels inevitable that another big record company or music festival will give it a go.

Why? It might all come down to cost, say experts and music listeners who run the cynicism gamut.

Wang said he has a sneaking suspicion that record companies and executives see AI musicians as a way to save money on royalty payments and travel costs going forward.

Beyond the money-hungry music industry, there is also room for a lot of good moving forward with AI, said Amaral. She hopes FN Meka's image, and how he was received, was a wake-up call for whatever AI artist inevitably comes next. She also described YONA, which she saw in concert in Japan, as a thin, white, able-bodied pop star, not unlike many who dominate the music scene today.

"We have all the technological tools to make someone who could be green, or fat, or any way we like, and we still are stuck on these patterns," she said.

"What will the landscape look like five or 10 or 15 years from now?" Wang asks. Pop music, despite people's cynicism, rarely stays static. It's constantly changing, and perhaps these computer-based attempts at creating artists will be part of that change.

Thanks to Dave Tepps for copy editing this article.
