Daily Archives: May 4, 2021

Arize AI Named to Forbes AI 50 List of Most Promising Artificial Intelligence Companies of 2021 – PRNewswire

Posted: May 4, 2021 at 8:10 pm

BERKELEY, Calif., April 30, 2021 /PRNewswire/ -- Arize AI, the leading Machine Learning (ML) Observability company, has been named to the Forbes AI 50, a list of the top private companies using artificial intelligence to transform industries.

The Forbes AI 50, now in its third year, recognizes private North American companies using artificial intelligence in ways that are fundamental to their operations, such as machine learning, natural language processing, and computer vision.

Today, companies spend millions of dollars developing and implementing ML models, only to see a myriad of unexpected performance degradation issues arise. Models that don't perform after the code is shipped are painful to troubleshoot and negatively impact business operations and results.

"Arize AI is squarely focused on the last mile of AI: models that are in production and making decisions that can cost businesses millions of dollars a day," said Jason Lopatecki, co-founder and CEO of Arize. "We are excited that the AI 50 panel recognizes the importance of software that can watch, troubleshoot, explain and provide guardrails on AI, as it is deployed into the real world, and views Arize AI as a leader in this category."

In partnership with Sequoia Capital and Meritech Capital, Forbes evaluated hundreds of submissions from the U.S. and Canada. A panel of expert AI judges then reviewed the finalists to hand-pick the 50 most compelling companies.

About Arize AI: Arize AI was founded by leaders in the Machine Learning (ML) Infrastructure and analytics space to bring better visibility and performance management over AI. Arize AI built the first ML Observability platform to help make machine learning models work in production. As models move from research to the real world, we provide a real-time platform to monitor, explain and troubleshoot model/data issues.

Media Contact: Krystal Kirkland [emailprotected]

SOURCE Arize AI

http://www.arize.com


Health At Scale Recognized In "AI And Data" Category Of Fast Company’s 2021 World Changing Ideas Awards – PRNewswire

Posted: at 8:10 pm

NEW YORK, May 4, 2021 /PRNewswire/ -- The winners of Fast Company's 2021 World Changing Ideas Awards were announced today, honoring the businesses, policies, projects, and concepts that are actively engaged and deeply committed to pursuing innovation when it comes to solving health and climate crises, social injustice, or economic inequality.

Health at Scale, a machine intelligence company that uses proprietary advances in artificial intelligence and machine learning to match individuals to the next best action in real time, was recognized in the AI and Data category for its Precision Navigation platform. Founded by machine learning and clinical faculty from MIT, Harvard, Stanford, and the University of Michigan, the company's mission is to bring precision delivery to health care, using tens of thousands of health variables from over a hundred million lives to generate personalized and precise guidance for individual patients seeking care providers. Health at Scale's Precision Navigation looks at providers in a hyper-personalized, outcomes-based way, providing each individual with a personalized rating of providers in the geography where they want to receive care.

"We're honored to be included in this year's World Changing Ideas showcase," said Health at Scale CEO and Founder Zeeshan Syed. "Health care today is imprecise and impersonal, which makes care inefficient, less effective, and more costly. Our Precision Navigation technology looks to change this, using machine intelligence to accurately match patients to physicians, facilities and care settings likely to produce optimal outcomes for them individually. We're thankful for Fast Company highlighting our work so we can continue to change health care, focusing on personalization, not generalization."

Now in its fifth year, the World Changing Ideas Awards honor inspirational innovation for the good of society, with Health and Wellness and AI & Data among the most popular categories. A panel of eminent Fast Company editors and reporters selected winners and finalists from a pool of more than 4,000 entries across transportation, education, food, politics, technology, and more. The 2021 awards feature entries from across the globe, from Brazil to Denmark to Vietnam.

Showcasing some of the world's most inventive entrepreneurs and companies tackling exigent global challenges, Fast Company's Summer 2021 issue (on newsstands May 10) highlights, among others, a lifesaving bassinet; the world's largest carbon sink, thanks to carbon-eating concrete; 3D-printed schools; an at-home COVID-19 testing kit; a mobile voting app; and the world's cleanest milk.

"There is no question our society and planet are facing deeply troubling times. So, it's important to recognize organizations that are using their ingenuity, impact, design, scalability, and passion to solve these problems," says Stephanie Mehta, editor-in-chief of Fast Company. "Our journalists, under the leadership of senior editor Morgan Clendaniel, have discovered some of the most groundbreaking projects that have launched since the start of 2020."

About Health at Scale: Health at Scale is a health care machine intelligence company that uses proprietary advances in artificial intelligence and machine learning to match individuals to the next best action in real-time and when needed most: whether it's the ideal choice of treatment, an early intervention, or the right provider. Founded by machine learning and clinical faculty from MIT, Harvard, Stanford, and the University of Michigan, the company's mission is to bring precision delivery to healthcare, using tens of thousands of health variables from over a hundred million lives to generate personalized and precise recommendations for individual patients. Health at Scale's machine intelligence is deployed at scale in real operational settings--including with some of the largest payers in the country, driving better health outcomes and affordability for its customers. The company's software solutions service a broad range of use cases: provider navigation and network design, early targeted prediction and prevention of adverse outcomes, optimized treatment planning; and fraud, waste and abuse prevention. For more information, please visit healthatscale.com.

About the World Changing Ideas Awards: World Changing Ideas is one of Fast Company's major annual awards programs and is focused on social good, seeking to elevate finished products and brave concepts that make the world better. A panel of judges from across sectors choose winners, finalists, and honorable mentions based on feasibility and the potential for impact. With the goals of awarding ingenuity and fostering innovation, Fast Company draws attention to ideas with great potential and helps them expand their reach to inspire more people to start working on solving the problems that affect us all.

SOURCE Health at Scale Corporation

https://healthatscale.com


Radiology Partners, Aidoc talk AI adoption, handling bias, FDA actions – MedTech Dive

Posted: at 8:10 pm

Artificial intelligence and machine learning have gained popularity in the medical device industry in recent years, with some top players in the space developing systems or buying their way into the competition.

Medtronic, GE and Philips have all invested in AI and machine learning, with claims that the technologies can better diagnose and treat patients. The wave of AI adoption and usage has brought new challenges to the healthcare industry, leading the FDA to consider adopting new regulatory review processes specifically for AI and machine learning technologies.

Rich Whitney, CEO of Radiology Partners, a U.S.-based radiology practice, said that while interest in AI has been growing recently, barriers in the healthcare system like fragmentation and heavy regulation have still prevented widespread adoption.

One barrier is low adoption among physicians. However, Whitney said that Radiology Partners' recent partnership with AI medical imaging company Aidoc is intended to help address the problem.

"You need to make sure that physicians are on board and are properly trained and are really champions of this technology.Otherwise, it's not going to work. It's not going to be utilized;it's not going to get you the results you'd expect,"Whitney said.

Nina Kottler, associate CMO for clinical AI with Radiology Partners, contends a primary benefit of using AI systems in radiology is improving patient care by identifying health concerns more quickly. Kottler said one of the crucial features of Aidoc's algorithms is a triage system, which can flag critical exam results for radiologists to prioritize.

"When your patient has an intracranial hemorrhage or a pulmonary embolist, these are findings that if you're not detecting them ... they can have catastrophic outcomes,"Kottler said. "The earlier you get to those things, the better it is for patient outcomes."

The algorithms are also used for oncology exams, where they help identify if diseases are getting better or worse, according to Kottler.

Watchdogs, though, worry the pendulum could swing too quickly in the direction of AI. ECRI, for example, warns the technologies may be unreliable and misrepresent some patient populations, which could lead to misdiagnoses and inappropriate care decisions.

In a conversation with MedTech Dive, Whitney, Kottler and Aidoc CEO Elad Walach discuss how interest in AI has grown, handling potential bias and regulatory changes at the FDA.

This interview has been edited for clarity and brevity.

MEDTECH DIVE: How have you seen AI technology change over the last several years as this space has received more attention?

RICH WHITNEY: The technology is moving very, very rapidly. But we haven't yet crossed into that part of the evolution here where there is a significant amount of use and actual impact. The partnership with Aidoc really creates the prospect for much more widespread use of AI and really moving us into the future that we all envision, which is radiologists being enabled by AI and being able to add significantly more value to the health system.

NINA KOTTLER: There's been a lot of improvement in the technology, and a lot more options in terms of what kind of algorithms are available. But the technology on its own is insufficient. And I think what has been missing has been that connection with the radiologists. The technology is meant to be deployed in a clinical environment, and because there hasn't been a lot of deployment, there haven't been a lot of lessons in how to do that right.

AI systems have to be deployed with the direct assistance of radiologists to make sure that they understand how these clinical systems work. We need to make sure it's integrated into their workflow, and then we need to figure out how to monitor these systems over time to make sure that both the AI and the clinician are working together to improve patient care. And that's not simple.

Do you see providers prioritizing and investing more in AI systems today than two or three years ago?

ELAD WALACH: I can definitely say yes. By the way, COVID, even though it's difficult, really impacted the trend of healthcare executives being able to see value and return on investment from software-based solutions. They know that there is value to be captured by utilizing the right technical infrastructure and software. So I think that in terms of prioritization, absolutely. But I think there is a lot of momentum building up by physicians and radiologists using the technology, understanding that there is value and analyzing what that value is.

The FDA is considering how best to regulate AI. For example, whether to allow algorithms to be updated without review or remain "locked." How will a change in this review process impact the industry?

KOTTLER: Instead of just locking an algorithm in at one point in time, and then waiting for that algorithm to improve and redoing that evaluation, the FDA is looking at evaluating the vendor and their practices to see if the way that the vendor updates things themselves is good enough. And if the vendor's processes are good enough, then the output should be good enough. So, the agency won't actually have to check the output; they can check the vendor processes. They're just in the very beginning of it. I think they're beta testing it with a few big groups right now, so it's going to take a little while. But I think it's quite fascinating.

WALACH: It is a difficult problem the FDA is facing, and a lot of it is the flood of products that are coming to market. The question that the agency is tasked with is: How do we maintain safety and efficacy while making sure that we can bring innovation to the market? The agency has been moving quite quickly in terms of creating new processes, new pathways and being very communicative with the companies. So, you've asked: Are you waiting for the FDA to do something? In some sense, yes, but it's a very active process. It's an active engagement with the agency. I do think that there are some exciting regulatory changes ahead.

One issue continually brought up with AI is bias built into algorithms. How do you work to prevent this from happening and then fix the problem if it is recognized?

WALACH: You want to make sure that, on the one hand, you trained the model on a very robust, diverse data set that isn't biased towards a certain population. On the other hand, we want to make sure that even after we release a product to market, we may encounter bias that was unexpected initially. We want to make sure that we keep monitoring performance over time. For me, it's battling biases with data. That data is the protector against bias in all stages of the product lifecycle.

KOTTLER: Eventually, we may end up going in the opposite direction. Right now, we're trying to have data that's as generalizable as possible so you can apply the same algorithm everywhere. But, ultimately, that means that the specificity and value for a certain patient will have to decrease, even if it's just a little bit.

As the FDA evolves, and as these AI algorithms evolve, we will be able to have an AI algorithm that's suited for a specific population, and that means it's going to be much more accurate for that population.
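One concrete way to do the post-release monitoring Walach describes is to track a performance metric per population and flag any group that drifts well below the overall number. The short Python sketch below illustrates the idea; the site labels, records and ten-point threshold are invented for illustration and are not drawn from Aidoc's or Radiology Partners' actual tooling.

from collections import defaultdict

# Hypothetical post-deployment log entries: (subgroup, prediction, ground_truth)
records = [
    ("site_A", 1, 1), ("site_A", 0, 0), ("site_A", 1, 0),
    ("site_B", 1, 1), ("site_B", 0, 1), ("site_B", 0, 1),
]

def accuracy_by_subgroup(rows):
    totals, correct = defaultdict(int), defaultdict(int)
    for group, pred, truth in rows:
        totals[group] += 1
        correct[group] += int(pred == truth)
    return {g: correct[g] / totals[g] for g in totals}

overall = sum(int(p == t) for _, p, t in records) / len(records)

# Flag any subgroup that falls well below overall performance for human review.
for group, acc in accuracy_by_subgroup(records).items():
    if acc < overall - 0.10:
        print(f"review {group}: accuracy {acc:.2f} vs overall {overall:.2f}")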

Where do you see AI use heading?

KOTTLER: The next area that we're getting into is predictive medicine. While medicine has always been about the treatment of disease, we really need to move more into the prevention of disease. AI can help us with the prevention of disease because it's detecting things that we may not be able to detect as humans. If we combine that information with the other information that we have as humans, we can start to predict which patients are more at risk.

For example, for breast cancer, maybe certain patients should be getting mammograms or their imaging studies much more frequently than others. Maybe we can identify if certain patients are at risk for developing a bone fracture because we can look at the quality of their bones and see which ones are the most at risk for developing osteoporosis. These are all preventative measures that I think we're going to get much more involved in.

We're going to combine that with information from the patient systems that are getting more prevalent, like wearables, to provide a more holistic view of the patient.


Appen combats skewed AI data to ensure end-users have the same experience – TechRepublic

Posted: at 8:10 pm

The company launched diverse training data sets for natural language processing initiatives.

Image: iStock/metamorworks

Training data provider Appen recently launched diverse training data sets for natural language processing initiatives, in an effort to ensure end users receive the same experience regardless of language variety, dialect, ethnolect, accent, race or gender.

Appen said it realized that AI projects that are based on biased or incomplete data don't work for everyone. It is enabling organizations to launch, update and operate unbiased AI models through a variety of projects and partnerships focused on the diversity of languages and dialects, the company announced on its website.

In March, a study published in the Proceedings of the National Academy of Sciences found that popular automated speech-recognition systems used for virtual assistants, closed captioning, hands-free computing and more "exhibit significant racial disparities in performance."


The report concludes "that more diverse training datasets are needed to reduce these performance differences and ensure speech recognition technology is inclusive. Language interpretation and natural language processing systems suffer from the same challenge and require the same solution."

"The quality and diversity of training data directly impacts the performance and bias present in AI models," said Mark Brayan, CEO at Appen, in a press release. "As a data partner, we can supply complete training data for many use cases to ensure AI models work for everyone. It's critical that we engage a diverse group of individuals to produce, label, and validate the data to ensure the model being trained is not only equitable, but also built responsibly."

With a goal to create AI for everyone, Appen developed a variety of projects and partnerships which focus on the diversity of languages and dialects.

As an example, the Appen website explained:

Without setting out to do so, biased AI data can set off a wave of information that is not only of little value to research but can actually be detrimental.

"Biased AI data leads to projects that can fail to deliver the expected business results and harm individuals they are supposed to benefit," said Dr. Judith Bishop, senior director of AI specialists at Appen. "The scale and complexity of AI projects makes it impossible for most companies to acquire sufficient unbiased high-quality data without partnering with an AI data expert." She added, "Developing the most diverse and expert crowd of data annotators provides the industry with a clearly differentiated resource for building fair and ethical AI projects."



MeetKai – The Next Gen AI Assistant – Launches Today in the US – PRNewswire

Posted: at 8:10 pm

LOS ANGELES, May 3, 2021 /PRNewswire/ -- MeetKai [www.meetkai.com] - the AI Personalized Conversational Search company - today launches the first voice-operated AI assistant that uses conversation, personalization, and curation to assist its users. The MeetKai app is available now for free in the US.

Unlike other AI assistants, MeetKai's sophisticated voice recognition and AI integration remembers its users' preferences and context to respond within seconds. This enables Kai to respond to questions as specific as: "Hey Kai, I wanna watch a Joaquin Phoenix movie that's not The Joker".

MeetKai's unique features include:

James Kaplan, MeetKai Co-Founder & CEO, comments: "As a user and enthusiast of known AI assistants myself, I started finding ways to make them better and took on the challenge to build a true virtual assistant in the form of the MeetKai app".

Kaplan founded MeetKai two years ago, pioneering a revolutionary new approach to search called AI Personalized Conversational Search. He comments: "We're very excited to release our AI technology for the first time, through a fun AI Virtual Assistant, accessible to everyone. The MeetKai app is a great first step for our technology capabilities, and we can't wait to share more".

Weili Dai, MeetKai Co-Founder & Chairwoman, adds: "People are ready for a leapfrog beyond the basic and limited search that currently dominates the market. That's why we redefined AI assistance by giving each user a completely unique experience...and we made it free. Limits blur when there's true passion and commitment and MeetKai will be at the forefront of AI, conversation, and innovation".

MeetKai is now available at no cost on iOS, Google Play, and the AppGallery. The app is currently available in the US, Mexico, Europe, and Asia, and will launch in more countries, with more industry technology, in the near future.

About MeetKai Inc.: MeetKai Inc. is a pioneering company in language recognition and search technologies. Its unmatched portfolio includes a true multi-turn search recognition system. MeetKai's technology is deployed globally through iOS, Google Play, the MeetKai website, and the AppGallery. Visit http://www.meetkai.com for more info & the latest MeetKai news.

Cindy Fischer 818-720-9241 [emailprotected]

SOURCE MeetKai


Sourcing teams explore Bid Ops predictive sourcing AI at SIG Procurement Technology Summit – PRNewswire

Posted: at 8:10 pm

SAN FRANCISCO, May 4, 2021 /PRNewswire/ -- The 2021 SIG Procurement Technology Summit runs May 4-6, 2021, online this year due to the pandemic, and sourcing professionals are hungrier than ever for advanced technology that helps them predict and win faster savings.

Supply disruptions from the so-called "Big Freeze" in Texas to the ship blocking the Suez Canal have overwhelmed sourcing teams -- not to mention the global microchip shortage. As supply complexity increases due to the acceleration of new product launches, sourcing teams at SIG's Summit will be looking for technology solutions that can help them do more with less.

"The value of procurement within companies has fundamentally changed," says former Chief Procurement Officer Matt Ziskie, who also serves on the Bid Ops Board of Directors. "Category managers want to spend their time focused on the highest value activity that supports their key stakeholders. They shouldn't have to spend time calling up five friends to find out if they are getting a good price or not. Bid Ops shows procurement teams what they should be paying and gets the whole process out of email-and-spreadsheets, which lets buyers focus on more important aspects of their job."

That's one explanation for why Bid Ops has seen a surge of interest from manufacturing firms, growing over 300% in the past year. With a roster of customers including Bel Brands, Autotruck, Dover Chemical and Kurita Water, Bid Ops' AI-driven predictive sourcing is turning heads in industries that have long embraced an Excel-based approach to sourcing.

Jean-Michel Dos Remedios, a sourcing leader at Bel Brands, said of Bid Ops' capabilities: "In the procurement world, time is of the essence. Bid Ops has really helped to kickstart our digital transformation journey by giving us our time back through leveraging data and AI. These innovations make us faster, and the faster we get, the more we can accomplish, and the more successful we can be."

Bid Ops' predictive sourcing platform offers full RFP, RFQ, RFI, reverse auction, and spot-buy capability, complete with KPI dashboards for tracking a team's savings pipeline alongside automated reporting on supplier diversity and sustainability. The platform includes a messaging app and document storage to get all supplier communication out of email.

"It's the only platform where you can run a fully autonomous RFQ with only a few clicks," says Bid Ops CEO Edmund Zagorin. "Sourcing teams are famously overworked. Many sourcing teams put in heroic efforts during the pandemic to secure enough PPE while at the same time re-negotiating office leases and corporate travel agreements. Now that businesses are re-opening, it's clear that sourcing teams need additional resources to continue delivering peak performance. That's why customers are turning to Bid Ops."

SIG's Procurement Technology Summit coincides with Bid Ops welcoming new team members, including long-time collaborator Eric Buras as Head of Machine Learning and Data Science. "Our engineering team is really excited by the results our customers are seeing," says Eric. "When you can see the price paid variation for the exact same item, it becomes clear that the savings opportunities are quite compelling."

About Bid Ops: Bid Ops is the only predictive sourcing software built to keep your business ahead of the market. Your procurement team can leverage Bid Ops' predictive AI to drive 2-5x more savings by getting better quotes faster. Learn more at http://www.bidops.com

Photo(s):https://www.prlog.org/12868341

Press release distributed by PRLog

SOURCE Bid Ops

http://www.bidops.com


The EU’s Ambitious AI Regulations: Increasing Trust or Stifling Progress? – ClearanceJobs

Posted: at 8:10 pm

European Union (EU) officials have proposed new rules that could restrict and even ban some uses of artificial intelligence (AI) within its borders. That could include some technology developed by U.S. and Chinese-based tech giants. The rules would be the most significant international effort to regulate the use of AI to date.

The Coordinated Plan on Artificial Intelligence 2021 Review, put forth by the 27-nation bloc, could set a new standard for technology regulation.

If passed, the rules could affect how facial recognition, autonomous vehicles, and even the algorithms employed in online advertising are used across the EU. They could also limit the use of AI and machine learning as it applies to automated hiring, school applications and credit scores, and would ban AI outright in situations deemed risky, including government social scoring systems where individuals are judged on their behavior.

This could be the first-ever legal framework on AI, and the EU has said the new Coordinated Plan with Member States would guarantee the safety and fundamental rights of people and businesses, while also strengthening AI uptake, investment and innovation across the EU.

"With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted," Margrethe Vestager, the European Commission's executive vice president for the digital age, said in a statement. "By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way."

The EU has maintained that the new AI regulations could ensure that Europeans can trust what AI has to offer, and would create flexible rules that address the specific risks posed by AI-based systems. AI systems considered a clear threat to the safety, livelihoods and rights of people would be banned.

High-risk use of AI would include critical infrastructure, including transport, which could put the lives and health of citizens at risk; educational and vocational training, such as the scoring of exams; and law enforcement where it could interfere with people's fundamental rights. In those cases, the high-risk AI systems would be subject to strict obligations before they could be put on the market, and would require logging of activity to ensure traceability of results, high quality of datasets, a high level of security and accuracy, and appropriate human oversight measures to minimize risk.

"AI is a means, not an end," explained EU Commissioner for Internal Market Thierry Breton.

"It has been around for decades but has reached new capacities fueled by computing power," added Breton. "This offers immense potential in areas as diverse as health, transport, energy, agriculture, tourism or cyber security. It also presents a number of risks. Today's proposals aim to strengthen Europe's position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects our values and rules, and harness the potential of AI for industrial use."

This isn't the first time the EU has attempted to create regulation around new technology that surpasses anything else in the world. That effort has mainly been focused on privacy, including search results and how personal information can be used by tech firms.

Now it addresses the developing technologies of AI and machine learning, but the question is whether such a hard line could limit the efforts of the tech giants. Or is this the best course of action to ensure privacy and security and to maintain a fair and level playing field for all involved?

"First and foremost, allowing technologies to be developed unilaterally without any oversight is an effective vote for market-dominating behavior," technology industry analyst Charles King of Pund-IT told ClearanceJobs.

"We've seen it happen among tech industry behemoths, including Facebook and Google, and in countries like China, as well as government agencies in the U.S. and elsewhere," King added.

The EU holds all the cards right now, and the tech firms will have to play by its rules or ignore the European market. The latter really isn't an option.

"Along with trying to place some restraints on potentially damaging behavior, the EU is also acting from a position of historic strength," explained King. "The organization has aggressively pursued businesses that it believes are acting against the interests of consumers and markets, and has passed regulations, including GDPR, that have successfully influenced global businesses and markets."

One of the biggest concerns is whether such strict regulations simply make it too hard for businesses to play by the EU's rules. AI could be one of those areas where regulation seems stifling but in the end ensures that control over the technology is, in fact, maintained.

"The writer and futurist Arthur Clarke famously asserted, 'Any sufficiently advanced technology is indistinguishable from magic,'" said Jim Purtilo, associate professor of computer science at the University of Maryland. "If that's the definition, then today's decision systems based on artificial intelligence technologies surely qualify as magic. They are subtle, tremendously complex methods which defy full explanation as to how a given result was computed."

Part of the issue is in understanding exactly what is entailed by AI. While it is easy to think of the science-fiction version of a thinking computer or android, in reality it is just ever-more complicated algorithms.

"There has always been some aura of mystery to it. In some sense, AI is the area that stops being AI once we understand it," added Purtilo.

"Many fairly ordinary forms of computing, for example logic systems and computer memory, once fell under the heading of AI research," he noted. "What differs today is the scale of decisions that people will leave to machines. It used to be that at least programmers had an understanding of how their programs computed a result, but with AI the programmers generally can't work back to justify how an outcome was reached. That program's accuracy is not a big deal when all it is doing is tagging photos of your friends in an album on your phone, but it becomes a very big deal for individuals who fall under suspicion of police based on wide deployment of facial recognition technology."

Given that understanding of what is, and to some extent what is not, AI, the question then becomes whether the EU is taking its regulation of AI too far.

"I'm not particularly afraid of computer programs, but I'm terrified of the bureaucrats who use them thinking they escape responsibility by pretending to be mere servants of science which, by virtue of AI methods, is somehow settled even if not explained," warned Purtilo.

"I thus see the EU's move as being less about AI than it is about policy," Purtilo told ClearanceJobs. "Government practices should offer transparency and accountability, but as AI methods offer neither, these proposed regulations represent a first attempt to push back at opaque technologies that cloak the basis for impactful decisions."

That could stifle AI's development, or perhaps simply allow it to be better controlled and managed.

"Whether or not this latest effort will succeed is anyone's guess," said Pund-IT's King. "But the EU is taking action because it believes it should and because it can."


AI and gamification being used to help Aussie farmers reduce spray drift impact – ZDNet

Posted: at 8:10 pm

Image: Monash University/Screenshot

While pesticide spraying protects crops against pests, weeds, and diseases, it can also be harmful to neighbouring crops and wildlife.

This unwanted movement of pesticides, known as spray drift, could, however, potentially be tracked, thanks to a project developed by Monash University's Faculty of Information Technology together with Bard AI, PentaQuest, and AgriSci.

The project combines an artificial intelligence model and augmented reality to enable farmers to see a real-time visual representation of the possible spray drift on their phones. The presentation also allows farmers to view the impact the spray drift could have on neighbouring crops if spraying were to occur during poor conditions, such as when there are strong wind speeds.

Farmers are also able to use the system to explore "what if" scenarios to improve their spray plan and understand the potential impact of spray drift.
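For a rough sense of the kind of "what if" comparison such a tool supports, the toy Python sketch below estimates how far a droplet might be carried at different wind speeds and checks the result against a buffer distance. The simple ballistic formula and every number in it are illustrative assumptions only; the project itself relies on probabilistic modelling, not this calculation.

def drift_distance_m(wind_speed_ms, release_height_m=1.0, settling_velocity_ms=0.3):
    """Very rough estimate: horizontal distance a droplet travels before it
    settles, ignoring turbulence, evaporation and canopy effects."""
    time_to_ground_s = release_height_m / settling_velocity_ms
    return wind_speed_ms * time_to_ground_s

def spray_advice(wind_speed_ms, buffer_to_neighbour_m):
    drift = drift_distance_m(wind_speed_ms)
    if drift >= buffer_to_neighbour_m:
        return f"hold off: estimated drift {drift:.0f} m exceeds the {buffer_to_neighbour_m} m buffer"
    return f"OK to spray: estimated drift {drift:.0f} m"

# Compare "what if" wind conditions against a 20 m buffer (illustrative numbers).
for wind_ms in (2, 5, 10):
    print(f"{wind_ms} m/s -> {spray_advice(wind_ms, buffer_to_neighbour_m=20)}")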

"Information alone does not change behaviour and the use of advanced technology doesn't ensure the adoption of new platforms by farmers. By incorporating game-like design applications which drive better training and engagement outcomes, together with AI-driven decision support modelling, we're able to deliver continuous adoption and accurate decision support that informs farmers appropriately," Monash University Faculty of IT interim dean Ann Nicholson said.

Bard AI founder Ross Pearson said the solution has initially been developed to focus on spray drift for large acres but believes it could be used for other applications.

"Our solution combines leading-edge thinking and technology in behavioural science and probabilistic modelling to deliver an engaging experience for farmers that supports them through better decision-making," he said.

In March, Australian agtech firm Agerris rolled out its "drone on wheels" robot onto the Victorian-based SuniTAFE Smart Farm to support the farm's operations and train technical staff on-site.

The Digital Farmhand was developed to help farmers improve crop yield, pest and weed detection, as well as reduce the need for pesticides.

Each mobile roving robot runs on solar energy and features navigation sensors, laser sensors, infrared sensors, and cameras. It also has an artificial intelligence system that can create weed heat maps, as well as detect each individual crop and determine its yield estimation, plant size, and fruit and flowering count.



Artificial intelligence is learning how to dodge space junk in orbit – Space.com

Posted: at 8:10 pm

An AI-driven space debris-dodging system could soon replace expert teams dealing with growing numbers of orbital collision threats in the increasingly cluttered near-Earth environment.

Every two weeks, spacecraft controllers at the European Space Operations Centre (ESOC) in Darmstadt, Germany, have to conduct avoidance manoeuvres with one of their 20 low Earth orbit satellites, said Holger Krag, Head of Space Safety at the European Space Agency (ESA), in a news conference organized by ESA during the 8th European Space Debris Conference, held virtually from Darmstadt, Germany, April 20 to 23. There are at least five times as many close encounters that the agency's teams monitor and carefully evaluate, each requiring a multi-disciplinary team to be on call 24/7 for several days.

"Every collision avoidance manoeuvre is a nuisance," Krag said. "Not only because of fuel consumption but also because of the preparation that goes into it. We have to book ground-station passes, which costs money, sometimes we even have to switch off the acquisition of scientific data. We have to have an expert team available round the clock."

The frequency of such situations is only expected to increase. Not all collision alerts are caused by pieces of space debris. Companies such as SpaceX, OneWeb and Amazon are building megaconstellations of thousands of satellites, lofting more spacecraft into orbit in a single month than used to be launched within an entire year only a few years ago. This increased space traffic is causing concerns among space debris experts. In fact, ESA said that nearly half of the conjunction alerts currently monitored by the agency's operators involve small satellites and constellation spacecraft.

ESA, therefore, asked the global Artificial Intelligence community to help develop a system that would take care of space debris dodging autonomously or at least reduce the burden on the expert teams.

"We made a large historic data set of past conjunction warnings available to a global expert community and tasked them to use AI [Artificial Intelligence] to predict the evolution of a collision risk of each alert over the three days following the alert," Rolf Densing, Director of ESA Operations said in the news conference.

"The results are not yet perfect, but in many cases, AI was able to replicate the decision process and correctly identify in which cases we had to conduct the collision avoidance manoeuvre."


The agency will explore newer approaches to AI development, such as deep learning and neural networks, to improve the accuracy of the algorithms, Tim Flohrer, the Head of ESA's Space Debris Office, told Space.com.

"The standard AI algorithms are trained on huge data sets," Flohrer said. "But the cases when we had actually conducted manoeuvres are not so many in AI terms. In the next phase we will look more closely into specialised AI approaches that can work with smaller data sets."

For now, the AI algorithms can aid the ground-based teams as they evaluate and monitor each conjunction alert, the warning that one of their satellites might be on a collision course with another orbiting body. According to Flohrer, such AI-assistance will help reduce the number of experts involved and help the agency deal with the increased space traffic expected in the near future. The decision whether to conduct an avoidance manoeuvre or not for now still has to be taken by a human operator.
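The set-up Densing describes is essentially supervised learning on historical conjunction alerts: from the features available early in an alert, predict whether the event will end up warranting a manoeuvre. The Python sketch below shows that framing with scikit-learn; the feature names, toy values and labels are invented for illustration and are far simpler than the real alert data.

from sklearn.ensemble import RandomForestClassifier

# Illustrative features per alert: [miss_distance_km, relative_speed_km_s,
#                                   days_to_closest_approach, log10_collision_prob]
X = [
    [0.4, 7.5, 2.5, -4.2],
    [5.0, 3.1, 2.8, -6.9],
    [0.2, 9.8, 2.1, -3.8],
    [8.3, 1.9, 2.6, -7.5],
]
y = [1, 0, 1, 0]  # 1 = operators ultimately conducted an avoidance manoeuvre

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

new_alert = [[0.6, 8.0, 2.9, -4.5]]
print("manoeuvre likely:", bool(model.predict(new_alert)[0]))

In practice the alert history is far richer and heavily imbalanced, which is exactly why Flohrer points to specialised approaches for small data sets as the next step.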

"So far, we have automated everything that would require an expert brain to be awake 24/7 to respond to and follow up the collision alerts," said Krag. "Making the ultimate decision whether to conduct the avoidance manoeuvre or not is the most complex part to be automated and we hope to find a solution to this problem within the next few years."

Ultimately, Densing added, the global community should work together to create a collision avoidance system similar to modern air-traffic management, which would work completely autonomously without the humans on the ground having to communicate.

"In air traffic, they are a step further," Densing said. "Collision avoidance manoeuvres between planes are decentralised and take place automatically. We are not there yet, and it will likely take a bit more international coordination and discussions."

Not only are scientific satellites at risk of orbital collisions, but spacecraft like SpaceX's Crew Dragon could be affected as well. Recently, Crew Dragon Endeavour, with four astronauts on board, reportedly came dangerously close to a small piece of debris on Saturday, April 24, during its cruise to the International Space Station. The collision alert forced the spacefarers to interrupt their leisure time, climb back into their space suits and buckle up in their seats to brace for a possible impact.

According to ESA, about 11,370 satellites have been launched since 1957, when the Soviet Union successfully orbited a beeping ball called Sputnik. About 6,900 of these satellites remain in orbit, but only 4,000 are still functioning.



Podcast: AI finds its voice – MIT Technology Review

Posted: at 8:10 pm

Today's voice assistants are still a far cry from the hyper-intelligent thinking machines we've been musing about for decades. And it's because that technology is actually the combination of three different skills: speech recognition, natural language processing, and voice generation.

Each of these skills already presents huge challenges. In order to master just the natural language processing part? You pretty much have to re-create human-level intelligence. Deep learning, the technology driving the current AI boom, can train machines to become masters at all sorts of tasks. But it can only learn one at a time. And because most AI models train their skill set on thousands or millions of existing examples, they end up replicating patterns within historical data, including the many bad decisions people have made, like marginalizing people of color and women.

Still, systems like the board-game champion AlphaZero and the increasingly convincing fake-text generator GPT-3 have stoked the flames of debate regarding when humans will create an artificial general intelligence: machines that can multitask, think, and reason for themselves. In this episode, we explore how machines learn to communicate, and what it means for the humans on the other end of the conversation.

This episode was produced by Jennifer Strong, Emma Cillekens, Anthony Green, Karen Hao, and Charlotte Jee. We're edited by Michael Reilly and Niall Firth.

[TR ID]

Jim: I don't know if it was AI. If they had taken the recording of something he had done... and were able to manipulate it... but I'm telling you, it was my son.

Strong: The day started like any other for a man... we're going to call Jim. He lives outside Boston.

And by the way... he has a family member who works for MIT.

We're not going to use his last name because they have concerns about their safety.

Jim: It was a Tuesday or Wednesday morning, nine o'clock. I'm deep in thought working on something.

Strong: That is... until he received this call.

Jim: The phone rings and I pick it up and it's my son. And he is clearly agitated. This, this kid's a really chill guy but when he does get upset, he has a number of vocal mannerisms. And this was like, Oh my God, he's in trouble.

And he basically told me, look, I'm in jail, I'm in Mexico. They took my phone. I only have 30 seconds. Um, they said I was drinking, but I wasn't and people are hurt. And look, I have to get off the phone, call this lawyer and it gives me a phone number and has to hang up.

Strong: His son is in Mexico and there's just no doubt in his mind it's him.

Jim: And I gotta tell you, Jennifer, it, it was him. It was his voice. It was everything. Tone. Just these little mannerisms, the, the pauses, the gulping for air, everything that you could imagine.

Strong: His heart is in his throat...

Jim: My hair standing on edge

Strong: So, he calls that phone number. A man picks up and offers more details on what's going on.

Jim: Your son is being charged with hitting this car. There was a pregnant woman driving whose arm was broken. Her daughter was in the back seat... is in critical condition and they are, um, they booked him with driving under the influence. We don't think that he has done that. This is we've, we've come across this a number of times before, but the most important thing is to get him out of jail, get him safe, as fast as possible.

Strong: Then the conversation turns to money. He's told bail has been set and he needs to put down ten percent.

Jim: So as soon as he started talking about money, you know, the, the flag kind of went up and I said, excuse me, is there any chance that this is a scam of some sort? And he got really kind of, um, irritated. He's like, Hey, you called me. Look, I find this really offensive that you're accusing me of something. And then my heart goes back in my throat. I'm like, this is the one guy who's between my son and even worse jail. So I backtracked

[Music]

My wife walks in 10 minutes later and says, well, you know, I was texting with him late last night. Like this is around the time probably that he would have been arrested and jailed. So, of course we text him, he's just getting up. He's completely fine.

Strong: He's still not sure how someone captured the essence of his son's voice. But he has some theories...

Jim: They had to have gotten a recording of something when he was upset. That's the only thing that I can say, cause they couldn't have mocked up some of these things that he does. They couldn't guess at that. I don't think, and so they, I think they had certainly some raw material to work with and then what they did with it from there. I don't know.

Strong: And it's not just Jim who's unsure. We have no idea whether AI had anything to do with this.

But, the point is, we now live in a world where we also can't be sure that it didn't.

It's incredibly easy to fake someone's voice with even a few minutes of recordings, and teenagers like Jim's son? They share countless recordings through social media posts and messages.

Jim: I was quite impressed with how good it was. Um, like I said, I'm not easily fooled and man, they had it nailed. So, um, just caution.

Strong: I'm Jennifer Strong, and this episode we look at what it takes to make a voice.

[SHOW ID]

Zeyu Jin: You guys have been making weird stuff online.

Strong: Zeyu Jin is a research scientist at Adobe. This is him speaking at a company conference about five years ago, showing how software can rearrange the words in this recording.

Key: I jumped on the bed and I kissed my dogs and my wife... in that order.

Zeyu: So how about we mess with who he actually kissed. // Introducing Project VoCo. Project VoCo allows you to edit speech in text. So let's bring it up. So I just load this audio piece in VoCo. So as you can see we have the audio waveform and we have the text under it. //

So what do we do? Copy paste. Oh! Yeah its done. Lets listen to it.

Key: And I kissed my wife and my dogs.

Zeyu: Wait, there's more. We can actually type something that's not here.

Key: And I kissed Jordan and my dogs.

Strong: Adobe never released this prototype but the underlying technology keeps getting better.

For example, here's a computer-generated fake of podcaster Joe Rogan from 2019. It was produced by Square's AI lab, called Dessa, to raise awareness about the technology.

Rogan: 10-7 Friends, I've got something new to tell all of you. Ive decided to sponsor a hockey team made up entirely of chimps.

Strong: While it sounds like fun and games, experts warn these artificial voices could make some types of scams a whole lot more common. Things like what we heard about earlier.

Mona Sedky: Communication focused crime has historically been lower on the totem pole.

Strong: That's federal prosecutor Mona Sedky speaking last year at the Federal Trade Commission about voice cloning technologies.

Mona Sedky: But now with the advent of things like deep fake video now deep fake audio you you can basically have anonymizing tools and be anywhere on the internet you want to be. anywhere in the world and communicate anonymously with people. So as a result there has been an enormous uptick in communication focused crime.

Balasubramaniyan: But imagine if you as a CFO or chief controller gets a phone call that comes from your CEO's phone number.

Strong: And this is Pindrop Security CEO Vijay Balasubramaniyan at a security conference last year.

Balasubramaniyan: It's completely spoofed so it actually uses your address book, and it shows up as your CEO's name... and then on the other end you hear your CEO's voice with a tremendous amount of urgency. And we are starting to see crazy attacks like that. There was an example that a lot of press media covered, which is a $220,000 wire that happened because a CEO of a UK firm thought he was talking to his parent company so he then sent that money out. But we've seen as high as $17 million go out the door.

Strong: And the very idea of fake voices... can be just as damaging as a fake voice itself. Like when former president Donald Trump tried to blame the technology for some offensive things he said that were caught on tape.

But like any other tech, it's not inherently good or bad. It's just a tool... and I used it in the trailer for season one to show what the technology can do.

Strong: If seeing is believing...

How do we navigate a world where we can't trust our eyes... or ears?

And so you know... what you're listening to... It's not just me speaking. I had some help from an artificial version of my voice filling in words here and there.

Meet synthetic Jennifer.

Synthetic Jennifer: Hi there, folks!

Strong: I can even click to adjust my mood

Synthetic Jennifer: Hi there.

Strong: Yeah, let's not make it angry...

Strong: In the not-so-distant future this tech will be used in any number of ways... for simple tweaks to pre-recorded presentations... even to bring back the voices of animated characters from a series.

In other words, artificial voices are here to stay. But they haven't always been so easy to make, and I called up an expert whose voice might sound familiar...

Bennett: How does this sound? Um, maybe I could be a little more friendly. How are you?

Hi, I'm Susan C. Bennett, the original voice of Siri.

Well, the day that Siri appeared, which was October 4, 2011, a fellow voice actor emailed me and said, Hey, we're playing around with this new iPhone app, isn't this you? And I said, what? I went on the Apple site and listened... and yep. That was my voice. [chuckles]

Strong: You heard that right. The original female voice that millions associate with Apple devices? Had no idea. And she wasnt alone. The human voices behind other early voice assistants were also taken by surprise.

Bennett: Yeah, it's been an interesting thing. It was an adjustment at first as you can imagine, because I wasn't expecting it. It was a little creepy at first, I'll have to say, I never really did a lot of talking to myself as Siri, but gradually I got accepting of it and actually it ended up turning into something really positive so

Strong: To be clear, Apple did not steal Susan Bennett's voice. For decades, she's done voice work for companies like McDonald's and Delta Airlines, and years before Siri came out she did a strange series of recordings that fueled its development.

Bennett: In 2005, we couldn't have imagined something like Siri or Alexa. And so all of us, I've talked to other people who've had the same experience, who have been a virtual voice. You know we just thought we were doing just generic phone voice messaging. And so when suddenly Siri appeared in 2011, it's like, I'm who, what, what is this? So, it was a genuine surprise, but I like to think of it as we were just on the cutting edge of this new technology. So, you know, I choose to think of it as a very positive thing, even though, we, none of us, were ever paid for the millions and millions of phones that our voices are heard on. So that's, that's a downside.

Strong: Something else that's awkward... she says Apple never acknowledged her as the American voice of Siri. That's despite becoming an accidental celebrity... reaching millions.

Bennett: The only actual acknowledgement that I've ever had is via Siri. If you ask Siri "Who is Susan Bennett?" she'll say, I'm the original voice of Siri. Thanks so much, Siri. Appreciate it.

Strong: But it's not the first time she's given her voice to a machine.

Bennett: In the late 70s when they were introducing ATMs I like to say it was my first experience as a machine, and you know, there were no personal computers or anything at that time and people didn't trust machines. They wouldn't use the ATMs because they didn't trust the machines to give them the right money. They, you know, if they put money in the machine they were afraid they'd never see it again. And so a very enterprising advertising agency in Atlanta at the time called McDonald and Little decided to humanize the machine. So they wrote a jingle and I became the voice of Tilly the all-time teller and then they ultimately put a little face on the machine.

Strong: The human voice helps companies build trust with consumers...

Bennett: There are so many different emotions and meanings that we get across through the sound of our voices rather than just in print. That's why I think emojis came up because you can't get the nuances in there without the voice. And so I think that's why voice has become such an important part of technology.

Strong: And in her own experience, interactions with this synthetic version of her voice have led people to trust and confide in her... to call her a friend, even though they've never met her.

Bennett: Well, I think the oddest thing about being the voice of Siri, to me, is when I first revealed myself, it was astounding to me how many people considered Siri their friend or some sort of entity that they could really relate to. I think they actually in many cases think of her as human.

Strong: It's estimated the global market for voice technologies will reach nearly 185 billion dollars this year... and AI-generated voices? They're a game changer.

Bennett: You know, after years and years of working on these voices, it's really, really hard to get the actual rhythm of the human voice. And I'm sure they'll probably do it at some point, but you will notice even to this day, you know, you'll listen to Siri or Alexa or one of the others and they'll be talking along and it sounds good until it doesn't. Like, Oh, I'm going to the store. You know, there's some weirdness in the rhythmic sense of it.

Strong: But even once human-like voices become commonplace... she's not entirely sure that will be a good thing.

Bennett: But you know, the advantage for them is they don't really have to get along with Siri. They can just tell Siri what to do if they don't like what she says, they can just turn it off. So it is not like real human relations. It's like maybe what people would like human relations to be. Everybody does what I want. (laughter) Then everybody's happy. Right?

Strong: Of course, voice assistants like Siri and Alexa aren't just voices. Their capabilities come from the AI behind the scenes too.

It's been explored in science fiction films like this one, called Her, about a man who falls in love with his voice assistant.

Theodore: How do you work?

Samantha (AI): Well... Basically I have intuition. I mean.. The DNA of who I am is based on the millions of personalities of all the programmers who wrote me, but what makes me me is my ability to grow through my experiences. So basically in every moment I'm evolving, just like you.

Strong: But today's voice assistants are a far cry from the hyper-intelligent thinking machines we've been musing about for decades.

And it's because that technology... is actually many technologies. It's the combination of three different skills... speech recognition, natural language processing and voice generation.

Speech recognition is what allows Siri to recognize the sounds you make and transcribe them into words. Natural language processing turns those words into meaning... and figures out what to say in response. And voice generation is the final piece... the human element... that gives Siri the ability to speak.
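To make that three-stage split concrete, here is a stubbed Python sketch of the pipeline. The stage functions are placeholders invented for illustration, not any vendor's real API; in a working assistant each one would be backed by its own trained model.

def recognize_speech(audio: bytes) -> str:
    """Speech recognition: audio in, transcript out (stubbed)."""
    return "what is the weather tomorrow"

def understand(transcript: str) -> dict:
    """Natural language processing: map the words to an intent (stubbed)."""
    if "weather" in transcript:
        return {"intent": "get_forecast", "when": "tomorrow"}
    return {"intent": "unknown"}

def respond(meaning: dict) -> str:
    """Decide what to say back before handing the text to voice generation."""
    if meaning["intent"] == "get_forecast":
        return "Tomorrow looks sunny with a high of 72."
    return "Sorry, I didn't catch that."

def speak(text: str) -> bytes:
    """Voice generation: text in, synthesized audio out (stubbed waveform)."""
    return text.encode("utf-8")  # stand-in for real audio synthesis

reply_audio = speak(respond(understand(recognize_speech(b"..."))))
print(reply_audio)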

Each of these skills is already a huge challenge... In order to master just the natural language processing part? You pretty much have to re-create human-level intelligence.

And we're nowhere near that. But we've seen remarkable progress with the rise of deep learning helping Siri and Alexa be a little more useful.

Metz: What people may not know about Siri is that the original technology was something different.

Strong: Cade Metz is a tech reporter for the New York Times. His new book is called Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World.

Metz: The way that Siri was originally built... You had to have a team of engineers, in a room, at their computers and piece by piece, they had to define with computer code how it would recognize your voice.

Strong: Back then... engineers would spend days writing detailed rules meant to show machines how to recognize words and what they mean.

And this was done at the most basic level, often working with just snippets of voice at a time.

Just imagine all the different ways people can say the word hello or all the ways we piece together sentences explaining why time flies or how some verbs can also be nouns.

Metz: You can never piece together everything you need, no matter how many engineers you have, no matter how rich your company is. Defining every little thing that might happen when someone speaks into their iPhone... You just don't have enough person-power to build everything you need to build. It's just too complicated.

Strong: Neural networks made that process a whole lot easier. They simply learn by recognizing patterns in data fed into the system.

Metz: You take that human speech You give it to the neural network And the neural network learns the patterns that define human speech. That way it can recreate it without engineers having to define every little piece of it. The neural network literally learns the task on its own. And that's the key change... is that a neural network can learn to recognize what a cat looks like, as opposed to people having to define for the machine what a cat looks like.
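The contrast with the hand-written-rules era fits in a few lines: instead of coding how to spot an intent, you hand a small network labelled examples and let it find the patterns. The Python sketch below uses scikit-learn; the tiny phrase list and labels are invented purely for illustration.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

examples = [
    "set an alarm for seven", "wake me up at six tomorrow",
    "what's the weather like", "will it rain this afternoon",
    "play some jazz", "put on my running playlist",
]
labels = ["alarm", "alarm", "weather", "weather", "music", "music"]

# No rules about which words mean what: the network infers that from examples.
model = make_pipeline(
    CountVectorizer(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(examples, labels)

print(model.predict(["is it going to rain", "wake me at eight"]))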

Strong: But even before neural networks, tech companies like Microsoft aimed to build systems that could understand the everyday way people write and talk.

And in 1996, Microsoft hired a linguist, Chris Brocket... to begin work on what they called natural-language AI.
