
Category Archives: Ai

How Nvidia is harnessing AI to improve predictive maintenance – VentureBeat

Posted: March 31, 2022 at 2:36 am



The rapidly growing sectors of edge computing and the industrial metaverse were targeted by new technology developments released by Nvidia last week at its GTC 2022 conference, including the Isaac Nova Orin, the company's latest computing and sensor architecture powered by Nvidia Jetson AGX Orin hardware.

Nvidia's main focus is pursuing a tech-stack-based approach, starting with new silicon, to help manufacturers make sense of the massive amount of asset, machinery, and tools data they generate. In addition, predictive maintenance is core to many organizations' Maintenance, Repair, and Overhaul (MRO) initiatives.

CEO Jensen Huang said during his keynote that "AI [artificial intelligence] data centers process mountains of continuous data to train and refine AI models." But, Huang continued, "raw data comes in, is refined, and intelligence goes out: companies are manufacturing intelligence and operating giant AI factories."

Getting predictive maintenance, repair, and overhaul (MRO) right is a complex, data-intensive challenge for any business that relies on assets to serve customers. MRO systems have proven effective in managing the life cycle of machinery, assets, tools, and equipment. However, they haven't been able to decipher, in real time, the massive amount of data that discrete and process manufacturers produce every day.

As a result, IoT Analytics predicts that the global predictive maintenance market will expand from $6.9 billion in 2021 to $28.2 billion by 2026. Edge computing architectures, more contextually intelligent sensors, and advances in AI and machine learning (ML) architectures, including Nvidias Isaac Nova Orin, are combining to drive greater adoption across asset-intensive businesses.

IoT Analytics advises that the key performance indicator to watch is how effective predictive maintenance solutions are at reducing unplanned operational equipment downtime.

Not knowing what's in that real-time data slows down how fast manufacturers and services companies can innovate and respond, further driving the demand for AI-based predictive maintenance solutions. Unlocking the insights hidden in real-time asset performance and maintenance data, whether from jet engines, multi-ton production equipment, or robots, isn't possible for many enterprises today.

Nvidia's announcement of the Isaac Nova Orin architecture and enhanced edge computing support is noteworthy because it's purpose-built for the many data challenges predictive maintenance poses. The aircraft maintenance and MRO process is a perfect example, notable for its unpredictable process times and material requirements. As a result, airlines and their services partners rely on massive time and inventory buffers to alleviate risk, which further clouds when a jet or any other asset will be available.

Nvidia has identified an opportunity in edge computing to update legacy tech stacks that have long lacked support for maintenance or asset performance management with a new AI-driven tech stack that expands their total available market.

As a result, Nvidia is doubling down on edge computing efforts. Approximately one of every four sessions presented during the company's GTC 2022 event mentioned the concept. CEO Jensen Huang's keynote also underscored how edge computing is a core use case for the future of the company's architectures.

IoT and IIoT sensors excel at capturing preventative maintenance data in real time from machinery, production, and other large-scale assets. AI- and ML-based modeling and analysis then happen in the cloud.

For large-scale data sets and models, latency becomes a factor in how quickly the data delivers insights. That's where edge computing comes in and why it's predicted to see explosive growth in the near future. Gartner predicts that by 2023, more than 50% of all data analysis by deep neural networks (DNNs) will be at the point of capture in an edge computing network, soaring from less than 5% in 2019. And by year-end 2023, 50% of large enterprises will have a documented edge computing strategy, compared to less than 5% in 2020. As a result, the worldwide edge computing market will reach $250.6 billion in 2024, attaining a compound annual growth rate (CAGR) of 12.5% between 2019 and 2024.

Of the many sessions at GTC 2022 that included edge computing, one specifically grabbed attention in this area: "Automating Industrial Inspection with Deep Learning and Computer Vision." The presentation provided an overview of how edge computing can improve manufacturing performance with real-time insights and alerts.

Real-time production and process data interpreted at the edge is proving effective in predicting machinery repair and refurbishment rates already.

Edge computing-based models successfully predicted yield rates for the resin class and machine combination.
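To make the edge-inference idea concrete, below is a minimal, hypothetical sketch of the kind of check an edge device could run locally on a stream of sensor readings before anything is sent to the cloud. It is not Nvidia's software; the rolling window, z-score threshold, and simulated readings are all illustrative assumptions.

```python
# Toy edge-side anomaly check for predictive maintenance (illustrative only).
# Flags a sensor reading that deviates sharply from recent history, so a
# maintenance alert can be raised locally without a cloud round-trip.
import random
from collections import deque
from statistics import mean, stdev

WINDOW = 50       # number of recent readings kept on-device (assumption)
THRESHOLD = 3.0   # z-score above which a reading is flagged (assumption)
history = deque(maxlen=WINDOW)

def check_reading(value):
    """Return True if `value` looks anomalous relative to recent readings."""
    anomalous = False
    if len(history) == WINDOW:
        mu, sigma = mean(history), stdev(history)
        anomalous = sigma > 0 and abs(value - mu) / sigma > THRESHOLD
    history.append(value)
    return anomalous

# Simulated vibration sensor: steady around 1.0, with one spike at step 180.
random.seed(1)
for step in range(200):
    reading = 1.0 + random.gauss(0, 0.05) + (2.0 if step == 180 else 0.0)
    if check_reading(reading):
        print(f"step {step}: possible fault, reading={reading:.2f}")
```

Real deployments would run a trained model rather than a fixed threshold, but the principle is the same: the decision happens at the point of capture, and only the alert needs to travel upstream.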

Nvidia sees the opportunity to expand its total available market with an integrated platform aimed at streamlining predictive maintenance. Today, many manufacturers and service organizations struggle to gain the insights they need to reduce downtimes, further expanding the total available market.

For many providers that sell the time their machinery and assets are available, predictive maintenance and MRO are central to their business models.

As asset-heavy service industries, including airlines and others, face higher fuel costs and more challenges in operating profitably, AI-based predictive maintenance will become the new technology standard.

Nvidia's decision to concentrate architectural investments in edge computing to drive predictive maintenance anticipates where the market is going.



The LinkedIn controversy over AI-generated accounts, explained – TRT World

Posted: at 2:36 am

The technology primarily used to spread misinformation has now crept into the corporate world to ramp up sales, a new investigation finds.

When Renee DiResta, a Stanford Internet Observatory researcher, received a software sales pitch on LinkedIn, she didn't know that it would lead her down a rabbit hole of over 10,000 fake corporate accounts on LinkedIn.

With knowledge of information systems and how narratives spread, DiResta's trained eye was quick to notice something was not quite right: the profile picture of the sender, Keenan Ramsey, looked off. Her eyes were centred, she was missing an earring in one ear, and some of her hair seemed to blur into the background.

The researcher, along with her colleague Josh Goldstein, began digging into Ramsey's profile only to find that she was not a real person, and that thousands of other accounts on the website, which appeared to be generated by artificial intelligence (AI) technology, didn't exist in real life either.

Who created the profiles?

An investigation by NPR, the public radio network of the United States, found that it is a tactic now being deployed by companies on LinkedIn to ramp up their sales.

When the Stanford researcher DiResta responded to an AI-generated salesperson's message, she was finally contacted by a real employee to continue the conversation.

NPR says the AI-generated profiles allow companies to reach more potential customers without hitting LinkedIn's message limit. It also avoids the need to hire more sales staff to reach customers.

The investigation spotted more than 70 businesses that used fake profiles. Several companies said they hired third party marketers to help with sales but they had not authorised any use of AI-created profile photos and were surprised by the findings.

While the investigation couldn't determine who authorised the use of fake profiles to send messages to users on the website, nor find any illegal activity, it did conclude that companies' use of fake profiles illustrates how technology used to spread misinformation and propaganda has now made its way into the corporate world.

"It's not a story of mis-[information] or dis-[information], but rather the intersection of a fairly mundane business use case w/AI technology [sic], and the resulting questions of ethics & expectations," the Stanford researcher DiResta said of the investigation in a tweet thread.

"What are our assumptions when we encounter others on social networks? What actions cross the line to manipulation?" she asked.

The researchers also notified LinkedIn about their findings. The company said it removed the fake accounts for breaking its policies after an investigation.

"Our policies make it clear that every LinkedIn profile must represent a real person. We are constantly updating our technical defences to better identify fake profiles and remove them from our community, as we have in this case," LinkedIn spokesperson Leonna Spilman said in a statement.

"At the end of the day, it's all about making sure our members can connect with real people, and we're focused on ensuring they have a safe environment to do just that."

Trustworthy faces

The fake profiles on the website, or elsewhere in the online sphere, are not easy to detect. The investigation says the fake salesperson profiles on LinkedIn were likely created by a generative adversarial network, or GAN, a technology that keeps improving. Since the technique's introduction in 2014, it has been analysing datasets obtained from pictures of real people online in order to create ever more realistic images.
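The adversarial idea behind GANs can be illustrated with a toy example. The sketch below is a minimal illustration of the training loop on one-dimensional data, assuming PyTorch is available; it is not the system that generated the LinkedIn photos, and the network sizes and data are made up.

```python
# Toy GAN: a generator learns to produce samples from N(4, 1) by trying to
# fool a discriminator that separates real samples from generated ones.
import torch
import torch.nn as nn

torch.manual_seed(0)

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))   # generator
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))   # discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) + 4.0          # "real" data, standing in for photos
    noise = torch.randn(64, 8)
    fake = G(noise)

    # Train the discriminator to tell real samples from generated ones.
    opt_d.zero_grad()
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Train the generator to make the discriminator label its output as real.
    opt_g.zero_grad()
    g_loss = loss_fn(D(G(noise)), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

print(G(torch.randn(5, 8)).detach().squeeze())  # samples should cluster near 4.0
```

Systems that generate photorealistic faces use much larger convolutional networks and huge image datasets, but the real-versus-fake tug-of-war is the same.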

"If you ask the average person on the internet, 'Is this a real person or synthetically generated?' they are essentially at chance, (relying on luck)" said Hany Farid, an expert in digital media forensics at the University of California, Berkeley, who co-authored a study with Sophie Nightingale of Lancaster University.

Farid's study previously found that AI-generated faces were judged to be more trustworthy than real faces.

Some methods may help regular internet users spot such AI-generated online content. One of them is V7 Labs' Google Chrome extension tool, which helps users spot fake profiles.

However, many people are unlikely to even suspect that the profiles they come across may be fake.

Farid said he finds the proliferation of AI-generated content worrying, not just the still images but also the video and audio content. He warned that it could foreshadow a new era of online deception.

Source: TRT World


AWS teams with THREAD for AI-enabled clinical trials management – Healthcare IT News

Posted: at 2:36 am

THREAD, which develops technology and offers consulting services for decentralized clinical trials, announced a new collaboration this week with Amazon Web Services.

WHY IT MATTERS
AWS will help develop new enhancements for the THREAD platform, bringing scalable automation and built-in AI to enable faster and more efficient trials through higher quality data capture across the lifecycle of a clinical study.

In addition to improving access for research participants, the companies say they hope the collaboration will speed up the ability to offer and initiate co-created and configured trials by reducing the start-up time to onboard customers by up to 30%.

Another goal is to enable customers to reduce inefficiencies by 30% and achieve up to 25% cost savings when pre-completing data, significantly reducing data capture and removing source data verification.

The hope is to provide a more comprehensive view of participant data across studies, with enhanced security, AI support and operational controls, and also to help customers more precisely assess studies' success by enabling real-time visibility into richer data streams and real-time grades on study performance.

THREAD is working with AWS Professional Services to design advanced machine learning architecture and AI models to automate processes for real-time data capture, auto-populating data workflows and more.

THE LARGER TREND
There's been big momentum for AI and machine learning in clinical trial management, especially since the pandemic, in the U.S. and around the world.

This past October, Cerner launched Enviza, a new operating unit focused on innovating new approaches to automated data management and expanding participation in clinical trials.

That same month, we offered an inside look at how Intel and ConsenSys Health are combining blockchain and AI for clinical trials management.

ON THE RECORD
"The breadth and depth of AWS's machine learning and cloud capabilities will help support THREAD teams as they work to automate processes, reduce inefficiencies, and monitor and support clinical trials," said Dan Sheeran, general manager, healthcare and life sciences at Amazon Web Services, in a statement.

"In collaboration with Amazon Web Services, we are further scaling our DCT platform with next-level automation, AI/ML offerings, and optimized features focused to meet the evolving needs of our customers, research sites, and participants," added THREAD CEO John Reites.



For AI assistants to move forward, Siri and Alexa need to die – The Next Web

Posted: at 2:36 am

It's never easy saying goodbye. But it's obvious that the time has come. We need to ditch big tech's virtual assistants and calmly demand a little more autonomy in our AI.

Up front: The dream has always been to make personal assistants accessible to everyone. Since most of us can't afford our own human assistant, big tech decided to combine chatbots and natural language processing (NLP) to create a virtual version of the real thing.

Billions of people use these AI-powered tools every day. Whether it's Siri on iPhone, Google Assistant on Android, or Alexa on Amazon products, there's a good chance at least one of them has become a part of your everyday life.

So why on Earth would anyone want to get rid of them? Because you deserve so much better.

Background: Virtual assistants were supposed to evolve over time. Yet all we've seen in the past five years is fine-tuning and tweaks. Back in 2018, Google Assistant sometimes struggled to understand me. Now it usually catches what I'm saying.

And as nice as it is to use voice-control to play music, turn the lights on, and send a text message, AI-powered voice assistants are too busy collecting data and pretending to have agency to do anything truly useful.

The root of the problem is that, unlike a human assistant with an NDA, you can't trust big tech's AI.

The reality: Current virtual assistants all live on servers. The companies who build and train the AI models that power them use your data to make them better. It feels like a win-win because the more you use your virtual assistants, the better they become for everyone.

But Google, Amazon, Apple, and all the others use the data you generate when you use your virtual assistants to train AI models across their respective companies.

The AI models powering virtual assistants aren't designed to provide the best possible assistant experience for users; they're designed to harvest data. They're biased towards features and capabilities that funnel the most useful data upstream. Essentially, they're Candy Crush. But instead of your attention, they want your data.

And, because these AI assistants have to service billions of humans across millions of possible linguistic, cultural, software, hardware, and networking platforms, there's no incentive for big tech to build models that conform to individual users' needs. They do what they do, and if that works for you, great. If not: you can choose not to use them.

But that's not how human assistants work. A good human assistant knows how to focus on a client and adapt to their needs.

The solution: Make virtual assistants personal. There's nothing stopping big tech, or an enterprising startup, from building AI systems that operate completely offline.

When the first generation of modern virtual assistants began showing up on smart speakers and flagship phones, they were relaying most of their processes from the cloud. Now, the majority of these devices have onboard AI chips complementing their processors.

At this point, we could have virtual assistants baked into our devices capable of performing every function they currently can, without the need to send personally identifiable data to a remote server.

Imagine, if you will, an open-source neural network built on self-supervised learning algorithms. Once you installed it on your device, it would essentially function as a medium between you and the digital world.

Assuming you were able to trust the AI, you could essentially give it power of attorney over your digital affairs. All it would take is a private networking protocol running through blockchain-based authentication.

And, most importantly of all, we could ditch the silly human personifications for virtual assistants. Without the need to go through the rigmarole of summoning a specific assistant, you could just talk to your gadgets, software, and web browser like the objects they are. TV on. TV off.
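A fully offline assistant along those lines could be as simple as on-device keyword matching for a handful of commands. The sketch below is a hypothetical illustration; the device names and command phrases are made up, and it is not a proposal for any vendor's product.

```python
# Toy offline command handler: no cloud, no persona, just direct device control.
COMMANDS = {
    ("tv", "on"): "switching TV on",
    ("tv", "off"): "switching TV off",
    ("lights", "on"): "turning lights on",
    ("lights", "off"): "turning lights off",
}

def handle(utterance: str) -> str:
    """Match a short spoken or typed command against the known device actions."""
    words = set(utterance.lower().split())
    for (device, action), response in COMMANDS.items():
        if device in words and action in words:
            return response
    return "command not recognized"

print(handle("TV on"))       # -> switching TV on
print(handle("lights off"))  # -> turning lights off
```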

As weird as it is to imagine in 2022, the only way for AI assistant technology to move forward is to kill the virtual-person-as-a-service paradigm and replace it with one where assistance is platformed through privacy and trust.


Intel’s XeSS AI upscaling won’t be available until sometime in ‘early summer’ – The Verge

Posted: at 2:36 am

Intel's first Arc GPUs are launching today, but one of the biggest features for the new discrete graphics cards will be absent when the first Arc-powered laptops arrive: the company's XeSS AI-powered upscaling technology, which Intel says won't be available until sometime in early summer.

XeSS is meant to compete with other AI-based upscaling techniques, like Nvidia's own Deep Learning Super Sampling (DLSS), and promises to offer players better framerates without compromising on quality. (AMD's FidelityFX Super Resolution aims to offer similar results, too, but doesn't use the same super sampling methods as Intel and Nvidia.)

The goal of XeSS is similar to other upscaling techniques, offering players 4K-quality visuals without having to deal with the far more demanding hardware and power requirements for running actual 4K gameplay in real time (something that only the most powerful and pricey graphics cards like the RTX 3090 or the RX 6800 can really achieve right now).
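The arithmetic behind that trade-off is simple. As a rough sketch (assuming rendering cost scales with the number of shaded pixels per frame, which real engines complicate), the pixel counts alone show why rendering internally at a lower resolution and upscaling to 4K is so much cheaper:

```python
# Rough pixel-count comparison: why rendering at a lower internal resolution
# and upscaling to 4K is so much cheaper than rendering natively at 4K.
resolutions = {"1080p": (1920, 1080), "1440p": (2560, 1440), "4K": (3840, 2160)}
native_4k = resolutions["4K"][0] * resolutions["4K"][1]   # 8,294,400 pixels

for name in ("1080p", "1440p"):
    w, h = resolutions[name]
    pixels = w * h
    print(f"{name}: {pixels:,} shaded pixels per frame, "
          f"{native_4k / pixels:.2f}x fewer than native 4K")
```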

Like Nvidia's DLSS, games have to be specifically optimized to work with XeSS, with Intel touting a list of titles that includes Death Stranding: Director's Cut, Legend of the Tomb Raider, Ghostwire Tokyo, Chorus, Hitman 3, and more. And while all those games are already out, the fact that XeSS won't arrive until the summer means that you won't get Intel's upscaling benefits on them even if you do pick up an Arc-powered laptop now.

The silver lining is that the bulk of Intel's Arc products, including its more powerful laptop GPUs and its long-awaited desktop cards, also won't be arriving until later on in the year. Intel's launch today is just for the company's least powerful Arc 3 discrete graphics for laptops. But hopefully, by the time XeSS does arrive in early summer, Intel will have some more powerful GPUs waiting for it.


Six startups join Digital Catapult to lead the AI net zero era – The Manufacturer

Posted: March 17, 2022 at 3:08 am

The latest group of innovative startups has joined Digital Catapult's Machine Intelligence Garage programme - an artificial intelligence and machine learning accelerator that has supported more than 100 startups to raise a total of £52m in investment.

The most recent cohort of leading-edge startups are focused on solving urgent challenges in the manufacturing, engineering and agriculture sectors; from reducing manufacturing material wastage and cutting emissions, to providing powerful image recognition that helps farmers identify crop disease and use land and resources more sustainably.

From dozens of applications, six early-stage companies that are developing net zero solutions using Artificial Intelligence (AI) and Machine Learning (ML) were selected to join the programme, working alongside experts at Digital Catapult to address the UK industrys biggest sustainability challenges.

Launched in 2017, Machine Intelligence Garage helps early-stage businesses access the computation power and expertise they need to develop and build AI/ML solutions, something that's often inaccessible to startups. The programme has gone from strength to strength and supported rising stars in the AI/ML ecosystem by removing key barriers to innovation.

The next cohort of cutting edge startups are:

Robert Smith, Director of Artificial Intelligence at Digital Catapult, commented: "With Machine Intelligence Garage now well established in the market, we're continuing to see more and more high-growth, high-potential startups applying for the programme. Increasingly, these companies are using powerful artificial intelligence and machine learning technologies to develop solutions to help the UK reach its ambitious net zero goals."

"I believe the speed and scale of AI can play a critical role in addressing global challenges like the climate emergency, and the six startups that make up this latest cohort are just some of the brightest and most interesting innovators leading the charge for planet earth."

Each member of the latest cohort will be pitching their solutions to investors and industry as part of Digital Catapult's FutureScope Showcase: Net Zero, a free-to-attend online event that showcases the trailblazing companies of Digital Catapult's programmes.


CIOs dish on AI and automation strategies that work – Healthcare IT News

Posted: at 3:08 am

Artificial intelligence and machine learning are already making some intriguing and potentially transformative impacts on the way healthcare is delivered, from the exam room to the diagnosis to ongoing care management and beyond.

But it's important too to keep the promise and limitations of automation and augmented intelligence in mind. At HIMSS22, three clinical and IT leaders from major health systems offered some insights into how they're deploying AI and ML from predictive analytics to EHR automation to value-based care and population health management.

Each emphasized that, despite the huge potential, it's still early days.

Importantly, the key is not to get too excited about the technology itself, wishcasting about the wide array of challenges it can solve, but to focus instead on smaller, discrete, achievable use cases, said Jason Joseph, chief digital and information officer at Michigan-based BHSH System.

"I think we've got to look at this area of deep analytics more holistically, with AI being a piece of it but really focusing instead on what problems we're trying to solve, not necessarily the AI," he said.

Dustin Hufford, CIO at New Jersey's Cooper University Health Care, is also taking a cautiously optimistic view, and moving slowly and deliberately in its AI implementations.

"It's certainly something that we really need to think about in terms of the safety around AI and the equity part of it: Are we building our own biases into the software that we're building?"

But there's no denying that "this is really gearing up right now," he said. At Cooper University, "we focus on governance around digital, which includes a lot of our AI technologies that we're going to implement in the next couple of years."

Ensuring C-suite buy in is also key, he said. "How do we engage the highest levels of the organization in the planning and understanding of what we're looking at here? We spent a lot of time last year understanding how the mechanics were going to work, the transparency, and now we're getting into the nitty gritty of it."

Step two, said Hufford, "is to really define what are those exact things when it comes to something like AI? What's the exact thing that we need to measure to make sure we're hitting the mark on this?"

Dr. Nick Patel, chief digital officer at Prisma Health in South Carolina, sees a lot to like when it comes to small-AI use cases like workflow automation.

"We as providers are constantly doing repetitive activities that can be automated over and over again," he said. "I didn't go to medical school to click my way through taking care of patients.

"Medical school is all about gathering information, learning about anatomy, physiology, disease states, and then applying that to humankind in order to get them to their goals and keep them well," he explained. "But when you throw a layer of EHR in there, you lose a lot of that because you're having to snip into how do you get all this information so you can make a good clinical decision?"

At Prisma Health, which is also undergoing a larger digital transformation that Patel will talk about Thursday at HIMSS22, the question is "how do we start to automate those processes, so we're actually using our neurons to actually take care of patients, not trying to figure out the system?"

Bigger-picture, Patel is more excited about AI-enabled analytics for population health.

"A typical physician, their panel is about 1,500 or 2,000 patients. You can't really make a huge impact in the national narrative when it comes to population health, when you're seeing 2,000 patients per year. So what we have to start to think about is we use bigger data."

But it has to be "cleaner data," he added. "You can't layer in machine learning and AI, all these advanced tools until you make sure that the actual data is actually aligned and clear. Because if you do, then you're going to get insights based on false data and that is extremely dangerous."

The main thing here, from a governance standpoint, "is to really understand, what are all your style of data pieces, what are the discrete and non discrete data platforms, and how does that all converge?" said Patel.

"When you think about diabetes and hypertension, what parts can you automate? I would venture to say as high as 80%. Using the right data in order to get the right insights to the providers so they can be empowered to take care of those patients better."

Joseph says he's similarly optimistic about the prospect of AI-empowered value-based care.

"I'm going to differentiate maybe machine learning and predictive analytics from true AI," he said. "There is a huge opportunity there to really start to understand what are the drivers for the population.

"For example, diabetes and hypertension some of those are either under-diagnosed or, if they are diagnosed, there are no interventions. What you need is the ability to surface that stuff, based on the data that's sitting there at rest, surface it and push it forward.

"And that all has to be run through some analytics," he said. "You can do it based on rules you know, if you've got all those rules. And in other cases, you can look at historical patterns, which is where I think you could start to introduce some AI that's just looking at the trends that exist out there using the data you have, which is better than nothing."

"But in all of those situations, what we're really talking about is using augmented intelligence," said Joseph. "What we're talking about is very imprecise AI. At this point, it's going to give you maybe an, 'Eh, start here, start here.' But as we get more advanced with our clean, cleanliness of data, we start capturing more data, we start to get more and more precise to the point where it could become fully automated."

At the moment, he said, "I'd rather know a 60% chance than not know anything. And if I can look at an image and say this thing is 82% likely to have this diagnosis, well, that ought to help these radiologists make a diagnosis. Over time, you could probably get that to 90%, 93%, 95%, 98%."

But soon, the ethical challenges may increase as the technology evolves.

"At some point we'll have an ethical decision about when the computer makes the diagnosis. Then the next step will be for the computers to prescribe medication, or order the procedure."

But "that's going to take us years," said Joseph. "What we need to be doing along the way is making our systems better, making our processes better, making sure the data is cleaner, and introducing these things along the way so that [the models] can learn and be more accurate over time."



AI Used to Fill in Missing Words in Ancient Writings – VOA Learning English

Posted: at 3:08 am

Researchers have developed an artificial intelligence (AI) system to help fill in missing words in ancient writings.

The system is designed to help historians restore the writings and identify when and where they were written.

Many ancient populations used writings, also known as inscriptions, to document different parts of their lives. The inscriptions have been found on materials such as rock, ceramic and metal. The writings often contained valuable information about how ancient people lived and how they structured their societies.

But in many cases, the objects containing such inscriptions have been damaged over the centuries. This left major parts of the inscriptions missing and difficult to identify and understand.

In addition, many of the inscribed objects were moved from areas where they were first created. This makes it difficult for scientists to discover when and where the writings were made.

The new AI-based method serves as a technological tool to help researchers repair missing inscriptions and estimate the true origins of the records.

The researchers, led by Alphabet's AI company DeepMind, call their tool Ithaca. In a statement, the researchers said the system is the first deep neural network that can restore the missing text of damaged inscriptions. A neural network is a machine learning computer system built to act like the human brain.

The findings were recently reported in a study in the publication Nature. Researchers from other organizations including the University of Oxford, Ca' Foscari University of Venice and Athens University of Economics and Business also took part in the study.

The team said it trained Ithaca on the largest collection of data containing Greek inscriptions from the non-profit Packard Humanities Institute in California. Feeding this data into the system is designed to help the tool use past writings to predict missing letters and words in damaged inscriptions.
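As a simple illustration of that prediction task (not Ithaca's actual method, which uses a deep neural network over a far larger corpus), even a toy character-level model can guess a masked letter from its neighbours once it has seen enough examples. The corpus and words below are made-up assumptions.

```python
# Toy illustration of text restoration: fill a masked character using bigram
# counts learned from a tiny "corpus" of inscriptions (not Ithaca's method).
from collections import Counter

corpus = ["αθηναιοι", "αθηναιων", "αθηναιος"]  # assumed toy examples
bigrams = Counter()
for text in corpus:
    for a, b in zip(text, text[1:]):
        bigrams[(a, b)] += 1

def fill_gap(left, right):
    """Pick the character most consistent with its left and right neighbours."""
    candidates = {c for pair in bigrams for c in pair}
    return max(candidates, key=lambda c: bigrams[(left, c)] + bigrams[(c, right)])

print(fill_gap("α", "η"))  # predicts "θ" for the damaged sequence "α_η..."
```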

The researchers reported that in experiments with damaged writings, Ithaca was able to correctly predict missing inscription elements 62 percent of the time. In addition, the tool was 71 percent correct in identifying where the inscriptions first came from. And, the system was able to effectively date writings to within 30 years, the team said.

Yannis Assael is a research scientist with DeepMind who helped lead the study. He said in a statement that Ithaca was designed to support historians to expand and deepen our understanding of ancient history.

When historians work on their own, the success rate for restoring damaged inscriptions is about 25 percent. But when humans teamed up with Ithaca to assist in their work, the success rate jumped to 72 percent, Assael said.

Thea Sommerschield was another lead researcher on the project. She is the Marie Curie Fellow at Ca' Foscari University of Venice. Sommerschield said she hopes systems like Ithaca can unlock the cooperative potential between AI and humans in future restoration work involving important ancient inscriptions.

She said the system had already provided new information to help researchers reexamine important periods in Greek history.

In one case, Ithaca confirmed new evidence presented by historians about the dating of a series of important Greek decrees. The decrees were first thought to have been written before 446/445 BCE. But the new evidence suggested a date in the 420s BCE. Ithaca predicted a date of 421 BCE.

Sommerschield said that the date change may seem small. "But it has significant implications for our understanding of the political history of Classical Athens," she added.

The team is currently working on other versions of Ithaca trained on other ancient languages. DeepMind has launched a free, interactive tool based on the system for use by researchers, educators, museum workers and the public.

I'm Bryan Lynn.

Bryan Lynn wrote this story for VOA Learning English, based on reports from DeepMind, the University of Oxford, the University of Venice and Nature.

We want to hear from you. Write to us in the Comments section, and visit our Facebook page.

_______________________________________________________

artificial intelligence (AI) - n. the development of computer systems with the ability to perform work that normally requires human intelligence

restore - v. to make something good exist again

ceramic - n. objects made by shaping and heating clay

society - n. a large group of people who live in the same country and share the same laws, traditions, etc.

origins - n. the cause of something or where something comes from

potential - n. a possibility when the necessary conditions exist

decree - n. an official order for something

significant - adj. important or noticeable

implication - n. a result or effect


There’s more to AI Bias than biased data, NIST report highlights – YubaNet

Posted: at 3:08 am

As a step toward improving our ability to identify and manage the harmful effects of bias in artificial intelligence (AI) systems, researchers at the National Institute of Standards and Technology (NIST) recommend widening the scope of where we look for the source of these biases beyond the machine learning processes and data used to train AI software to the broader societal factors that influence how technology is developed.

The recommendation is a core message of a revised NIST publication, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence (NIST Special Publication 1270), which reflects public comments the agency received on its draft version released last summer. As part of a larger effort to support the development of trustworthy and responsible AI, the document offers guidance connected to the AI Risk Management Framework that NIST is developing.

According to NIST's Reva Schwartz, the main distinction between the draft and final versions of the publication is the new emphasis on how bias manifests itself not only in AI algorithms and the data used to train them, but also in the societal context in which AI systems are used.

"Context is everything," said Schwartz, principal investigator for AI bias and one of the report's authors. "AI systems do not operate in isolation. They help people make decisions that directly affect other people's lives. If we are to develop trustworthy AI systems, we need to consider all the factors that can chip away at the public's trust in AI. Many of these factors go beyond the technology itself to the impacts of the technology, and the comments we received from a wide range of people and organizations emphasized this point."

Bias in AI can harm humans. AI can make decisions that affect whether a person is admitted into a school, authorized for a bank loan or accepted as a rental applicant. It is relatively common knowledge that AI systems can exhibit biases that stem from their programming and data sources; for example, machine learning software could be trained on a dataset that underrepresents a particular gender or ethnic group. The revised NIST publication acknowledges that while these computational and statistical sources of bias remain highly important, they do not represent the full picture.
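A first-pass check for that kind of computational bias can be as simple as counting how each group is represented in the training data. The following is a minimal, hypothetical sketch with made-up labels, not a NIST-endorsed test:

```python
# Quick representation check on a (hypothetical) training dataset: a group
# that appears rarely is a warning sign for the statistical bias described above.
from collections import Counter

training_labels = ["group_a"] * 900 + ["group_b"] * 100   # assumed example data
counts = Counter(training_labels)

for group, n in counts.items():
    print(f"{group}: {n} examples ({n / len(training_labels):.0%} of the dataset)")
# group_b makes up only 10% of the data, so a model may perform worse for it.
```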

A more complete understanding of bias must take into account human and systemic biases, which figure significantly in the new version. Systemic biases result from institutions operating in ways that disadvantage certain social groups, such as discriminating against individuals based on their race. Human biases can relate to how people use data to fill in missing information, such as a person's neighborhood of residence influencing how likely authorities would consider the person to be a crime suspect. When human, systemic and computational biases combine, they can form a pernicious mixture, especially when explicit guidance is lacking for addressing the risks associated with using AI systems.


To address these issues, the NIST authors make the case for a socio-technical approach to mitigating bias in AI. This approach involves a recognition that AI operates in a larger social context and that purely technically based efforts to solve the problem of bias will come up short.

"Organizations often default to overly technical solutions for AI bias issues," Schwartz said. "But these approaches do not adequately capture the societal impact of AI systems. The expansion of AI into many aspects of public life requires extending our view to consider AI within the larger social system in which it operates."

Socio-technical approaches in AI are an emerging area, Schwartz said, and identifying measurement techniques to take these factors into consideration will require a broad set of disciplines and stakeholders.

"It's important to bring in experts from various fields, not just engineering, and to listen to other organizations and communities about the impact of AI," she said.

NIST is planning a series of public workshops over the next few months aimed at drafting a technical report for addressing AI bias and connecting the report with the AI Risk Management Framework. For more information and to register, visit the AI RMF workshop page.


Run:ai Seeks to Grow AI Virtualization with $75M Round – Datanami

Posted: at 3:08 am

Run:ai, a provider of an AI virtualization layer that helps optimize GPU instances, yesterday announced a Series C round worth $75 million. The funding figures to help the fast-growing company expand its sales reach and further develop the platform.

GPUs are the beating heart of deep learning today, but the limited nature of the computing resource means AI teams are constantly battling to squeeze the most work out of them. That's where Run:ai steps in with its flagship product, dubbed Atlas, which provides a way for AI teams to get more bang for their GPU buck.

"We do for AI hardware what VMware and virtualization did for traditional computing: more efficiency, simpler management, greater user productivity," Ronen Dar, Run:ai's CTO and co-founder, says in a press release. "Traditional CPU computing has a rich software stack with many development tools for running applications at scale. AI, however, runs on dedicated hardware accelerators such as GPUs, which have few tools to help with their implementation and scaling."

Atlas abstracts AI workloads away from GPUs by creating virtual pools where GPU resources can be automatically and dynamically allocated, thereby gaining more efficiency from GPU investments, the company says.

The platform also brings queuing and prioritization methods to deep learning workloads running on GPUs, and applies fairness algorithms to ensure users have an equal chance at getting access to the hardware. The company's software also enables clusters of GPUs to be managed as a single unit, and allows a single GPU to be broken up into fractional GPUs to ensure better allocation.
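The fairness idea can be sketched in a few lines: track how much of the shared pool each team has consumed and hand the next free GPU to whoever has used the least. The snippet below is an illustrative toy, not Run:ai's algorithm or API; the team names and numbers are assumptions.

```python
# Toy fair-share picker: grant the next free GPU to the team that has consumed
# the fewest GPU-hours so far (illustration only, not Run:ai's scheduler).
from dataclasses import dataclass

@dataclass
class Team:
    name: str
    gpu_hours_used: float

def next_team(teams):
    """Return the team with the least accumulated GPU usage."""
    return min(teams, key=lambda t: t.gpu_hours_used)

teams = [Team("vision", 12.0), Team("nlp", 3.5), Team("speech", 8.0)]
print(next_team(teams).name)  # -> nlp
```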

Atlas functions as a plug-in to Kubernetes, the open source container orchestration system. Data scientists can get access to Atlas via integration to IDE tools like Jupyter Notebook and PyCharm, the company says.

The abstraction brings greater efficiency to data science teams who are experimenting with different techniques and trying to find what works. According to a December 2020 Run:ai whitepaper, one customer was able to reduce their AI training time from 46 days to about 36 hours, which represents a 3,000% improvement, the company says.

"With Run:ai Atlas, we've built a cloud-native software layer that abstracts AI hardware away from data scientists and ML engineers, letting Ops and IT simplify the delivery of compute resources for any AI workload and any AI project," Dar continues.

The Tel Aviv company, which was founded in 2018, has experienced a 9x increase in annual recurring revenue (ARR) over the past 12 months, during which time the company's employee count has tripled. The company has also quadrupled its customer base over the past two years. The Series C round, which brings the company's total funding to $118 million, will be used to grow sales as well as to enhance its core platform.

"When we founded Run:ai, our vision was to build the de facto foundational layer for running any AI workload," says Omri Geller, Run:ai CEO and co-founder, in the press release. "Our growth has been phenomenal, and this investment is a vote of confidence in our path. Run:ai is enabling organizations to orchestrate all stages of their AI work at scale, so companies can begin their AI journey and innovate faster."

Run:ai's platform and growth caught the eyes of Tiger Global Management, which co-led the Series C round with Insight Partners, which led the Series B round. Other firms participating in the current round included existing investors TLV Partners and S Capital VC.

"Run:ai is well positioned to help companies reimagine themselves using AI," says Insight Partners Managing Director Lonne Jaffe, who you might remember was the CEO of Syncsort (now Precisely) nearly a decade ago.

"As the Forrester Wave AI Infrastructure report recently highlighted, Run:ai creates extraordinary value by bringing advanced virtualization and orchestration capabilities to AI chipsets, making training and inference systems run both much faster and more cost-effectively," Jaffe says in the press release.

In addition to AI workloads, Run:ai can also be used to optimize HPC workloads.

Related Items:

Optimized Machine Learning Libraries For CPUS Exceed GPU Performance

Optimizing AI and Deep Learning Performance

AI Hypervisor Gets a GPU Boost

