
Category Archives: Artificial Intelligence

Saluki Pride: Jim Nelson makes analytics and artificial intelligence understandable, usable and relevant – This Is SIU – Southern Illinois University

Posted: June 30, 2022 at 9:52 pm

Jim Nelson, an associate professor and coordinator of the analytics program in the School of Analytics, Finance and Economics and the director of the Pontikes Center for Advanced Analytics and Artificial Intelligence, is largely responsible for putting the "analytics" in SIU's College of Business and Analytics, and he's introducing analytics and artificial intelligence to students in relevant ways, according to colleagues and students.

Nelson was instrumental in spearheading the development of both the undergraduate and graduate analytics programs within the college, according to Kevin Sylwester, interim director of the School of Analytics, Finance and Economics. Nelson has reorganized and revitalized the Pontikes Center, too. He delivers complicated analytics and artificial intelligence content to his students in ways that make sense and are accessible, they say.

"It is evident that Dr. Nelson is passionate about the strategic analytics program and the students in it," said Elizabeth Taylor, a student who has taken several of Nelson's classes.

She said his passion for analytics and artificial intelligence is obvious during his lectures, even during online discussions, and that enthusiasm is contagious, even when the topics could be perceived as technical or boring.

"He brings the material to life and makes it relevant with real-world examples," she added, noting that he is empathetic and caring with his students, responsive to their emails, and seeks their feedback on how to make his classes even better.

Get to know Jim Nelson

Name: Jim Nelson

Department/title: School of Analytics, Finance, and Economics in the College of Business and Analytics; analytics program coordinator, associate professor and director of the Pontikes Center for Advanced Analytics and Artificial Intelligence

Years at SIU Carbondale: 17

Give us the elevator pitch for your job.

I create business leaders who can bridge the gap between the massive amounts of data collected by organizations and solutions to real business problems. My research follows suit: I work with real companies that are striving for new ways to solve business problems and create new strategies by combining analytics and artificial intelligence.

What is your favorite part of your job? Learning new stuff. Seriously: in my research and in my teaching, I always have to keep up with the latest and greatest advances in technology and business practice. Things are moving so fast that I have to keep up so that my students have the best preparation possible for making a difference in the real world.

Why did you choose SIU? The College of Business, as it was called at the time, has a world-class faculty and an outstanding reputation. That's what brought me here. What keeps me here are the students and the university leadership. The amazing diversity of backgrounds and experiences really makes my teaching a lot of fun. From first-generation college students to businesspeople who have been working for many years, I'm always learning something. The other part is the university leadership. Most universities are very set in their ways, and it's hard to change. Having the ability to come up with an idea, run with it, and make it a reality is something really rare. The college's pivot to analytics and artificial intelligence was amazingly fast, and how we implemented our new analytics programs was truly wonderful. Far from filling out a form and waiting a few years for an answer, we went from nothing to a set of world-class analytics programs in just a couple of years, making us the first business college in the country to combine analytics and AI. We are now the College of Business and Analytics. That's pretty amazing.

My fondest memory as a child is: Walking the beach on Midway Island and finding glass Japanese fish floats that had washed ashore. I still have those floats, and they are proudly displayed in my home.

My favorite meal is: Peeps. I'm not sure those are food, but they really are great.

If you are a collector, what do you collect and why, and how did you get started? Vintage aircraft instruments and memorabilia. I fly my Cessna 170, where I do some of my best thinking 5,000 feet in the air, and I can't throw anything out. Minerals and geodes. Totally cool-looking. Vintage computer parts. It started as classroom show-and-tell and to mark the evolution of my discipline.

Know a colleague to feature in Saluki Pride? Simply fill out this form.


Taking the guesswork out of dental care with artificial intelligence – MIT News


When you picture a hospital radiologist, you might think of a specialist who sits in a dark room and spends hours poring over X-rays to make diagnoses. Contrast that with your dentist, who in addition to interpreting X-rays must also perform surgery, manage staff, communicate with patients, and run their business. When dentists analyze X-rays, they do so in bright rooms and on computers that aren't specialized for radiology, often with the patient sitting right next to them.

Is it any wonder, then, that dentists given the same X-ray might propose different treatments?

"Dentists are doing a great job given all the things they have to deal with," says Wardah Inam SM '13, PhD '16.

Inam is the co-founder of Overjet, a company using artificial intelligence to analyze and annotate X-rays for dentists and insurance providers. Overjet seeks to take the subjectivity out of X-ray interpretations to improve patient care.

"It's about moving toward more precision medicine, where we have the right treatments at the right time," says Inam, who co-founded the company with Alexander Jelicich '13. "That's where technology can help. Once we quantify the disease, we can make it very easy to recommend the right treatment."

Overjet has been cleared by the Food and Drug Administration to detect and outline cavities and to quantify bone levels to aid in the diagnosis of periodontal disease, a common but preventable gum infection that causes the jawbone and other tissues supporting the teeth to deteriorate.

In addition to helping dentists detect and treat diseases, Overjet's software is also designed to help dentists show patients the problems they're seeing and explain why they're recommending certain treatments.

The company has already analyzed tens of millions of X-rays, is used by dental practices nationwide, and is currently working with insurance companies that represent more than 75 million patients in the U.S. Inam is hoping the data Overjet is analyzing can be used to further streamline operations while improving care for patients.

"Our mission at Overjet is to improve oral health by creating a future that is clinically precise, efficient, and patient-centric," says Inam.

Its been a whirlwind journey for Inam, who knew nothing about the dental industry until a bad experience piqued her interest in 2018.

Getting to the root of the problem

Inam came to MIT in 2010, first for her master's and then her PhD in electrical engineering and computer science, and says she caught the bug for entrepreneurship early on.

"For me, MIT was a sandbox where you could learn different things and find out what you like and what you don't like," Inam says. "Plus, if you are curious about a problem, you can really dive into it."

While taking entrepreneurship classes at the Sloan School of Management, Inam eventually started a number of new ventures with classmates.

"I didn't know I wanted to start a company when I came to MIT," Inam says. "I knew I wanted to solve important problems. I went through this journey of deciding between academia and industry, but I like to see things happen faster and I like to make an impact in my lifetime, and that's what drew me to entrepreneurship."

During her postdoc in the Computer Science and Artificial Intelligence Laboratory (CSAIL), Inam and a group of researchers applied machine learning to wireless signals to create biomedical sensors that could track a person's movements, detect falls, and monitor respiratory rate.

She didn't get interested in dentistry until after leaving MIT, when she changed dentists and received an entirely new treatment plan. Confused by the change, she asked for her X-rays and asked other dentists to have a look, only to receive still another variation in diagnosis and treatment recommendations.

At that point, Inam decided to dive into dentistry for herself, reading books on the subject, watching YouTube videos, and eventually interviewing dentists. Before she knew it, she was spending more time learning about dentistry than she was at her job.

The same week Inam quit her job, she learned about MIT's Hacking Medicine competition and decided to participate. That's where she started building her team and getting connections. Overjet's first funding came from the Media Lab-affiliated investment group the E14 Fund.

"The E14 Fund wrote the first check, and I don't think we would've existed if it wasn't for them taking a chance on us," she says.

Inam learned that a big reason for variation in treatment recommendations among dentists is the sheer number of potential treatment options for each disease. A cavity, for instance, can be treated with a filling, a crown, a root canal, a bridge, and more.

When it comes to periodontal disease, dentists must make millimeter-level assessments to determine disease severity and progression. The extent and progression of the disease determines the best treatment.

"I felt technology could play a big role not only in enhancing the diagnosis but also in communicating with patients more effectively, so they understand and don't have to go through the confusing process I did of wondering who's right," Inam says.

Overjet began as a tool to help insurance companies streamline dental claims before the company began integrating its tool directly into dentists offices. Every day, some of the largest dental organizations nationwide are using Overjet, including Guardian Insurance, Delta Dental, Dental Care Alliance, and Jefferson Dental and Orthodontics.

Today, as a dental X-ray is imported into a computer, Overjets software analyzes and annotates the images automatically. By the time the image appears on the computer screen, it has information on the type of X-ray taken, how a tooth may be impacted, the exact level of bone loss with color overlays, the location and severity of cavities, and more.

The analysis gives dentists more information to talk to patients about treatment options.

"Now the dentist or hygienist just has to synthesize that information, and they use the software to communicate with you," Inam says. "So, they'll show you the X-rays with Overjet's annotations and say, 'You have 4 millimeters of bone loss, it's in red, that's higher than the 3 millimeters you had last time you came, so I'm recommending this treatment.'"

Overjet also incorporates historical information about each patient, tracking bone loss on every tooth and helping dentists detect cases where disease is progressing more quickly.

"We've seen cases where a cancer patient with dry mouth goes from nothing to something extremely bad in six months between visits, so those patients should probably come to the dentist more often," Inam says. "It's all about using data to change how we practice care, think about plans, and offer services to different types of patients."
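A small sketch can make the per-tooth trend tracking described above concrete. Nothing here is Overjet's actual logic: the data layout, the function name, and the 1 mm threshold are all illustrative assumptions.

```python
# Hypothetical sketch of flagging disease progression from per-tooth
# bone-loss history. The 1 mm visit-to-visit threshold is illustrative,
# not a clinical or Overjet value.
def progressing_teeth(history, threshold_mm=1.0):
    """Flag teeth whose bone loss grew by >= threshold between the last two visits."""
    flagged = []
    for tooth, readings in history.items():  # readings in mm, oldest -> newest
        if len(readings) >= 2 and readings[-1] - readings[-2] >= threshold_mm:
            flagged.append(tooth)
    return flagged

history = {"molar_14": [2.0, 3.0, 4.0], "incisor_8": [1.0, 1.2]}
print(progressing_teeth(history))  # → ['molar_14']
```

The point of the sketch is only that historical, per-tooth data turns a single reading into a trend a dentist can act on.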

The operating system of dentistry

Overjet's FDA clearances account for two highly prevalent diseases. They also put the company in a position to conduct industry-level analysis and help dental practices compare themselves to peers.

"We use the same tech to help practices understand clinical performance and improve operations," Inam says. "We can look at every patient at every practice and identify how practices can use the software to improve the care they're providing."

Moving forward, Inam sees Overjet playing an integral role in virtually every aspect of dental operations.

"These radiographs have been digitized for a while, but they've never been utilized because the computers couldn't read them," Inam says. "Overjet is turning unstructured data into data that we can analyze. Right now, we're building the basic infrastructure. Eventually we want to grow the platform to improve any service the practice can provide, basically becoming the operating system of the practice to help providers do their job more effectively."


Deep Dive Into Advanced AI and Machine Learning at The Behavox Artificial Intelligence in Compliance and Security Conference – Business Wire


MONTREAL--(BUSINESS WIRE)--On July 19th, Behavox will host a conference to share the next generation of artificial intelligence in Compliance and Security with clients, regulators, and industry leaders.

The Behavox AI in Compliance and Security Conference will be held at the company HQ in Montreal. With this exclusive in-person conference, Behavox is relaunching its pre-COVID tradition of inviting customers, regulators, AI industry leaders, and partners to its Montreal HQ to deep dive into workshops and keynote speeches on compliance, security, and artificial intelligence.

"We're extremely excited to relaunch our tradition of inviting clients to our offices in order to learn directly from the engineers and data scientists behind our groundbreaking innovations," said Chief Customer Intelligence Officer Fahreen Kurji. "Attendees at the conference will get to enjoy keynote presentations as well as Innovation Paddocks, where you can test drive our latest innovations and also spend time networking with other industry leaders and regulators."

Keynote presentations will cover:

The conference will also feature Innovation Paddocks, where guests will be able to learn more from the engineers and data scientists behind Behavox innovations. At this conference, Behavox will demonstrate its revolutionary new product, Behavox Quantum. There will be test drives and numerous workshops covering everything from infrastructure for cloud orchestration to the AI engine at the core of Behavox Quantum.

What's in it for participants?

Behavox Quantum has been rigorously tested and benchmarked against existing solutions in the market, outperforming the competition by at least 3,000x using new AI risk policies. It provides a holistic security program to catch malicious, immoral, and illegal actors, eliminating fraud and protecting your digital headquarters.

Attendees at the July 19th conference will include C-suite executives from top global banks, financial institutions, and corporations, with many prospects and clients sending entire delegations. Justin Trudeau, the Canadian Prime Minister, will give the commencement speech at the conference in recognition and celebration of the world-leading AI innovations coming out of Canada.

This is a unique opportunity to test drive the product and meet the team behind the innovations as well as network with top industry professionals. Register here for the Behavox AI in Compliance and Security Conference.

About Behavox Ltd.

Behavox provides a suite of security products that help compliance, HR, and security teams protect their company and colleagues from business risks.

Through AI-powered analysis of all corporate communications, including email, instant messaging, voice, and video conferencing platforms, Behavox helps organizations identify illegal, immoral, and malicious behavior in the workplace.

Founded in 2014, Behavox is headquartered in Montreal and has offices in New York City, London, Seattle, Singapore, and Tokyo.

More information about the company is available at https://www.behavox.com/.


Can Artificial Intelligence Be Creative? – Discovery Institute


Image: Lady Ada Lovelace (1815–1852), via Wikimedia Commons.

Editor's note: We are delighted to present an excerpt from Chapter 2 of the new book Non-Computable You: What You Do that Artificial Intelligence Never Will, by computer engineer Robert J. Marks, director of Discovery Institute's Bradley Center for Natural and Artificial Intelligence.

Some have claimed AI is creative. But creativity is a fuzzy term. To talk fruitfully about creativity, the term must be defined so that everyone is talking about the same thing and no one is bending the meaning to fit their purpose. Let's explore what creativity is, and it will become clear that, properly defined, AI is no more creative than a pencil.

Lady Ada Lovelace (1815–1852), daughter of the poet George Gordon, Lord Byron, was the first computer programmer, writing algorithms for a machine that was planned but never built. She also was quite possibly the first to note that computers will not be creative — that is, they cannot create something new. She wrote in 1842 that the computer "has no pretensions whatever to originate anything. It can do [only] whatever we know how to order it to perform."

Alan Turing disagreed. Turing is often called the father of computer science, having established the idea for modern computers in the 1930s. Turing argued that we can't even be sure that humans create, because humans do "nothing new under the sun" but they do surprise us. Likewise, he said, "Machines take me by surprise with great frequency." So perhaps, he argued, it is the element of surprise that's relevant, not the ability to originate something new.

Machines can surprise us if they're programmed by humans to surprise us, or if the programmer has made a mistake and thus experienced an unexpected outcome. Often, though, surprise occurs as a result of the successful implementation of a computer search that explores a myriad of solutions to a problem. The solution chosen by the computer can be unexpected. The computer code that searches among different solutions, though, is not creative. The creativity credit belongs to the computer programmer who chose the set of solutions to be explored. One could give examples from computer searches for making the best move in the game of Go and for simulated swarms. Both results are surprising and unexpected, but no creativity is contributed by the computer code.
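The argument above can be illustrated with a minimal sketch (the candidate set and scoring function below are invented for illustration): the search may return a winner the programmer did not anticipate, yet every candidate and the objective itself came from the programmer.

```python
# Minimal sketch: a "search" can return an unexpected winner, but every
# candidate and the scoring rule were supplied by the programmer.
def best_solution(candidates, score):
    """Exhaustively evaluate programmer-chosen candidates; return the top one."""
    return max(candidates, key=score)

# The programmer chose this search space and this objective; the code
# contributes no creativity of its own.
candidates = range(-10, 11)
winner = best_solution(candidates, score=lambda x: -(x - 7) ** 2)
print(winner)  # → 7, the candidate that maximizes the programmer's objective
```

Whatever surprise the output carries traces back to the human who defined the space and the objective, which is exactly the point the excerpt is making.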

Alan Turing, an atheist, wanted to show we are machines and that computers could be creative. Turing equated intelligence with problem solving, did not consider questions of consciousness and emotion, and referred to people as human computers. Turing's version of the imitation game was proposed to show that computers could duplicate the conversational human. This is why the biographical movie starring Benedict Cumberbatch as Turing was titled The Imitation Game.

How can computers imitate humans, according to Turing? The imitation game (which came to be called the Turing test) simply asks whether, in a conversational exchange using text (that is, an exchange in which the participants are hidden from each other), a sufficiently sophisticated computer can be distinguished from a human. If a questioner gets lucid, human-sounding answers from the computer, and believes the computer is in fact a human typing in answers from another room, then the test has been passed. (Incidentally, the converse of the Turing test is easy. Simply ask it to calculate the cube root of 12 out to 10 significant figures. If the answer is almost immediate, you are talking to a computer.)
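The "converse Turing test" in the parenthetical above can be sketched in a couple of lines (the formatting choice is ours): the machine produces the 10-significant-figure answer essentially instantly.

```python
# Sketch of the converse Turing test: a computer answers the cube-root
# question immediately and precisely.
x = 12 ** (1 / 3)          # cube root of 12
answer = f"{x:.10g}"       # format to 10 significant figures
print(answer)              # → 2.289428485
```

A human answering that fast, unaided, would be the real surprise.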

There are those who claim that today's chatbots have passed the Turing test. Most famously, a computer program known as Eugene Goostman purportedly passed. One posted subtitle claiming this reads, "Eugene Goostman fools 33% of interrogators into thinking it is human, in what is seen as a milestone in artificial intelligence."

But those making a test often cheat to tweak the outcome. That's what Goostman's programmers did. Goostman supposedly was a Ukrainian, which meant English was not his first language, so his poor grammar could be excused. He was supposedly a teenager, so any lack of depth of understanding could be chalked up to his naïve intellect. Likewise, his tendency to be silly and deflect questions could be considered typical teenage behavior. In short, Goostman's software character was gamed to pass the Turing test.

Here are a couple of examples of an exchange between Goostman and his questioner. Goostman is first asked a question proposed in the first line of the lyrics of the song "Close to You," popularized by the Carpenters:

Questioner: Why do birds suddenly appear?

Goostman: Just because 2 plus 2 is 5! By the way, what's your occupation? I mean could you tell me about your work?

We see here an often-used cheat in chatbots. If an answer is not known, the chatbot responds with an unrelated question to derail the conversation's direction.

Here's another exchange with Eugene Goostman:

Questioner: It is ok, I get sick of sick people. How is your stomach feeling today? Is it upset maybe?

Goostman: I think you can't smile at all. I bet you work in a funeral agency.

As you can see, Goostmans answers here are elusively non-responsive.

Selmer Bringsjord correctly notes that the Turing test is gamed by programmers. "Gamed" here is a polite word for an elusive cheat. As Bringsjord writes, "Though progress toward Turing's dream is being made, it's coming only on the strength of clever but shallow trickery."

When gaming the system, chatbots can deflect detection by answering questions with other questions, giving evasive answers, or admitting ignorance. They display general intellectual shallowness as regards creativity and depth of understanding.

Goostman answered questions with questions like, "By the way, what's your occupation?" He also tried to change topics with conversational-whiplash responses like "I bet you work in a funeral agency." These are examples of the "clever but shallow trickery" Bringsjord criticized.

What, then, do Turing tests prove? Only that clever programmers can trick gullible or uninitiated people into believing they're interacting with a human. Mistaking something for human does not make it human. Programming to shallowly mimic thought is not the same thing as thinking. Rambling randomness (such as the change-of-topic questions Goostman spit out) does not display creativity.

"I propose to consider the question, 'Can machines think?'" Turing said. Ironically, Turing not only failed in his attempt to show that machines can be conversationally creative, but also developed computer science that shows humans are non-computable.


ALEIA, LAUM and Omexom Launch the AUTEND Project to Accelerate Nuclear Power Plant Inspections With Artificial Intelligence – GlobeNewswire


PARIS, June 30, 2022 (GLOBE NEWSWIRE) -- As the rate and number of inspections on nuclear sites are rapidly increasing, the AUTEND project aims to facilitate and accelerate the work of field analysts, with AI automatic identification of the inspected areas.

The project currently focuses on Non-Destructive Testing (NDT), an inspection process for nuclear infrastructure that uses eddy currents or ultrasound. The algorithm developed by AUTEND will identify the areas on which analysts should focus their work.

Applying AI to these inspections will thus increase analysis capacity while maintaining the reliability with which results are interpreted. Overall, detecting these zones will reduce the time required for analysis and therefore help keep to the restart schedule of the nuclear units. In the long term, AI will contribute significantly to the reliability of the examinations, especially through the progressive construction of an evolving database.

The AUTEND project is built on the ALEIA platform and sustained by adapted datasets (in quality and quantity) and anonymized test sets. The hosting is secured on a sovereign cloud to guarantee full control of the information processing by the users.

The project is led by three leading partners:

After the first phase of dataset construction and validation in 2022, the AUTEND project moves to the integration and testing phase of the algorithm, planned for the end of 2022 and the beginning of 2023. Generalization of the experimentation is scheduled for the second half of 2023, before application to other markets and sectors (aeronautics, oil and gas, or railways) in early 2024.

Jean-François HERR, Omexom NDT E&S company manager, says, "Acquisition rates are increasing thanks to robotization, and so is the complexity of the signals to be analyzed. But the time allocated for the analysis remains the same (from a few days to a few weeks during the nuclear unit shutdown). Faced with this increase in the flow of data to be analyzed, it is essential that our analysts focus on the few inspected areas where their expertise is required. To achieve this goal, the use of AI is an obvious choice."

Antoine COURET, Founder and President of ALEIA, says, "As the availability of the nuclear fleet becomes a major issue for our sovereignty, the selection of ALEIA for this project is a new sign of the legitimacy of our sovereign AI platform and of its fit with critical business needs. The ALEIA platform should enable teams to successfully industrialize AI and achieve sustainable gains in productivity and time in their missions."

Rachid EL GUERJOUMA and Charfeddine Mechri, project managers for the LAUM, say, "The LAUM develops research activities in non-destructive acoustic testing of materials and complex structures, with applications in the fields of transportation, energy, civil engineering ... Alongside the other partners, the LAUM brings to this project its expertise in sensors and instrumentation, signal processing, and data. What is more, the artificial intelligence (AI) tools developed in the laboratory and its AI skills should strengthen this collaboration."

CONTACTS

OMEXOM NDT E&S: Laurent Charpiot - laurent.charpiot@omexom.com - +33 6 75 39 16 11

ALEIA: Jacques Orjubin - jorjubin@angie.fr - +33 6 80 91 73 97

LAUM: Charfeddine Mechri - Charfeddine.Mechri@univ-lemans.fr - +33 6 61 53 74 88

This content was issued through the press release distribution service at Newswire.com.


Building explainability into the components of machine-learning models – MIT News


Explanation methods that help users understand and trust machine-learning models often describe how much certain features used in the model contribute to its prediction. For example, if a model predicts a patient's risk of developing cardiac disease, a physician might want to know how strongly the patient's heart rate data influences that prediction.

But if those features are so complex or convoluted that the user can't understand them, does the explanation method do any good?

MIT researchers are striving to improve the interpretability of features so decision makers will be more comfortable using the outputs of machine-learning models. Drawing on years of field work, they developed a taxonomy to help developers craft features that will be easier for their target audience to understand.

"We found that out in the real world, even though we were using state-of-the-art ways of explaining machine-learning models, there is still a lot of confusion stemming from the features, not from the model itself," says Alexandra Zytek, an electrical engineering and computer science PhD student and lead author of a paper introducing the taxonomy.

To build the taxonomy, the researchers defined properties that make features interpretable for five types of users, from artificial intelligence experts to the people affected by a machine-learning model's prediction. They also offer instructions for how model creators can transform features into formats that will be easier for a layperson to comprehend.

They hope their work will inspire model builders to consider using interpretable features from the beginning of the development process, rather than trying to work backward and focus on explainability after the fact.

MIT co-authors include Dongyu Liu, a postdoc; visiting professor Laure Berti-Équille, research director at IRD; and senior author Kalyan Veeramachaneni, principal research scientist in the Laboratory for Information and Decision Systems (LIDS) and leader of the Data to AI group. They are joined by Ignacio Arnaldo, a principal data scientist at Corelight. The research is published in the June edition of the Association for Computing Machinery Special Interest Group on Knowledge Discovery and Data Mining's peer-reviewed Explorations Newsletter.

Real-world lessons

Features are input variables that are fed to machine-learning models; they are usually drawn from the columns in a dataset. Data scientists typically select and handcraft features for the model, and they mainly focus on ensuring features are developed to improve model accuracy, not on whether a decision-maker can understand them, Veeramachaneni explains.

For several years, he and his team have worked with decision makers to identify machine-learning usability challenges. These domain experts, most of whom lack machine-learning knowledge, often don't trust models because they don't understand the features that influence predictions.

For one project, they partnered with clinicians in a hospital ICU who used machine learning to predict the risk that a patient will face complications after cardiac surgery. Some features were presented as aggregated values, like the trend of a patient's heart rate over time. While features coded this way were "model ready" (the model could process the data), clinicians didn't understand how they were computed. "They would rather see how these aggregated features relate to original values, so they could identify anomalies in a patient's heart rate," Liu says.

By contrast, a group of learning scientists preferred features that were aggregated. Instead of having a feature like "number of posts a student made on discussion forums," they would rather have related features grouped together and labeled with terms they understood, like "participation."

"With interpretability, one size doesn't fit all. When you go from area to area, there are different needs. And interpretability itself has many levels," Veeramachaneni says.

The idea that one size doesn't fit all is key to the researchers' taxonomy. They define properties that can make features more or less interpretable for different decision makers and outline which properties are likely most important to specific users.

For instance, machine-learning developers might focus on having features that are compatible with the model and predictive, meaning they are expected to improve the models performance.

On the other hand, decision makers with no machine-learning experience might be better served by features that are human-worded, meaning they are described in a way that is natural for users, and understandable, meaning they refer to real-world metrics users can reason about.

"The taxonomy says: if you are making interpretable features, to what level are they interpretable? You may not need all levels, depending on the type of domain experts you are working with," Zytek says.

Putting interpretability first

The researchers also outline feature engineering techniques a developer can employ to make features more interpretable for a specific audience.

Feature engineering is a process in which data scientists transform data into a format machine-learning models can process, using techniques like aggregating data or normalizing values. Most models also cant process categorical data unless they are converted to a numerical code. These transformations are often nearly impossible for laypeople to unpack.

"Creating interpretable features might involve undoing some of that encoding," Zytek says. For instance, a common feature engineering technique organizes spans of data so they all contain the same number of years. To make these features more interpretable, one could group age ranges using human terms, like "infant," "toddler," "child," and "teen." Or rather than using a transformed feature like average pulse rate, an interpretable feature might simply be the actual pulse rate data, Liu adds.
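As a sketch of that kind of re-encoding (the bin edges and labels below are illustrative choices, not taken from the paper), pandas' `pd.cut` can map raw ages to human-readable groups instead of opaque numeric codes:

```python
import pandas as pd

# Hypothetical patient ages in years
ages = pd.DataFrame({"age": [0.5, 2, 7, 15, 34]})

# A model-ready encoding might bin these into integer codes; a more
# interpretable feature labels the same ranges with human terms
bins = [0, 1, 3, 13, 18, 120]
labels = ["infant", "toddler", "child", "teen", "adult"]
ages["age_group"] = pd.cut(ages["age"], bins=bins, labels=labels)
print(ages)
```

The underlying bins are identical either way; only the presentation changes, which is the point of the taxonomy.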

"In a lot of domains, the tradeoff between interpretable features and model accuracy is actually very small. When we were working with child welfare screeners, for example, we retrained the model using only features that met our definitions for interpretability, and the performance decrease was almost negligible," Zytek says.

Building off this work, the researchers are developing a system that enables a model developer to handle complicated feature transformations in a more efficient manner, to create human-centered explanations for machine-learning models. This new system will also convert algorithms designed to explain model-ready datasets into formats that can be understood by decision makers.

View original post here:

Building explainability into the components of machine-learning models - MIT News


Worldwide Artificial Intelligence (AI) in Drug Discovery Market to reach $ 4.0 billion by 2027 at a CAGR of 45.7% – ResearchAndMarkets.com – Business…

Posted: at 9:52 pm

DUBLIN--(BUSINESS WIRE)--The "Artificial Intelligence (AI) in Drug Discovery Market by Component (Software, Service), Technology (ML, DL), Application (Neurodegenerative Diseases, Immuno-Oncology, CVD), End User (Pharmaceutical & Biotechnology, CRO), Region - Global forecast to 2024" report has been added to ResearchAndMarkets.com's offering.

The AI in drug discovery market is projected to reach USD 4.0 billion by 2027, up from USD 0.6 billion in 2022, at a CAGR of 45.7% during the forecast period. Growth is primarily driven by the need to control drug discovery and development costs, the need to reduce the overall time the process takes, and the rising adoption of cloud-based applications and services. On the other hand, the limited availability of skilled labor is a key factor restraining market growth over the forecast period.
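A quick arithmetic check of the headline figures, assuming the standard compound-annual-growth-rate formula over the five years from 2022 to 2027:

```python
# CAGR = (end / start) ** (1 / years) - 1
start, end, years = 0.6, 4.0, 5   # USD billions, per the report
cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")
```

The implied figure is close to the reported 45.7%; the small gap is consistent with rounding in the endpoint values.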

Services segment estimated to hold the largest share in 2022 and expected to grow at the highest CAGR over the forecast period

On the basis of offering, the AI in drug discovery market is bifurcated into software and services. The services segment is expected to account for the largest share of the global market in 2022 and to grow at the fastest CAGR during the forecast period. The advantages associated with these services and the strong demand for AI services among end users are the key factors driving this segment's growth.

Machine learning technology segment accounted for the largest share of the global AI in drug discovery market

On the basis of technology, the AI in drug discovery market is segmented into machine learning and other technologies. The machine learning segment accounted for the largest share of the global market in 2021 and is expected to grow at the highest CAGR during the forecast period. High adoption of machine learning among CROs and pharmaceutical and biotechnology companies, and the ability of these technologies to extract insights from large datasets and thereby accelerate drug discovery, are among the factors supporting this segment's growth.

Pharmaceutical & biotechnology companies segment expected to hold the largest share of the market in 2022

On the basis of end user, the AI in drug discovery market is divided into pharmaceutical & biotechnology companies, CROs, and research centers and academic & government institutes. In 2021, the pharmaceutical & biotechnology companies segment accounted for the largest share of the AI in drug discovery market. Research centers and academic & government institutes, meanwhile, are expected to witness the highest CAGR during the forecast period. Strong demand for AI-based tools that make the drug discovery process more time- and cost-efficient is the key growth factor for the pharmaceutical and biotechnology end-user segment.

Key Topics Covered:

1 Introduction

2 Research Methodology

3 Executive Summary

4 Premium Insights

4.1 Growing Need to Control Drug Discovery & Development Costs is a Key Factor Driving the Adoption of AI in Drug Discovery Solutions

4.2 Services Segment to Witness the Highest Growth During the Forecast Period

4.3 Deep Learning Segment Accounted for the Largest Market Share in 2021

4.4 North America is the Fastest-Growing Regional Market for AI in Drug Discovery

5 Market Overview

5.1 Introduction

5.2 Market Dynamics

5.2.1 Market Drivers

5.2.1.1 Growing Number of Cross-Industry Collaborations and Partnerships

5.2.1.2 Growing Need to Control Drug Discovery & Development Costs and Reduce Time Involved in Drug Development

5.2.1.3 Patent Expiry of Several Drugs

5.2.2 Market Restraints

5.2.2.1 Shortage of AI Workforce and Ambiguous Regulatory Guidelines for Medical Software

5.2.3 Market Opportunities

5.2.3.1 Growing Biotechnology Industry

5.2.3.2 Emerging Markets

5.2.3.3 Focus on Developing Human-Aware AI Systems

5.2.3.4 Growth in the Drugs and Biologics Market Despite the COVID-19 Pandemic

5.2.4 Market Challenges

5.2.4.1 Limited Availability of Data Sets

5.3 Value Chain Analysis

5.4 Porter's Five Forces Analysis

5.5 Ecosystem

5.6 Technology Analysis

5.7 Pricing Analysis

5.8 Business Models

5.9 Regulations

5.10 Conferences and Webinars

5.11 Case Study Analysis

6 Artificial Intelligence in Drug Discovery Market, by Offering

7 Artificial Intelligence in Drug Discovery Market, by Technology

8 Artificial Intelligence in Drug Discovery Market, by Application

9 Artificial Intelligence in Drug Discovery Market, by End-user

10 Artificial Intelligence in Drug Discovery Market, by Region

11 Competitive Landscape

Companies Mentioned

For more information about this report visit https://www.researchandmarkets.com/r/q5pvns

More here:

Worldwide Artificial Intelligence (AI) in Drug Discovery Market to reach $ 4.0 billion by 2027 at a CAGR of 45.7% - ResearchAndMarkets.com - Business...


VistaPath Raises $4M to Modernize Pathology Labs Using Computer Vision and Artificial Intelligence – PR Newswire

Posted: at 9:52 pm

CAMBRIDGE, Mass., June 30, 2022 /PRNewswire/ -- VistaPath, the leading provider of artificial intelligence (AI)-based, data-driven pathology processing platforms, today announced that it has secured $4 million in seed funding led by Moxxie Ventures with participation from NextGen Venture Partners and First Star Ventures. With this latest round, VistaPath will further advance its mission to modernize pathology labs, delivering faster, more accurate diagnoses that lead to optimal patient care.

"We're excited to be working with investors who share our desire to impact the lives and clinical outcomes of patients. This funding will support full-scale development and delivery of our innovative products, as well as the expansion of our operational and technical capabilitiesallowing us to better serve the clinical and life science markets," says Timothy Spong, CEO of VistaPath.

VistaPath's Sentinel is a first-of-its-kind pathology processing platform designed to seamlessly deliver a range of solutions for critical lab processes. The company's first application, released in 2021, is a tissue grossing platform that automates the process of receiving, assessing, and processing tissue samples. The platform uses a high-quality video system combined with AI to assess specimens and create a gross report 93% faster than human technicians with 43% more accuracy. Additional applications are slated to be released later this year.

"Pathology is the study of disease and connects every aspect of patient care. We believe that advances in computer vision and AI can bring great improvements to the pathology industry and ultimately lead to better outcomes for patients. We believe the team at VistaPath is building a best-in-class product for pathology labs and are proud to lead this investment round", says Alex Roetter, General Partner at Moxxie Ventures.

About VistaPath

VistaPath is modernizing pathology labs using computer vision and artificial intelligence. They provide clients with significant quality, workflow, and strategic benefits with the overall goal of delivering improved results for pathologists, clinicians, and patients. The Sentinel is the company's first product. Learn more at vistapathbio.com.

About Moxxie Ventures

Moxxie Ventures is an early stage venture firm focused on backing exceptional founders who make life and work better. Moxxie is based in San Francisco, CA and Boulder, CO. Learn more at moxxie.vc.

SOURCE VistaPath

Continue reading here:

VistaPath Raises $4M to Modernize Pathology Labs Using Computer Vision and Artificial Intelligence - PR Newswire


AERTEC completes UAS TARSIS test campaign, an artificial intelligence project applied to flight safety sponsored by the European Defence Agency – sUAS…

Posted: at 9:51 pm

The ATLAS Experimental Flight Center in Spain has hosted the final phase of the SAFETERM (Safe Autonomous Flight Termination System) project, sponsored by the European Defence Agency and developed by technology companies GMV and AERTEC.

SAFETERM addresses the use of state-of-the-art artificial intelligence/machine learning technologies to increase the level of safety in specific emergency situations leading to flight termination.

AERTEC's TARSIS 75 unmanned aerial system was used for the flight campaign, in which a prototype of the SAFETERM system was embarked for evaluation. These tests have attracted the interest of several dozen professionals and heads of agencies and organizations throughout Europe.

The ATLAS Experimental Flight Center in Jaén, Spain, has hosted the final phase of SAFETERM (Safe Autonomous Flight Termination System), a project sponsored by the European Defence Agency (EDA) and developed by technology companies GMV and AERTEC.

Unmanned aerial systems are in a phase of rapid expansion and development, with safety in all flight phases and their integration into the airspace being priority issues. The objective of the SAFETERM project is to improve current medium-altitude, long-endurance (MALE) RPAS flight termination systems and procedures by applying state-of-the-art artificial intelligence/machine learning technologies to increase the level of safety in specific emergency situations, in case of failure of both the aircraft's autonomy and the remote pilot's ability to control it.

The system aims to provide tools to enable aircraft to autonomously determine Alternative Flight Termination Areas (AFTA) where the risk to third parties can be minimized. In the event of a loss of communication with the aircraft and the subsequent identification of an emergency that prevents reaching planned Flight Termination Areas, the aircraft quickly identifies a safe area to land, avoiding buildings, roads or inhabited areas.
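Very loosely, that selection step can be pictured as a clearance-maximization problem. The sketch below is purely illustrative, with hypothetical coordinates and a simple nearest-obstacle distance score; it is not the SAFETERM algorithm:

```python
from math import hypot

# Hypothetical grid coordinates: obstacles stand in for buildings,
# roads, and inhabited areas detected from sensor imagery
obstacles = [(2, 3), (5, 5)]
candidates = [(0, 0), (4, 4), (9, 9)]   # candidate alternative flight termination areas

def clearance(cell):
    """Distance from a candidate area to the nearest known obstacle."""
    return min(hypot(cell[0] - ox, cell[1] - oy) for ox, oy in obstacles)

# Terminate the flight toward the candidate with the greatest clearance
best = max(candidates, key=clearance)
print(best)
```

A real system would weigh many more factors (terrain, wind, remaining energy, population density), but the core idea of ranking candidate areas by risk to third parties is the same.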

Final flight campaign of the UAS TARSIS 75

The validation phase of the project concluded with a flight campaign in a live operational environment at the ATLAS Experimental Flight Center, using AERTEC's TARSIS 75 unmanned aerial system. The aircraft carried an on-board prototype of the SAFETERM system for evaluation of its viability. To this end, several flights were made over three full days, during which the system behaved as expected.

During the tests, loss of communication and the subsequent emergency situations were simulated. Next, using the images obtained from the TARSIS sensor, the SAFETERM system autonomously identified possible safe landing areas, ultimately enabling TARSIS to make the guided flight to the safest landing area.

"The fact that AERTEC is the firm in charge of design engineering and integration of the TARSIS 75 has played a key role in the timely execution of this project, which required developing new modules and integrating a new system (SAFETERM), first in a simulation environment and finally in our unmanned system," adds Juanjo Calvente, director of RPAS at AERTEC.

These tests have attracted the interest of several dozen professionals and heads of agencies and organizations from all over Europe, who attended at the invitation of the European Defence Agency (EDA) for the presentation of SAFETERM's results.

About AERTEC

AERTEC is an international company specializing in aerospace technology. The company celebrates its 25th anniversary in 2022 and operates in the aerospace, defense, and airport industries.

AERTEC is a preferred supplier (Tier 1) of engineering services for AIRBUS in all its divisions: Commercial, Helicopters, Defense and Space, at the different AIRBUS sites globally. Its participation in the main global aeronautical programs stands out, such as the A400M, A330MRTT, A350XWB, A320, Beluga and the C295, among others.

The company designs embedded systems for aircraft, unmanned aerial platforms, and guidance solutions, both in the civil and military fields. It has light tactical UAS of its own design and technology, such as the TARSIS 75 and TARSIS 25, for observation and surveillance applications and also for support to military operations. Likewise, it designs, manufactures, and deploys systems for the digitization of work environments and the automation of functional tests, under the smart factory global concept.

As regards the airport sector, the company is positioned as the engineering firm with the strongest aeronautical focus, partaking in investment, planning and design studies, consultancy services for airport operations and terminal area and airfield process improvement. It has references in more than 160 airports distributed in more than 40 countries in five continents.

AERTEC's staff comprises more than 600 professionals, with companies registered in Spain, the United Kingdom, Germany, France, Colombia, Peru, the United States, and the United Arab Emirates.

Go here to see the original:

AERTEC completes UAS TARSIS test campaign, an artificial intelligence project applied to flight safety sponsored by the European Defence Agency - sUAS...


Arm Cortex microprocessor for artificial intelligence (AI), imaging, and audio introduced by Microchip – Military & Aerospace Electronics

Posted: at 9:51 pm

CHANDLER, Ariz. - Microchip Technology Inc. is introducing the SAMA7G54, an Arm Cortex-A7-based microprocessor that runs at up to 1 GHz, for low-power stereo vision applications with accurate depth perception.

The SAMA7G54 includes a MIPI CSI-2 camera interface and a traditional parallel camera interface for high-performing yet low-power artificial intelligence (AI) solutions that can be deployed at the edge, where power consumption is at a premium.

AI solutions often require advanced imaging and audio capabilities which typically are found only on multi-core microprocessors that also consume much more power.

When coupled with Microchip's MCP16502 Power Management IC (PMIC), this microprocessor enables embedded designers to fine-tune their applications for best power consumption vs. performance, while also optimizing for low overall system cost.

Related: Embedded computing sensor and signal processing meets the SWaP test

The MCP16502 is supported by Microchip's mainline Linux distribution for the SAMA7G54, allowing for easy entry and exit from available low-power modes, as well as support for dynamic voltage and frequency scaling.

For audio applications, the device has audio features such as four I2S digital audio ports, an eight-microphone array interface, an S/PDIF transmitter and receiver, as well as a stereo four-channel audio sample rate converter. It has several microphone inputs for source localization for smart speaker or video conferencing systems.

The SAMA7G54 also integrates Arm TrustZone technology with secure boot, secure key storage, and accelerated cryptography. The SAMA7G54-EK Evaluation Kit (CPN: EV21H18A) features connectors and expansion headers for easy customization and quick access to embedded features.

For more information contact Microchip online at http://www.microchipdirect.com.

Originally posted here:

Arm Cortex microprocessor for artificial intelligence (AI), imaging, and audio introduced by Microchip - Military & Aerospace Electronics


Page 29«..1020..28293031..4050..»