Category Archives: Ai

Dominican Republic scores red in press freedom, according to AI tool – Dominican Today

Posted: July 29, 2022 at 5:02 pm

Santo Domingo, DR. The Dominican Republic is one of the countries marked in red, denoting a territory in which press freedom rights are violated, according to the weekly report on the state of press freedom in the Americas presented by the Inter-American Press Association (IAPA).

According to the report, the National Congress circumvents the law on free access to information; for example, when the media request information from the Senate or the Chamber of Deputies, the data provided is scarce, restricting the constitutional right of access to public information. The red marking denotes a country where this freedom cannot be exercised.

Ecuador was given a similar tone because of the poor conditions for exercising freedom of expression there, though it ended the week in light orange amid the uproar over the New Communication Law, which seeks to implement prior censorship and promote self-censorship.

Likewise, Brazil was again marked in red and orange after the Brazilian justice system sentenced the three suspects in the June murders of British journalist Dom Phillips and indigenous affairs expert Bruno Pereira.

Meanwhile, despite showing lighter tones than last week, Mexico continues to suffer attacks on its journalists. Colombia and Costa Rica remained in yellow tones last week, showing an improvement.

In contrast, Paraguay was the country with the best climate for exercising freedom of expression this week.

The report is based on data collected by SIP Bot, a real-time monitoring tool powered by artificial intelligence (AI).

Read the original post:

Dominican Republic scores red in press freedom, according to AI tool - Dominican Today

Someone Asked an AI to Show the "Last Selfie Ever Taken" and Um – Futurism

Posted: at 5:02 pm

Is heaven filled with... ghoulish grim reapers?

Spooky Selfie

If you've ever wondered how a selfie at the end of the world, or possibly some sort of hellish purgatory, might come out... well, frankly, neither have we. But somebody wanted to know, and by way of an AI image generator, a picture of such a thing hath been revealed.

Behold: a series of ghoul-filled "selfies" that feature skeletal, grim reaper-like figures, one of which even captures itself mugging in the mirror via smartphone.

The TikTok video of the requested imagery was posted to an account called, what else, @robotoverloards, which has a bio touting "daily disturbing AI generated images" as its mission. Happy summer!

The video's creator didn't say which text-to-image AI system was responsible for the image generation. One commenter insisted that the images were made with Midjourney, a self-described "small self-funded team focused on design, human infrastructure, and AI." However, @robotoverlords included references to both Midjourney and DALL-E 2, the newly revamped platform from the Elon Musk-cofounded OpenAI, in the hashtags.

Oddly, though the undead figures are indeed quite ghoulish, each one is set against a heavenly background. One holds a handful of flowers, while all of the figures are seen in front of fluffy, sunshine-filled clouds.

"Interesting that it gave a bouquet to the first person," wrote an inquisitive TikTokker.

Maybe angels have a different look than we all thought? Perhaps hell is a cloud-filled, sun-soaked nightmare? Or, more likely, the AI image-maker responsible took multiple human descriptions of what life after death looks like and gave us an AI-generated mashup of them.

Either way, it was nice to see some positivity in the comments. As one user remarked: "I'm okay with this."

More on AI image generators: Ai Generates Painting of Actual "Burger King" And There's a Terrifying Hunger in His Eye

View original post here:

Someone Asked an AI to Show the "Last Selfie Ever Taken" and Um - Futurism

How eBay is ramping up AI use in ecommerce behind the scenes – VentureBeat

Posted: at 5:02 pm

Low-code and no-code artificial intelligence (AI) tools and immersive 3D visualization are heralding the next age of ecommerce.

eBay is taking a lead in this area with increased investment in AI-led experiences, including 3D product renderings to enhance shopping experiences and automated listing capabilities to simplify seller duties.

As one of the world's largest selling platforms, eBay has leveraged AI for some time now, but it has typically been done behind the scenes for recommendation systems, fraud detection and predictions of customer intent, explained Stephanie Moyerman, former senior director of risk and trust science at eBay. She recently transitioned into a new role with Instagram as its director of data science wellness.

"What we want to do is integrate [AI] as part of the natural buying and selling experiences and flows," she told viewers during a live stream at this week's Transform 2022 event. "We want to give AI tools to our sellers so that they can enable different, customized experiences for their buyers without having any impedance there."

Most notably, eBay is offering 3D product rendering that gives buyers the ability to cruise through a listing and create a natural browsing experience as if they were in a physical store, said Moyerman. With low-code and no-code tools, sellers can scan items with their phones; images are then uploaded to the eBay cloud and converted to a 3D asset.

"They don't need professional equipment of any kind," said Nitzan Mekel-Bobrov, chief AI officer at eBay. "All of this can be done in a matter of minutes and there's zero manual intervention."

This capability has been launched in eBay's sneaker category, with further rollouts to come.

Meanwhile, AI is also being used to enable high-speed and automated listing capabilities. Sellers can snap photos of items and automatically populate listings, ultimately reducing or altogether eliminating the need for manual input.

This capability has been launched initially in the trading card category.

As Mekel-Bobrov explained, these next-generation ecommerce tools are enabled via the use of computer vision, image recognition, convolutional neural networks and fine-grained image analysis.

Mekel-Bobrov pointed out that eBay sellers are widely diverse, ranging from professional brands and large businesses, to mom-and-pops, to occasional sellers, rummagers and garage-sellers.

AI and low-code and no-code tools provide new abilities and open up new opportunities across this landscape, while also addressing enormous variations when it comes to technical skill sets, he said.

"We are a two-sided marketplace," said Mekel-Bobrov, noting that eBay must equally weigh the needs and wants of both its seller network and its buyers. "Our entire reason for existing is to bring sellers and buyers together in that ecommerce context."

"There isn't a company developing and employing AI that doesn't face challenges," Mekel-Bobrov conceded.

"AI is different from other software applications in a number of ways," he said. One of the most important is that AI learns over time.

This requires continuous feedback loops, with information coming, going and being stored so that models can be retrained and enhanced.

In particular, getting AI to the scale they want, and in real time, requires tremendous amounts of infrastructure, both hardware and software, said Mekel-Bobrov.

And that will grow exponentially with the further democratization of AI.

It comes down to tackling one area at a time: rolling out new capabilities slowly, for instance, by product categories and sub-categories, or by customer segments.

There are cultural challenges, as well, both internally and externally.

Many sellers are not used to such advanced tools; it is imperative to help them take it to the next level as the world is evolving, as ecommerce is evolving, said Mekel-Bobrov.

With a full-scale metaverse on the horizon, a new consumer paradigm is emerging and it is not unlike the initial advent of AI, said Mekel-Bobrov.

The parallel, he said, is years of talking about it and people not knowing how it's completely going to play out, and then suddenly it's here, but it's not here in a way that people necessarily expected.

Going forward, eBay's efforts in AI and low-code and no-code will become "more tactical, more tangible," he said. This will include doubling down on AI-led visual experiences and further incorporating AI into visual understanding and content understanding. Similarly, eBay will continue investing in tools such as more advanced 3D and augmented reality (AR) and developing cross-platform compatibility.

Meanwhile, the widespread democratization of AI will require more education and investment across the board, said Mekel-Bobrov. He expects the eventual emergence of common standards, best practices and governance. This will require partnership and representation across companies to ensure that the same standards are built and maintained, regardless of implementation, he said.

Particularly when it comes to the metaverse, ecommerce companies need to work together and take an active role to be sure it shapes up in a way that protects and serves our customers as best as possible, said Mekel-Bobrov.

Ultimately, he said, "I love talking about the future because you can dream big."

Watch the full-length conversation from Transform 2022.

Go here to read the rest:

How eBay is ramping up AI use in ecommerce behind the scenes - VentureBeat

Google AI Open-Sourced a New ML Tool for Conceptual and Subjective Queries over Images – InfoQ.com

Posted: at 5:02 pm

Google AI open-sourced Mood Board Search, a new ML-powered tool for subjective or conceptual queries over images.

Mood Board Search helps users define conceptual and subjective queries, like "peaceful" and "beautiful," over images. Advances in applying deep learning to computer vision have enabled engineers and researchers to provide functionality such as similar-image search, object detection and tagging. One of the main challenges in this area is how to define and query images with conceptual intent. Mood Board Search helps people train and personalize a deep learning model to match the way they see the world. The following snapshot shows how an artist sees the world by categorizing images into different artistic concepts.

personal classification of the images into abstract and subjective concepts

In Mood Board Search, researchers used pre-trained computer vision models such as GoogLeNet and MobileNet, together with a machine learning technique called Concept Activation Vectors (CAVs).

A CAV is a technique for measuring how sensitive a trained model is to a concept presented by the user. The following picture shows how CAV, or Testing with CAVs (TCAV), works.

Getting the TCAV score, which quantifies the sensitivity of the classifier to the concept

As an example, consider a deep learning model trained to classify images as zebra or not zebra. We want to quantify how important the "stripes" concept is to the classifier; simply running TCAV and reading the score answers that question. CAVs are also used as a general technique for the explainability of deep learning models. As mentioned in the blog post:

In Mood Board Search, we use CAVs to find a model's sensitivity to a mood board created by the user. In other words, each mood board creates a CAV (a direction in embedding space) and the tool searches an image dataset, surfacing images that are the closest match to the CAV. However, the tool takes it one step further, by segmenting each image in the dataset in 15 different ways, to uncover as many relevant compositions as possible.
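To make that search step concrete, here is a minimal sketch of CAV-style concept ranking, assuming a hypothetical embedding step that maps each image to a vector with a pretrained model such as GoogLeNet or MobileNet. It is an illustration only: the real tool derives its CAVs more carefully (CAVs are usually fit with a linear classifier rather than a simple mean difference) and also segments each image 15 ways, which is omitted here.

```python
# Illustrative sketch only, not the Mood Board Search implementation.
# The embedding arrays stand in for the output of a pretrained vision model.
import numpy as np

def concept_vector(mood_embeddings, background_embeddings):
    # Crude CAV: the direction separating the user's mood-board images
    # from generic background images in embedding space, normalized.
    cav = mood_embeddings.mean(axis=0) - background_embeddings.mean(axis=0)
    return cav / np.linalg.norm(cav)

def rank_by_concept(cav, dataset_embeddings):
    # Cosine similarity of every dataset image to the concept direction;
    # higher means a closer match to the mood board.
    unit = dataset_embeddings / np.linalg.norm(dataset_embeddings, axis=1, keepdims=True)
    scores = unit @ cav
    return np.argsort(-scores)

# Toy usage with random stand-ins for real embeddings.
rng = np.random.default_rng(0)
mood = rng.normal(size=(8, 128))          # the user's mood-board images
background = rng.normal(size=(100, 128))  # generic counterexample images
dataset = rng.normal(size=(1000, 128))    # the searchable image dataset
print(rank_by_concept(concept_vector(mood, background), dataset)[:5])
```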

Working with the Mood Board Search GUI is straightforward. As explained in the blog post:

To get started, simply drag and drop a small number of images that represent the idea you want to convey. Mood Board Search returns the best results when the images share a consistent visual quality, so results are more likely to be relevant with mood boards that share visual similarities in color, pattern, texture, or composition.

Some design studios, such as morrama, got excited about this tool and tweeted:

Amazing work, can't wait to be able to play with it.

Google AI has open-sourced the code so that researchers and developers can contribute further in this area. There is also an experimental app by design invention studio Nord Projects that uses Mood Board Search.

More here:

Google AI Open-Sourced a New ML Tool for Conceptual and Subjective Queries over Images - InfoQ.com

Giftpack AI to Exhibit Corporate Gifting SaaS at the World’s Biggest Tech Show – PR Newswire

Posted: at 5:02 pm

A year after launching, the AI-powered platform has recorded a 95% acceleration of the gifting process.

LAS VEGAS, July 28, 2022 /PRNewswire/ -- With a modern approach that mixes artificial intelligence and human emotion, Giftpack AI will be exhibiting its AI-powered business gifting software-as-a-service platform at CES 2022.

Giftpack AI helps businesses of all sizes send individually tailored gifts, in seconds, to thousands of clients and employees around the world, in a fraction of the time required by traditional gifting processes.

Since its launch in summer 2020, the company has delivered over 40,000 gifts to more than 70 countries and boasts a 92% gift satisfaction rate according to the feedback of gift receivers.

Archer Chiang, CEO & Founder of Giftpack, said "COVID-19 reminded businesses how important it is to show their employees that the company cares about them. Gifts are an obvious way to bolster motivation and even support mental health while working remotely."

What makes Giftpack stand out from the crowd

With the integration of AI, Giftpack accelerates the whole employee gifting process, from ideation to delivery, by 95% compared with traditional hand-picked gifting. This reduces the workload and communication effort needed to coordinate employee gifting and allows businesses to focus on more important matters.

"Businesses can place more time and focus into management and operations while we do the heavy lifting. Moreover, employee appreciation is a driving factor in retaining their workforce - we are helping companies reduce the turnover rate to combat The Great Resignation due to limited staffing and overworking," he added.

The Great Resignation has brought on an estimated 24 million employee resignations from April to September this year in the United States alone. Businesses were forced to downsize due to the pandemic, adding more workload and burnout onto the remaining employees. Giftpack aims to boost employee morale and aid in employee retention.

Streamlined process to save time and cost

This is achieved by having AI handle everything from importing the recipient list, generating gift ideas, and managing invitations, packaging, shipping and process tracking, through to data analytics and customer service, streamlining the process and enhancing the gifting experience.

Chiang also states, "Giftpack not only saves time on gifting with high-quality gifts, our platform is culturally sensitive, able to learn the customs of communities around the globe."

Culturally diverse platform

With team members hailing from all parts of the world and different cultures, the business is able to build a cultural database and teach AI the dos and don'ts of gifting as well as cultural understanding.

Data analytics

Giftpack's user-friendly dashboard provides users advanced statistics in an easily consumable analytics interface. Users can also set metrics and track progress to monitor how gifts are impacting their return on investment.

The company believes in bridging AI technology with a human touch to make any gifting occasion special and unforgettable.

Giftpack AI will be showcasing its technology at the Taiwan Tech Arena (TTA) Pavilion at Eureka Park, Venetian Expo 1F (Former Sands expo), booth no: 61423.

About Giftpack AI

Giftpack AI is an AI-powered business gifting SaaS platform that streamlines the gifting process. Helping customers create meaningful connections, retain employees, and curate VIP clients, the brand helps businesses send individually tailored gifts 95% faster than traditional methods. Giftpack AI's audience consists of HR managers, marketers, and executive assistants, with some clients in the brokering, business development and project management sectors.

SOURCE Giftpack AI

See the article here:

Giftpack AI to Exhibit Corporate Gifting SaaS at the World's Biggest Tech Show - PR Newswire

AI tools help scientists to find hominins used fire 800k years ago – WION

Posted: at 5:02 pm

A working hypothesis on human evolution holds that the ability to use fire in a controlled manner was a major step. Fire gave protection from the cold and allowed early humans to cook food before eating it, which in turn protected them from microbes present in the food. Hence, studying evidence of the use of fire is an important part of archaeological investigation.

Now a new technique has been used to determine that hominins, early human ancestors predating our modern form, used fire as far back as 800,000 years ago.

The new technique involves the use of artificial intelligence. If adopted more widely, such methods would give archaeology a more data-driven basis.

Experts involved in this study are from the Weizmann Institute of Science, the Hebrew University of Jerusalem and the University of Toronto.

"The deep learning models that prevailed had a specific architecture that outperformed the others and successfully gave us the confidence we needed to further use this tool in an archaeological context having no visual signs of fire use," said one of the researchers, as quoted by SciTechDaily.

"It was not only a demonstration of exploration and being rewarded in terms of the knowledge gained," said another researcher, "but of the potential that lies in combining different disciplines: Ido has a background in quantum chemistry, Zane is a scientific archaeologist, and Liora and Michael are prehistorians. By working together, we have learned from each other. For me, it's a demonstration of how scientific research across the humanities and science should work."

View post:

AI tools help scientists to find hominins used fire 800k years ago - WION

USC’s Biggest Wins in Computing and AI – USC Viterbi | School of Engineering – USC Viterbi School of Engineering

Posted: at 5:02 pm

USC has been an animating force for computing research since the late 1960s.

With the advent of the USC Information Sciences Institute (ISI) in 1972 and the Department of Computer Science in 1976 (born out of the Ming Hsieh Department of Electrical and Computer Engineering), USC has played a propulsive role in everything from the internet to the Oculus Rift to recent Nobel Prizes.

Here are seven of those victories reimagined as cinemagraphs still photographs animated by subtle yet remarkable movements.

Cinemagraph: Birth of .Com

1. The Birth of the .com (1983)

While working at ISI, Paul Mockapetris and Jon Postel pioneered the Domain Name System, which introduced the .com, .edu, .gov and .org internet naming standards.

As Wired noted on the 25th anniversary, "Without the Domain Name System, it's doubtful the internet could have grown and flourished as it has."

The DNS works like a phone book for the internet, automatically translating text names, which are easy for humans to understand and remember, to numerical addresses that computers need. For example, imagine trying to remember an IP address like 192.0.2.118 instead of simply usc.edu.
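To see the phone book in action, the lookup itself takes only a couple of lines with Python's standard-library resolver (a minimal illustration; the address returned for usc.edu will be whatever DNS currently reports, not the documentation address 192.0.2.118 used above):

```python
# Ask DNS to translate a human-readable name into an IPv4 address.
import socket

hostname = "usc.edu"
print(hostname, "->", socket.gethostbyname(hostname))
```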

In a 2009 interview with NPR, Mockapetris said he believed the first domain name he ever created was isi.edu for his employer, the (USC) Information Sciences Institute. That domain name is still in use today.

Grace Park, B.S. and M.S. '22 in chemical engineering, re-creates Len Adleman's famous experiment.

2. The Invention of DNA Computing (1994)

In a drop of water, a computation took place.

In 1994, Professor Leonard Adleman, who coined the term computer virus, invented DNA computing, which involves performing computations using biological molecules rather than traditional silicon chips.

Adleman, who received the 2002 Turing Award, often called the Nobel Prize of computer science, saw that a computer could be something other than a laptop or a machine using electrical impulses. After visiting a USC biology lab in 1993, he recognized that the 0s and 1s of conventional computers could be replaced with the four DNA bases: A, C, G and T. As he later wrote, "a liquid computer can exist in which interacting molecules perform computations."
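As a toy illustration of that bases-as-bits idea only (Adleman's actual 1994 experiment encoded a small Hamiltonian-path problem in synthesized DNA strands, not a bit string), each base can carry two bits of information:

```python
# Illustrative only: two bits per DNA base, so any bit string maps to a strand.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(bits: str) -> str:
    bits += "0" * (len(bits) % 2)  # pad to an even number of bits
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> str:
    return "".join(BASE_TO_BITS[base] for base in strand)

print(encode("0110100011"))  # CGGAT
print(decode("CGGAT"))       # 0110100011
```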

As the New York Times noted in 1997: "Currently the world's most powerful supercomputer sprawls across nearly 150 square meters at the U.S. government's Sandia National Laboratories in New Mexico. But a DNA computer has the potential to perform the same breakneck-speed computations in a single drop of water."

"We've shown by these computations that biological molecules can be used for distinctly non-biological purposes," Adleman said in 2002. "They are miraculous little machines. They store energy and information, they cut, paste and copy."

Professor Maja Matarić with Blossom, a cuddly robot companion to help people with anxiety and depression practice breathing exercises and mindfulness.

3. USC Interaction Lab Pioneers Socially Assistive Robotics (2005)

Named No. 5 by Business Insider as one of the 25 Most Powerful Women Engineers in Tech, Maja Matarić leads the USC Interaction Lab, pioneering the field of socially assistive robotics (SAR).

As defined by Matarić and her then-graduate researcher David Feil-Seifer 17 years ago, socially assistive robotics was envisioned as the intersection of assistive robotics and social robotics, a new field that focuses on providing social support for helping people overcome challenges in health, wellness, education and training.

Socially assistive robots have been developed for a broad range of user communities, including infants with movement delays, children with autism, stroke patients, people with dementia and Alzheimers disease, and otherwise healthy elderly people.

"We want these robots to make the user happier, more capable and better able to help themselves," said Matarić, the Chan Soon-Shiong Chair and Distinguished Professor of Computer Science, Neuroscience and Pediatrics at USC. "We also want them to help teachers and therapists, not remove their purpose."

The field has inspired investments from federal funding agencies and technology startups. The assistive robotics market is estimated to reach $25.16 billion by 2028.

Is the ball red or blue? Is the cat alive or dead? Professor Daniel Lidar, one of the world's top quantum influencers, demonstrates the idea of superposition.

4. First Operational Quantum Computing System in Academia (2011)

Before Google or NASA got into the game, there was the USC-Lockheed Martin Quantum Computing Center (QCC).

Led by Daniel Lidar, holder of the Viterbi Professorship in Engineering, and ISI's Robert F. Lucas (now retired), the center launched in 2011. With the world's first commercial adiabatic quantum processor, the D-Wave One, USC is the only university in the world to host and operate a commercial quantum computing system.

As USC News noted in 2018, quantum computing is the ultimate disruptive technology: it has the potential to create the best possible investment portfolio, dissolve urban traffic jams and bring drugs to market faster. It can optimize batteries for electric cars, predictions for weather and models for climate change. Quantum computing can do this, and much more, because it can crunch massive data and variables and do it quickly, with an advantage over classical computers as problems get bigger.

Recently, QCC upgraded to D-Wave's Advantage system, with more than 5,000 qubits, an order of magnitude larger than any other quantum computer. The upgrades will enable QCC to host a new Advantage generation of quantum annealers from D-Wave and will be the first Leap quantum cloud system in the United States. Today, in addition to Professor Lidar, one of the world's top quantum computing influencers, QCC is led by Research Assistant Professor Federico Spedalieri, as operations director, and Research Associate Professor Stephen Crago, associate director of ISI.

David Traum, a leader at the USC Institute for Creative Technologies (ICT), converses with Pinchas Gutter, a Holocaust survivor, as part of the New Dimensions in Testimony project.

5. USC ICT Enables Talking with the Past, in the Future (2015)

New Dimensions in Testimony, a collaboration between the USC Shoah Foundation and the USC Institute for Creative Technologies (ICT), in partnership with Conscience Display, is an initiative to record and display testimony in a way that will continue the dialogue between Holocaust survivors and learners far into the future.

The project uses ICT's Light Stage technology to record interviews using multiple high-end cameras for high-fidelity playback. The ICT Dialogue Group's natural language technology allows fluent, open-ended conversation with the recordings. The result is a compelling and emotional interactive experience that enables viewers to ask questions and hear responses in real-time, lifelike conversation even after the survivors have passed away.

New Dimensions in Testimony debuted in the Illinois Holocaust Museum & Education Center in 2015. Since then, more than 50 survivors and other witnesses have been recorded and presented in dozens of museums around the United States and the world. It remains a powerful application of AI and graphics to preserve the stories and lived experiences of culturally and historically significant figures.

Eric Rice and Bistra Dilkina are co-directors of the Center for AI in Society (CAIS), a remarkable collaboration between the USC Dworak-Peck School of Social Work and the USC Viterbi School of Engineering.

6. Among the First AI for Good Centers in Higher Education (2016)

Launched in 2016, the Center for AI in Society (CAIS) became one of the pioneering AI for Good centers in the U.S., uniting USC Viterbi and the USC Suzanne Dworak-Peck School of Social Work.

In the past, CAIS used AI to prevent the spread of HIV/AIDS among homeless youth. In fact, a pilot study demonstrated a 40% increase in homeless youth seeking HIV/AIDS testing due to an AI-assisted intervention. In 2019, the technology was also used as part of the largest global deployment of predictive AI to thwart poachers and protect endangered animals.

Today, CAIS fuses AI, social work and engineering in unique ways, such as working with the Los Angeles Homeless Service Authority to address homelessness; battling opioid addiction; mitigating disasters like heat waves, earthquakes and floods; and aiding the mental health of veterans.

CAIS is led by co-directors Eric Rice, a USC Dworak-Peck professor of social work, and Bistra Dilkina, a USC Viterbi associate professor of computer science and the Dr. Allen and Charlotte Ginsburg Early Career Chair.

Pedro Szekely, Mayank Kejriwal and Craig Knoblock of the USC Information Sciences Institute (ISI) are at the vanguard of using computer science to fight human trafficking.

7. AI That Fights Modern Slavery (2017)

Beginning in 2017, a team of researchers at ISI led by Pedro Szekely, Mayank Kejriwal and Craig Knoblock created software called DIG that helps investigators scour the internet to identify possible sex traffickers and begin the process of capturing, charging and convicting them.

Law enforcement agencies across the country, including in New York City, have used DIG as well as other software programs spawned by Memex, a Defense Advanced Research Projects Agency (DARPA)-funded program aimed at developing internet search tools to help investigators thwart sex trafficking, among other illegal activities. The specialized software has triggered more than 300 investigations and helped secure 18 felony sex-trafficking convictions, according to Wade Shen, program manager in DARPA's Information Innovation Office and Memex program leader. It has also helped free several victims.

In 2015, Manhattan District Attorney Cyrus R. Vance Jr. announced that DIG was being used in every human trafficking case brought by the DA's office. "With technology like Memex," he said, "we are better able to serve trafficking victims and build strong cases against their traffickers."

"This is the most rewarding project I've ever worked on," said Szekely. "It's really made a difference."

Published on July 28th, 2022

Last updated on July 28th, 2022

See original here:

USC's Biggest Wins in Computing and AI - USC Viterbi | School of Engineering - USC Viterbi School of Engineering

HistoIndex Stain-free AI-DP Reveals Treatment-induced Zonal Regression of Fibrosis Colocalized with Reduction in Steatosis & Hepatocyte Ballooning…

Posted: at 5:02 pm

SINGAPORE, July 29, 2022 /PRNewswire/ -- HistoIndex, a global leading artificial intelligence digital pathology (AI-DP) provider, announced that its AI-based qFibrosis, qSteatosis and qBallooning algorithms delivered sensitivity and granularity on a greater scale, as compared with the current conventional histological scoring system, in quantifying treatment-associated changes in Nonalcoholic Steatohepatitis (NASH)[1]. AI-DP with Second Harmonic Generation/Two-photon Excitation Fluorescence (SHG/TPEF) was employed in the FLIGHT-FXR study (NCT02855164) to gather a deeper understanding of fibrosis dynamics in the overall liver biopsy, in different zones within the liver lobules and its colocalized spatial relation to changes in steatosis and hepatocyte ballooning in patients with non-cirrhotic NASH who were treated with Tropifexor (TXR). Findings from the 48-week study were recently published in the online issue of the Journal of Hepatology.

Results of the FLIGHT-FXR study showed that the AI-based qFibrosis highlighted treatment-associated regression of overall liver fibrosis, as well as marked regression in perisinusoidal fibrosis in patients with either significant fibrosis (F2) or advanced fibrosis (F3) at baseline. A concomitant zonal quantification of the fibrosis and steatosis parameters using qFibrosis and qSteatosis revealed that patients with greater qSteatosis reduction also have the greatest reduction in perisinusoidal fibrosis. This shows that hepatic lipid load reduction, as a result of anti-metabolic treatment, drives fibrosis regression initially in the perisinusoidal regions.

Says Professor Nikolai Naoumov MD, PhD, Advisor in Digital Pathology, Drug Development and Research in Liver Diseases, and principal author of the study, "SHG AI-based platform is able to quantify precisely fibrosis changes while avoiding variations caused by staining, and provides novel insights in treatment-induced fibrosis regression, which are not captured by current staging systems. In addition, quantifying changes in steatosis and hepatocyte ballooning concomitantly in a colocalized spatial environment demonstrated the direct association between improvement in the metabolic activity of NAFLD with fibrosis regression. This SHG approach has demonstrated great potential for clinical trial investigators to understand treatment efficacy and disease mechanisms of NASH."

Adding to this, Professor Arun Sanyal, professor in the Internal Medicine Department, Division of Gastroenterology, Hepatology and Nutrition at Virginia Commonwealth University, and co-author of the study, states: "This work demonstrates that SHG imaging and automated digital analytics can not only assist pathologists evaluate changes in liver histology in a more granular and accurate way but also provides novel insights on the evolution of fibrosis and changes in fibrosis upon initiation of therapy. These have important implications for drug development and the way treatment responses are assessed in phase 2B and 3 trials".

About HistoIndex

Founded in Singapore, HistoIndex is a leading MedTech/Healthcare company that specializes in its proprietary integrated stain-free AI digital pathology platform. Enabled by Second Harmonic Generation (SHG) and Two-Photon Excitation (TPE) along with automated imaging analysis algorithms, the integrated platform accurately quantifies histological features and fine measurements that are critical for the evaluation of therapeutic efficacy in clinical trials. The stain-free AI platform is currently involved in multiple FDA clinical trials for Nonalcoholic Steatohepatitis (NASH). In addition, it has benefitted more than 150 research and academic institutes, CROs and biopharma companies around the world in drug discovery and development efforts for fibrotic diseases and cancers.

References

SOURCE HistoIndex

Read the original post:

HistoIndex Stain-free AI-DP Reveals Treatment-induced Zonal Regression of Fibrosis Colocalized with Reduction in Steatosis & Hepatocyte Ballooning...

Adding More Data Isn’t the Only Way to Improve AI – HBR.org Daily

Posted: July 14, 2022 at 10:28 pm

Sometimes an AI-based system can't decipher the physical world with a sufficient degree of accuracy and the option of just adding more data isn't possible. In many of these cases, however, this deficiency can be addressed by using four techniques to help AI better understand the physical world: synergize AI with scientific laws, augment data with expert human insights, employ devices to explain how AI makes decisions, and use other models to predict behavior.

Artificial intelligence (AI) gets its intelligence by analyzing a given dataset and detecting patterns. It has no concept of the world beyond this dataset, which creates a variety of dangers.

One changed pixel could confuse the AI system into thinking a horse is a frog or, even scarier, cause it to err on a medical diagnosis or a machine operation. Its exclusive reliance on the dataset also introduces a serious security vulnerability: malicious agents can spoof the AI algorithm by introducing minor, nearly undetectable changes in the data. Finally, the AI system does not know what it does not know, and it can make incorrect predictions with a high degree of confidence.

Adding more data cannot always surmount these problems because practical business and technical constraints always limit the amount of data. And processing large datasets requires ever-larger AI models that are outpacing available hardware and growing AI's carbon footprint unsustainably.

We have identified an alternative remedy: connecting data-driven AI with other scientific or human inputs about the application's domain. It is based on our two decades of experience at the University of California's Center for Information Technology Research in the Interest of Society and the Banatao Institute (CITRIS) in working with academics and business executives to implement AI for many applications. There are four ways it can be done.

We can combine available data with relevant laws of physics, chemistry, and biology to leverage the strengths and overcome the weaknesses of each. One example is an ongoing project with Komatsu where we are exploring how to use AI to guide the autonomous, efficient operation of heavy excavation equipment. AI does well in running the machine but not so well in understanding the surrounding environment.

Therefore, to teach the AI algorithm the differences between soft soil, gravel, and hard rock in the terrain being excavated, we used physics-based models that describe the size, distribution, hardness, and water content of the particles.

When available data is limited, human intuition can be used to augment and improve the intelligence of AI. For example, in the field of advanced manufacturing, it is extremely expensive and challenging to develop novel process recipes required to build a new product. Data about novel processes is limited or simply does not exist and generating it would require lots of trial-and-error attempts that may take many months and cost millions of dollars.

A more effective way is to have humans and AI augment and support each other, according to Lam Research, a leading maker of semiconductor equipment that supplies state-of-the-art microelectronics manufacturing facilities. Starting from scratch, highly experienced engineers usually do well in arriving at an approximately correct recipe, while AI is continuously collecting data and learning from those efforts. Once the recipe is in the ballpark, the engineers can enlist AI to support them in fine-tuning it to a precise optimum. Such techniques may provide up to an order of magnitude improvement in efficiency.

In the science fiction novel The Hitchhiker's Guide to the Galaxy, the smartest computer gave 42 as the answer to life, the universe, and everything, prompting many a reader to chuckle. Yet, it is no laughing matter for businesses, because AI often operates as a black box that makes confident recommendations without explaining why. If the way that AI makes decisions is not explainable, it is usually not actionable. A doctor shouldn't make a medical diagnosis and a utility engineer shouldn't shut off a critical piece of infrastructure based on an AI recommendation that they cannot explain intuitively.

For example, we are working on a smart infrastructure application where sensors monitor the integrity of thousands of wind turbines. The AI algorithm analyzing this data may throw a red flag when it detects a pattern of increased temperature or vibration intensity. But what does this mean? Is it just a hot day or a stray gust of high wind? Or does a utility crew need to be rushed out (an expensive operation) immediately?

Our solution: add fiber-optic sensors to measure the actual physical strain in the turbine material. Then, when utility engineers cross-check the AI red flag with the actual strain in the turbine blade, they can determine the true urgency of the problem and choose the safest corrective action.
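The cross-check itself amounts to a simple rule layered on top of the model's output. Here is a hedged sketch with made-up threshold values; the article does not give the actual logic or numbers:

```python
# Illustrative triage rule: an AI anomaly flag alone is not actionable,
# but combined with a physical strain reading it becomes interpretable.
def triage(ai_anomaly_score: float, blade_strain_microstrain: float) -> str:
    STRAIN_LIMIT = 2000.0  # hypothetical microstrain threshold
    SCORE_LIMIT = 0.8      # hypothetical anomaly-score threshold
    if ai_anomaly_score < SCORE_LIMIT:
        return "no action"
    if blade_strain_microstrain >= STRAIN_LIMIT:
        return "dispatch crew"  # model and sensor agree: urgent
    return "monitor"            # flag without physical strain: likely benign

print(triage(0.9, 2400.0))  # dispatch crew
print(triage(0.9, 300.0))   # monitor
```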

Data-driven AI works well within the boundaries of the dataset it has processed, analyzing behavior between actual observations, or interpolation. However, to extrapolate, that is, to predict behavior in operating modes outside the available data, we have to incorporate knowledge of the domain in question. Indeed, this is often the approach taken by many applications that employ digital twins to mirror the operation of a complex system such as a jet engine. A digital twin is a dynamic model that mirrors the exact state of an actual system at all times and uses sensors to keep the model updated in real time.

We used this effectively in our project with Siemens Technology on digital twins for smart buildings. We employed data-driven AI to model and control the normal operation of the building, and to diagnose problems. Then, we judiciously mixed in physics-grounded equations, such as basic thermodynamic equations tracking the heat flow to the air conditioning system and living spaces, to predict the building's behavior in a novel setting. Using this approach, we could predict the building's behavior with different heating or cooling equipment or while operating under unusual weather conditions. This enabled us to try alternate operational modes without endangering critical infrastructure or its users. We found this approach also works well in other applications such as smart manufacturing, construction, and autonomous vehicles ranging from automobiles to spacecraft.
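A hedged sketch of that hybrid pattern, not the Siemens/CITRIS implementation: a lumped thermal-balance equation provides the baseline prediction, and a small data-driven model is fit only to the residuals the physics misses, so the combined model can still extrapolate to unusual conditions.

```python
# Illustrative hybrid (physics + data) predictor with made-up numbers.
import numpy as np

def physics_step(t_in, t_out, heater_kw, k=0.1, g=0.5):
    # One-hour indoor-temperature update from a lumped heat balance:
    # loss proportional to the indoor/outdoor gap plus heater input.
    return t_in + k * (t_out - t_in) + g * heater_kw

rng = np.random.default_rng(1)
t_in = 20 + rng.normal(0, 2, 200)   # observed indoor temps (deg C)
t_out = 5 + rng.normal(0, 5, 200)   # observed outdoor temps (deg C)
heat = rng.uniform(0, 3, 200)       # heater power (kW)
# Synthetic "measured" next-hour temps, including an effect the physics omits.
t_next = physics_step(t_in, t_out, heat) + 0.3 * heat**2 + rng.normal(0, 0.1, 200)

# Fit a small least-squares correction to the physics residuals.
features = np.column_stack([np.ones_like(heat), heat, heat**2])
coef, *_ = np.linalg.lstsq(features, t_next - physics_step(t_in, t_out, heat), rcond=None)

def hybrid_predict(t_in, t_out, heater_kw):
    correction = np.column_stack([np.ones_like(heater_kw), heater_kw, heater_kw**2]) @ coef
    return physics_step(t_in, t_out, heater_kw) + correction

# The physics term carries the prediction outside the observed range,
# e.g. an unusually cold day; the fitted term sharpens it in-range.
print(hybrid_predict(np.array([21.0]), np.array([-10.0]), np.array([2.0])))
```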

As humans, we understand the world around us by using our senses in tandem. Given a steaming cup, we determine instantly that it is tea from its color, smell, and taste. Connoisseurs may go a step further and identify it as a Darjeeling first-flush tea. AI algorithms are trained on, and limited by, a particular dataset, and do not have access to all the senses as we do. An AI algorithm trained only on images of cups of coffee may see this same steaming cup of tea and conclude it is coffee. Worse, it may do so with a high degree of confidence!

Any available dataset will always be incomplete, and processing ever-larger datasets is often not practical or environmentally sustainable. Instead, adding other forms of understanding of the domain in question can help make data-driven AI safer and more efficient and enable it to address challenges that it otherwise could not.

Read more:

Adding More Data Isn't the Only Way to Improve AI - HBR.org Daily

Researcher uses ‘fuzzy’ AI algorithms to aid people with memory loss – University of Toronto

Posted: at 10:28 pm

A new computer algorithm developed by the University of Toronto's Parham Aarabi can store and recall information strategically, just like our brains.

The associate professor in the Edward S. Rogers Sr. department of electrical and computer engineering, in the Faculty of Applied Science & Engineering, has also created an experimental tool that leverages the new algorithm to help people with memory loss.

"Most people think of AI as more robot than human," says Aarabi, whose framework is explored in a paper being presented this week at the IEEE Engineering in Medicine and Biology Society Conference in Glasgow. "I think that needs to change."

In the past, computers have relied on their users to tell them exactly what information to store. But with the rise of artificial intelligence (AI) techniques such as deep learning and neural nets, there has been a move toward fuzzier approaches.

"Ten years ago, computing was all about absolutes," says Aarabi. "CPUs processed and stored memory data in an exact way to make binary decisions. There was no ambiguity."

"Now we want our computers to make approximate conclusions and guess percentages. We want an image processor to tell us, for example, that there's a 10 per cent chance a picture contains a car and a 40 per cent chance that it contains a pedestrian."

Aarabi has extended this same fuzzy approach to storing and retrieving information by copying several properties that help humans determine what to remember and, just as critically, what to forget.

Studies have shown that we tend to prioritize more recent events over less recent ones. We also emphasize memories that are more important to us, and we compress long narratives to their essentials.

"For example, today I remember that I saw my daughter off to school, I made a promise that I'd pay someone back, and I promised that I'd read a research paper," says Aarabi. "But I don't remember every single second of what I experienced."
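One simple way to picture the recency-and-importance weighting described above (an illustrative sketch, not Aarabi's actual algorithm) is a memory store whose retrieval score decays exponentially with age and scales with an importance weight:

```python
# Illustrative "fuzzy" memory: recent and important items surface first.
import time

class FuzzyMemory:
    def __init__(self, half_life_hours=24.0):
        self.half_life = half_life_hours * 3600.0
        self.items = []  # list of (timestamp, importance in [0, 1], text)

    def store(self, text, importance, timestamp=None):
        self.items.append((timestamp or time.time(), importance, text))

    def recall(self, k=3, now=None):
        now = now or time.time()
        def score(item):
            ts, importance, _ = item
            recency = 0.5 ** ((now - ts) / self.half_life)  # exponential decay with age
            return recency * (0.2 + 0.8 * importance)       # blend recency and importance
        return [text for _, _, text in sorted(self.items, key=score, reverse=True)[:k]]

mem = FuzzyMemory()
mem.store("Saw my daughter off to school", importance=0.6)
mem.store("Promised to pay someone back", importance=0.9)
mem.store("Promised to read a research paper", importance=0.7)
print(mem.recall(k=2))  # the two highest-priority memories
```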

The capacity to overlook certain information could supercharge existing models of machine learning.

Today, machine learning algorithms trawl through millions of database entries, looking for patterns that will help them correctly associate a given input with a given output. Only after countless iterations does the algorithm eventually become accurate enough to deal with new problems that it hasn't already seen.

If bio-inspired artificial memory enables these algorithms to give prominence to the most relevant data, they could potentially arrive at meaningful results much more quickly.

The approach could also support tools that process natural language to help people with memory loss keep track of key information.

Aarabi and his team have set up such a tool using a simple email-based interface. It reminds participants of important information based on algorithmic priority and a relevant index of keywords.

"Ultimately, it's geared to people with memory loss," Aarabi says. "It helps them remember things in a way that's very human, very soft, without overwhelming them. Most task management aids are too complicated and not useful in these circumstances."

The demo is free and available for anyone to play with; simply send an email to mem@roya.vc for instructions.

"I've been using it myself," says Aarabi. "The goal is to put the demo in people's hands, whether they're dealing with significant memory degradation or just everyday pressures, and see what feedback we get. The next steps would be to build partnerships in health care to test in a more comprehensive way."

"These days, AI applications are increasingly found in many human-centred fields," says Professor Deepa Kundur, chair of the department of electrical and computer engineering. "Professor Aarabi, by researching ways to better integrate AI with these softer areas, is looking to ensure that the potential of AI is fully realized in our society."

Aarabi says that this algorithm is just the beginning.

Biologically inspired memory may very well take AI a step closer to human-level capabilities.

More here:

Researcher uses 'fuzzy' AI algorithms to aid people with memory loss - University of Toronto
