
Category Archives: Ai

How The Overlap Between Artificial Intelligence And Stem Cell Research Is Producing Exciting Results – Forbes

Posted: November 21, 2021 at 10:17 pm

Passage Of California Stem Cell Proposition Boosts Research

For more than a decade, stem cell research and regenerative medicine have been the rage of the healthcare industry, a delicate field that has seen steady advances over the past few years.

The promise of regenerative medicine is simple but profound: that one day medical experts will be able to diagnose a problem, remove some of our own cells, called stem cells, and use them to grow a cure for our ailment. Because the therapy is grown from our own cells, it would be highly personalized, attuned to our genes and systems.

The terminologies often used in this field of medicine can get a bit fuzzy for the uninitiated, so in this article, I have relied heavily on the insights of Christian Drapeau, a neurophysiologist and stem cell expert.

Drapeau was one of the first voices, in the early 2000s, to describe stem cells as the body's repair system. Since then, he has gone on to discover the first stem cell mobilizer, and his studies and research delivered the proof of concept that AFA (Aphanizomenon flos-aquae) extract was capable of enhancing repair from muscle injury.

Christian Drapeau is also the founder of Kalyagen, a stem cell research-based company and the manufacturer of Stemregen, a product that combines several of the most effective stem cell mobilizers Drapeau has discovered into a single treatment for a range of conditions.

How exactly do stem cell-based treatments work? And how are they delivering on their promise of boosting our ability to regenerate and self-heal?

Drapeau explains the concept for us:

"Stem cells are mother cells, or blank cells, produced by the bone marrow. As they are released from the bone marrow, stem cells can travel to any organ and tissue of the body, where they can transform into cells of that tissue. Stem cells constitute the repair system of the body."

The discovery of this function has led scientists on a long journey to work out how stem cells can be used to cure diseases, which are essentially caused by cellular loss. Conditions like diabetes and age-related degenerative diseases are all associated with the loss of a type of cell or of a cellular function.

However, what Drapeau's research has unearthed over the last few decades is that there are naturally occurring substances with a demonstrated ability to induce the release of stem cells from the bone marrow. These stem cells then enter the bloodstream, from where they can travel to sites of cell deficiency or injury in the body to aid healing and regeneration. This process is referred to as Endogenous Stem Cell Mobilization (ESCM).

"Stemregen is our most potent creation so far," explains Drapeau, "and it has shown excellent results with the treatment of problems in the endocrine system, muscles, kidneys, and respiratory system, and even with issues of erectile dysfunction."

Despite the stunning advancements made so far, a question that both Drapeau and I share is how this innovation can be merged with another exciting one: AI.

Is it even a possibility? Drapeau, an AI enthusiast, explains that AI has already been a life-saver in stem cell research and has even more potential.

On closer observation, there are a few areas in which AI has greatly benefited stem cell research and regenerative medicine.

One obstacle that scientists have consistently faced in delivering the full promise of regenerative medicine is the complexity of the available data. Cells are so different from each other that scientists can struggle to predict what the cells will do in any given therapeutic scenario; there are millions of ways a medical therapy could go wrong.

Most AI experts believe that in almost any field, AI can provide a solution whenever there is a problem with data analysis and predictive analysis.

Carl Simon, a biologist at the National Institute of Standards and Technology (NIST), and Nicholas Schaub recently tested this hypothesis when they applied a deep neural network (DNN), an AI program, to data they had collected in their experiments on eye cells. Their research revolved around causes of, and solutions for, age-related eye degeneration. The results were stunning: of the 36 predictions about cell changes the AI was asked to make, only one was incorrect.

Their program learned to predict cell function in different scenarios and settings from annotated images of cells. It could soon rapidly analyze images of the lab-grown eye tissues and classify them as good or bad. This discovery has raised optimism in the stem cell research space.
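NIST's actual model and data are not described in this article, but the core mechanic, learning a good/bad call from labeled examples, can be sketched in miniature with a single logistic unit. Everything below (the synthetic "images", the intensity difference, the training loop) is invented for illustration and is far simpler than a real DNN:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for annotated cell images: "good" tissue images are
# simulated with higher mean pixel intensity than "bad" ones. Purely
# illustrative -- real cell images are far subtler than this.
n, pixels = 200, 64
good = rng.normal(0.7, 0.1, size=(n, pixels))
bad = rng.normal(0.3, 0.1, size=(n, pixels))
X = np.vstack([good, bad])
y = np.array([1] * n + [0] * n)

def sigmoid(z):
    return 1 / (1 + np.exp(-np.clip(z, -30, 30)))

# One logistic unit trained by gradient descent -- the smallest possible
# "network" that learns a good/bad classification from flattened images.
w, b = np.zeros(pixels), 0.0
lr = 0.5
for _ in range(500):
    p = sigmoid(X @ w + b)               # predicted probability of "good"
    w -= lr * X.T @ (p - y) / len(y)     # gradient of cross-entropy loss
    b -= lr * np.mean(p - y)

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")
```

A real DNN replaces the single unit with stacked layers, but the loop is the same: train on annotated images, then classify new tissue images automatically.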

Drapeau explains why this is so exciting:

"When we talk about stem cells in general, we say 'stem cells' as if they were all one thing, but there are many different types of stem cells. For example, hair follicle and dental pulp stem cells contain neuronal markers and can easily transform into neurons to repair the brain. Furthermore, the tissue undergoing repair must signal to attract stem cells and must secrete compounds to stimulate stem cell function. A complex analysis of the tissue that needs repair and the conditions of that tissue using AI, in any specific individual, will help select the right type of stem cells and the best cells in that stem cell population, along with the accompanying treatment to optimize stem cell-based tissue repair."

Christian Drapeau

In a study published in February of this year in Stem Cells, researchers from Tokyo Medical and Dental University (TMDU) reported that their AI system, called DeepACT, had successfully identified healthy, productive skin stem cells with the same accuracy as a human. This discovery further strengthens Drapeau's argument about the potential of AI in this field.

This experiment owes its success to AI's machine learning capabilities, and it is expected that deep learning can also be beneficially introduced into regenerative medicine. There are many futuristic projections for these possibilities, but many of them are not as far-fetched as they may first seem.

Researchers believe that AI can help fast-track the translation of regenerative medicine into clinical practice; the technology can be used to predict cell behavior in different environments. Therefore, hypothetically, it can be used to simulate the human environment. This means that researchers can gain in-depth information more rapidly.

Perhaps the most daring expectation is the possibility of using AI to pioneer the 3D printing of organs. In a world where organ shortage is a harsh reality, this would certainly come in handy. AI algorithms can be utilized to identify the best materials for artificial organs, understand the anatomic challenges during treatment, and design the organ.

Can stem cells actually be used along with other biological materials to grow functional 3D-printed organs? If so, pacemakers may one day give way to 3D-printed hearts. A 3D-printed heart valve has already become a reality in India, making this an even more imminent possibility.

While all of these possibilities excite Drapeau, he is confident that AI's capabilities in data analysis and prediction, which are already largely in use, will go down as its most beneficial contribution to stem cell research:

"It was already shown that stem cells laid on the connective tissue of the heart, the soft skeleton of the heart, can lead the entire formation of a new heart. Stem cells have this enormous regenerative potential. AI can take this to another level by helping establish the conditions in which this type of regeneration can be orchestrated inside the body. But we have to be grateful for what we already have. Over the last 20 years, I have studied endogenous stem cell mobilization, and the fact that we have such amazing results with Stemregen today is a testament that regenerative medicine is already a success."

As AI continues to scale over industry boundaries, we can only sit back and hope it delivers on its full potential promise. Who knows? Perhaps AI really can change the world.


From oximeters to AI, where bias in medical devices may lurk – The Guardian

Posted: at 10:17 pm

The UK health secretary, Sajid Javid, has announced a review into systemic racism and gender bias in medical devices in response to concerns it could contribute to poorer outcomes for women and people of colour.

Writing in the Sunday Times, Javid said: "It is easy to look at a machine and assume that everyone's getting the same experience. But technologies are created and developed by people, and so bias, however inadvertent, can be an issue here too."

We take a look at some of the gadgets used in healthcare where concerns over racial bias have been raised.

Oximeters estimate the amount of oxygen in a person's blood, and are a crucial tool in determining which Covid patients may need hospital care, not least because some patients can have dangerously low levels of oxygen without realising it.

Concerns have been raised, however, that the devices work less well for patients with darker skin. NHS England and the Medicines and Healthcare products Regulatory Agency (MHRA) say pulse oximeters can overestimate the amount of oxygen in the blood.

Javid told the Guardian last month that the devices were "designed for caucasians". "As a result, you were less likely to end up on oxygen if you were black or brown, because the reading was just wrong," he said.

Experts believe the inaccuracies could be one of the reasons why death rates have been higher among minority ethnic people, although other factors may also play a role, such as working in jobs that have greater exposure to others.

Medical-grade respirators are crucial to help keep healthcare workers safe from Covid because they offer protection to the wearer against both large and small particles that others exhale.

In order to offer the greatest protection, however, filtering face piece (FFP) masks must fit properly and research has shown they do not fit as well on people from some ethnic backgrounds.

"Adequate viral protection can only be provided by respirators that properly fit the wearer's facial characteristics. Initial fit pass rates [the rate at which they pass a test on how well they fit] vary between 40% and 90% and are especially low in female and in Asian healthcare workers," one review published in 2020 notes.

Another, published in September, found that studies on the fit of such PPE largely focused on "Caucasian or single ethnic populations". "BAME people remain under-represented, limiting comparisons between ethnic groups," it said.

Spirometers measure lung capacity, but experts have raised concerns that there are racial biases in the interpretation of data gathered from such gadgets.

Writing in the journal Science, Dr Achuta Kadambi, an electrical engineer and computer scientist at the University of California, Los Angeles, said Black or Asian people are assumed to have lower lung capacity than white people, a belief he noted may be based on inaccuracies in earlier studies. As a result, correction factors are applied to the interpretation of spirometer data, a situation that can affect the order in which patients are treated.

"For example, before correction, a Black person's lung capacity might be measured to be lower than the lung capacity of a white person," Kadambi writes.

"After correction to a smaller baseline lung capacity, treatment plans would prioritise the white person, because it is expected that a Black person should have lower lung capacity, and so their capacity must be much lower than that of a white person before their reduction is considered a priority."
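The arithmetic behind that re-ordering is easy to make concrete. The sketch below uses a hypothetical 0.85 correction factor, a stand-in for the downward adjustments historically built into some spirometer software, not a value taken from any specific device:

```python
def percent_predicted(measured_l, baseline_l, correction=1.0):
    """Measured lung capacity as a percentage of a (possibly
    race-'corrected') predicted baseline, both in litres."""
    return 100 * measured_l / (baseline_l * correction)

# Two patients with identical measurements and identical uncorrected baselines:
measured, baseline = 3.0, 5.0
white_pct = percent_predicted(measured, baseline)                   # 60.0%
black_pct = percent_predicted(measured, baseline, correction=0.85)  # ~70.6%

# The corrected reading sits closer to "normal", so the same physical
# impairment appears less severe and can be deprioritised for treatment.
print(round(white_pct, 1), round(black_pct, 1))
```

Same lungs, same measurement, different triage position: that is the bias Kadambi describes, expressed as a single multiplier.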

Another area Kadambi said may be affected by racial bias is remote plethysmography, a technology in which pulse rates are measured by looking at changes in skin colour captured on video. Kadambi said such visual cues may be biased by subsurface melanin content; in other words, by skin colour.

AI is increasingly being developed for applications in healthcare, including to aid professionals in diagnosing conditions. There are concerns, however, that biases in data used to develop such systems means they risk being less accurate for people of colour.

Such concerns were recently raised in relation to AI systems for diagnosing skin cancers. Researchers revealed that few freely available image databases that could be used to develop such AI are labelled with ethnicity or skin type. Of those that did have such information recorded, only a handful were of people recorded as having dark brown or black skin.

It is an issue Javid has acknowledged. Announcing new funding last month for AI projects to tackle racial inequalities in healthcare, such as the detection of diabetic retinopathy, he noted that one area of focus would be the development of standards to make sure datasets used in developing AI systems are diverse and inclusive.

"If we only train our AI using mostly data from white patients, it cannot help our population as a whole. We need to make sure the data we collect is representative of our nation," he said.


Google’s Australia investment could be a big boost for the nation’s A.I. scene – CNBC

Posted: at 10:17 pm


San Francisco, London, Montreal, Paris and New York have all developed a reputation for being hotbeds of artificial intelligence research over the years.

Sydney and Melbourne, Australia's two biggest cities, have not. But that could be about to change.

Google announced Monday that it plans to set up a new Google Research Australia lab in Sydney as part of a 1 billion Australian dollar ($729 million) investment in Australia. The lab will research everything from AI to quantum computing.

The move has been welcomed by AI researchers in Australia who told CNBC that there have been limited opportunities for AI gurus in the country over the years.

Stephen Merity, an Australian AI researcher who now lives in the San Francisco Bay Area, told CNBC that Google should have launched Google Research Australia years ago, adding that there are many well-known luminaries in the field from Australia.

"They almost all had to leave Australia to get opportunities," he said. "Those who stayed were under-utilized, including those at Google Sydney."

Google works on a handful of projects in Sydney but the scope of the search giant's research has been relatively limited compared to the likes of Mountain View, where Google is headquartered, London, Zurich and Tokyo.

Jonathan Kummerfeld, a senior lecturer at the University of Sydney, told CNBC that none of the big technology companies had labs in Australia until very recently.

Amazon opened a lab in Adelaide, while Oracle and IBM have set up AI labs in Melbourne. The likes of Facebook and Twitter have offices in Australia but they don't have significant teams of AI researchers there.

"The more the industrial research ecosystem in Australia grows, the more other companies will consider opening offices," said Kummerfeld.

"University departments have been robustly growing over the last decade as enrollments in computer science have gone up, and more faculty means more postdocs and more PhD students, but that's a relatively small number of permanent jobs in the scheme of things," he added.

Google's investment will also be used to expand Google's cloud computing infrastructure in Australia and fund partnerships with local universities and other organizations.

The investment package, dubbed the "Digital Future Initiative", is expected to create 6,000 jobs and support an additional 28,000. It was launched at an event attended by Australian Prime Minister Scott Morrison, at which Google officially opened a new Australia headquarters in Sydney.

"The announcement by Google is a $1 billion vote of confidence in Australia's digital economy strategy," Morrison said.

The investment comes after U.S. tech giants were criticized by Australian lawmakers earlier this year for failing to pay local news publishers for content that gets shared on their platforms. In response, Google Australia's Managing Director Mel Silva threatened to block Google Search in the country.

Australia then became the first country in the world to make it a legal requirement that large technology firms including Google and Facebook, now Meta, pay for news content on their platforms.


Solving entertainment's globalization problem with AI and ML – TechCrunch

Posted: at 10:17 pm

Teresa Phillips, Contributor

The recent controversy surrounding the mistranslations found in the Netflix hit Squid Game and other films highlights the challenges technology faces when releasing content that bridges languages and cultures internationally.

Every year across the global media and entertainment industry, tens of thousands of movies and TV episodes exhibited on hundreds of streaming platforms are released with the hope of finding an audience among 7.2 billion people living in nearly 200 countries. No audience is fluent in the roughly 7,000 recognized languages. If the goal is to release the content internationally, subtitles and audio dubs must be prepared for global distribution.

Known in the industry as localization, creating subs and dubs has, for decades, been a human-centered process, where someone with a thorough understanding of another language sits in a room, reads a transcript of the screen dialogue, watches the original language content (if available) and translates it into an audio dub script. It is not uncommon for this step to take several weeks per language from start to finish.

Once the translations are complete, the script is performed by voice actors who make every effort to match the action and lip movements as closely as possible. Audio dubs follow the final-cut dialogue, and subtitles are then generated from each audio dub. Any compromise made in the language translation may then be subjected to further compromise in the production of subtitles. It's easy to see where mistranslations or changes to a story can occur.

The most conscientious localization process does include some level of cultural awareness because some words, actions or contexts are not universally translatable. For this purpose, the director of the 2019 Oscar-winning film Parasite, Bong Joon-ho, sent detailed notes to his translation team before they began work. Bong and others have pointed out that limitations of time, available screen space for subtitles, and the need for cultural understanding further complicate the process. Still, when done well, they contribute to higher levels of enjoyment of the film.

The exponential growth of distribution platforms and the increasing, continuous flow of fresh content are pushing those involved in the localization process to seek new ways to speed production and increase translation accuracy. Artificial intelligence (AI) and machine learning (ML) are highly anticipated answers to this problem, but neither has reached the point of replacing the human localization component. Directors of titles such as Squid Game or Parasite are not yet ready to make that leap. Here's why.

First, literal translation is incapable of catching 100% of the story's linguistic, cultural or contextual nuance included in the script, inflection or action. AI companies themselves admit to these limitations, commonly referring to machine-based translations as more like dictionaries than translators, and remind us that computers are only capable of doing what we teach them, while acknowledging that they lack understanding.

For example, the English title of the first episode of Squid Game is "Red Light, Green Light," after the children's game played in the episode. The original Korean title, "Mugunghwa Kkoch-I Pideon Nal," directly translates as "The Day the Mugunghwa Bloomed," which has nothing to do with the game they're playing.

In Korean culture, the title symbolizes new beginnings, which is the game's promise to the winning protagonist. "Red Light, Green Light" is related to the episode, but it misses the broader cultural reference of a promised fresh start for people down on their luck, a significant theme of the series. Some may believe that naming the episode after the game played, because the cultural metaphor of the original title is unknown to the translators, is not a big deal, but it is.
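The "dictionary, not translator" distinction is easy to demonstrate: a word-for-word lookup recovers the literal title yet has nowhere to store the cultural frame. The mini-lexicon below is a romanized, simplified stand-in invented for illustration, not a real Korean lexicon:

```python
# A word-for-word "dictionary" translator -- the limitation described above.
lexicon = {
    "mugunghwa": "mugunghwa",   # the rose of Sharon, Korea's national flower
    "kkochi": "flower",
    "pideon": "bloomed",
    "nal": "day",
}

def literal_translate(sentence):
    """Replace each word with its dictionary entry; flag unknown words."""
    return " ".join(lexicon.get(word, f"[{word}?]") for word in sentence.split())

literal = literal_translate("mugunghwa kkochi pideon nal")
print(literal)
# The lookup gets every word "right" yet says nothing about new
# beginnings -- that meaning lives in the culture, not in the lexicon.
```

This is the gap human localizers (and, aspirationally, culturally-aware AI) must fill: the symbolic weight of the mugunghwa exists outside any word table.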

How can we expect to train machines to recognize these differences and apply them autonomously when humans don't make the connection and apply them themselves?

It's one thing for a computer to translate Korean into English. It is another altogether for it to have knowledge of relationship differences like those in Squid Game, between immigrants and natives, strangers and family members, employees and bosses, and how those relationships impact the story. Programming cultural understanding and emotional recognition into AI is challenging enough, especially if those emotions are displayed without words, such as through a look on someone's face. Even then, it is hard to predict an emotional facial response that may change with culture.

AI is still a work in progress as it relates to explainability, interpretability and algorithmic bias. The idea that machines will self-train is far-fetched given where the industry stands in executing AI/ML. For a content-heavy, creative industry like media and entertainment, context is everything; there is the content creator's expression of context, and then there is the audience's perception of it.

Moreover, with respect to global distribution, context equals culture. A digital nirvana is achieved when a system can orchestrate and predict the audio, video and text, in addition to the multiple layers of cultural nuance at play at any given frame, scene, theme and genre level. At the core, it all starts with good-quality training data: essentially, taking a data-centric approach versus a model-centric one.

Recent reports indicate Facebook catches only 3% to 5% of problematic content on its platform. Even with millions of dollars available for development, programming AI to understand context and intent is very hard to do. Fully autonomous translation solutions are some way off, but that doesn't mean AI/ML cannot reduce the workload today. It can.

Through analysis of millions of films and TV shows combined with the cultural knowledge of individuals from nearly 200 countries, a two-step human and AI/ML process can provide the detailed insights needed to identify content that any country or culture may find objectionable. In culturalization, this cultural roadmap is then used in the localization process to ensure story continuity, avoid cultural missteps and obtain global age ratings all of which reduce post-production time and costs without regulatory risk.

Audiences today have more content choices than ever before. Winning in the global marketplace means content creators have to pay more attention to their audience, not just at home but in international markets.

The fastest path to success for content creators and streaming platforms is working with companies that understand local audiences and what matters to them so their content is not lost in translation.


AI Is Here; 2 Stocks That Stand to Benefit – Yahoo Finance

Posted: at 10:17 pm

In today's world, digital tech rules, and that means artificial intelligence is vital. AI powers our apps, it makes our devices smart, and it underpins our computing technology. For investors, this makes AI-related stocks a prime target.

So, how to get in? Are we looking for robots, for thinking machines that can walk and talk? We're not there yet, no matter what the science fiction movies show us on the big screen. But we do have a host of lower-key choices: applications of AI to the daily activities of our ordinary world. From paying our bills to parsing our data, there's very little we do that doesn't interact with an AI system, somewhere.

Bearing this in mind, we've used the TipRanks platform to pinpoint two stocks that are tied to AI, but in quite different ways. Moreover, both tickers earn Moderate or Strong Buy consensus ratings from the analyst community and boast considerable upside potential. Here are the details.

Opportunity Financial (OPFI)

The first stock well look at is Opportunity Financial, a fintech, or financial technology, company in the consumer credit sector. This company lives up to its name, providing credit access to the median US consumer. This target market is employed, has a bank account, and earns approximately $50,000 annually but lacks a strong credit score, and so has difficulty entering the traditional banking/credit system. OppFi fills that gap, using AI to quickly and accurately parse raw consumer data.

The company provides customers with easy access to credit applications, through a digital process via smartphone. The decision process is fair and transparent, and customer service is held to a high standard. OppFis services include loans, payroll-linked credit, and a same-day access credit card.

There is a strong market for OppFi's credit access business. Approximately 58% of American consumers have less than $1,000 in savings, and some 115 million people are living paycheck to paycheck. These problems are exacerbated by lack of access to liquidity; 51% of consumers reported being denied a loan in the last 12 months, while 72% were denied a line of credit.


OppFi uses AI tech to power its app, mitigating the inherent risks of providing credit to this target market.

OppFi's numbers show the need for its services in the US economy. The company reported a 25% year-over-year increase in net loan originations for 3Q21, to $165 million. Top-line revenue came in at $92 million, a 47% year-over-year gain. EPS was flat from Q2, at 21 cents per share. This was OppFi's second quarterly report as a public company; the firm entered the stock markets last summer through a SPAC merger.

ThinkEquity analyst Ashok Kumar sees OppFi's unconventional approach as a key point, writing: "OppFi's credit decisioning technology is differentiated and unique. The company's dataset includes 8 billion data points. The company has received over 11 million applications and facilitated over 1.8 million loans. The company is continuously improving the platform through AI machine learning to facilitate more access while maintaining loss rates even during periods of high growth."

"We are forecasting net revenues to grow from $219 million in fiscal 2020 to $318 million in fiscal year 2022. The company continues to make investments to support business growth, including developing new loan products, enhancing AI models, improving operating infrastructure, or acquiring complementary businesses and technologies," the analyst added.

All of this has been factored into Kumar's Buy rating on OPFI. The analyst gives the stock a $12 price target, suggesting ~90% growth over the next 12 months.

It's clear that Wall Street agrees with Kumar's views. The stock has 5 recent reviews, including 4 Buys and 1 Hold. The shares are priced at $6.30, and the $10.20 average price target suggests a one-year upside potential of 62%. (See OPFI stock analysis on TipRanks)
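The upside percentages quoted in this piece all come from one formula: the gap between a price target and the current price, as a fraction of the current price. A quick check against the article's own OPFI figures:

```python
def implied_upside(current_price, price_target):
    """Percent gain if the stock moves from the current price to the target."""
    return 100 * (price_target - current_price) / current_price

# Figures quoted for OPFI in the article:
avg_target = implied_upside(6.30, 10.20)    # average price target -> ~62%
kumar_target = implied_upside(6.30, 12.00)  # Kumar's $12 target   -> ~90%
print(round(avg_target), round(kumar_target))
```

Both the ~62% consensus figure and Kumar's ~90% figure fall out of the same calculation.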

GigCapital4 (GIG)

We'll continue our look at AI with a SPAC. GigCapital4 was formed specifically to target a tech firm, and it is currently moving to combine with BigBear.ai. GigCapital4 filed its proxy statement with the SEC earlier this month, confirming its intent to merge with BigBear.ai, and the company has a special stockholder meeting scheduled for December 3 to vote on the move.

Completion of the transaction will take BigBear.ai public on the New York Stock Exchange under the ticker BBAI. The move is expected to bring some $330 million in new capital to the target company when it closes, and will create a combined entity with an estimated enterprise value of $1.57 billion.

As for BigBear.ai, GigCapital4's prospective partner, that company is an AI tech firm with roots in the national security and defense industry. The company works in data-driven decision making and advanced analytics, turning complex raw data into a basis for human decision making. BigBear.ai offers an end-to-end data analytics platform that integrates both AI and machine learning (ML), is scalable, and fits into existing technologies used by both public and private sector customers.

BigBear.ai reported revenue of $40.2 million in 3Q21, and is projected to reach $182 million in total revenue by the end of this year. The company added new contract awards totaling $150 million during Q3, and by the end of the quarter had a total backlog of $485 million.

5-star analyst Michael Latimore, covering this coming SPAC combination for Northland Securities, writes of BigBear.ai: "BigBear.ai has solidified a leading position in AI among government agencies and is expanding rapidly into the commercial sector. Core company expertise is improving data sources, thus producing better insights and predictions. This has helped the company achieve a 93% win rate."

"AI is now maturing enough to be applicable to multiple use cases and industry verticals in our view. The overall AI software and services market is forecast to grow at a 40% CAGR, and within the US government by 50% this year. AI is often core to digital transformation, which is accelerating," Latimore continued.

Based on all of the above factors, Latimore rates GigCapital4 shares a Buy and set a $13 price target. Apparently, the analyst believes share prices could surge ~30% over the next twelve months. (See GIG stock analysis on TipRanks)

SPACs don't always get a lot of analyst attention. Latimore is the only bull in the picture right now, with the stock displaying a Moderate Buy analyst consensus.

To find good ideas for AI stocks trading at attractive valuations, visit TipRanks' Best Stocks to Buy, a newly launched tool that unites all of TipRanks' equity insights.

Disclaimer: The opinions expressed in this article are solely those of the featured analysts. The content is intended to be used for informational purposes only. It is very important to do your own analysis before making any investment.


Darktrace aims to expand into proactive security AI by end of year – VentureBeat

Posted: at 10:17 pm


Darktrace plans to expand its AI-powered security offerings to include attack prevention by the end of 2021, the company told VentureBeat.

On Tuesday, executives from the company described plans for upcoming product updates that will expand the Darktrace portfolio to include proactive security AI capabilities, joining the company's detection and response technologies.

"The upcoming launch of prevent capabilities will extend Darktrace into the offensive area for the first time ever," said Nicole Eagan, chief strategy officer and AI officer at Darktrace, while speaking at the virtual Gartner Security & Risk Management Summit Americas conference on Tuesday.

In a statement provided to VentureBeat, Eagan said that "development of this breakthrough innovation known as our prevent capability is on track, and we expect this to be released to early adopters by the end of this calendar year."

Founded in 2013, the Cambridge, U.K.-based firm went public in April and now has a market capitalization of $4.25 billion.

While Darktrace is a pioneer in the realm of security AI with its self-learning technology for detecting and responding to cyber threats, the company is now part of a fast-growing field of companies that are turning to AI and machine learning to counter increasingly sophisticated cyber threats.

Startups getting major traction in the space include Securiti, Vectra AI, and Salt Security, while cybersecurity giants such as Fortinet, Palo Alto Networks, and Microsoft have invested heavily into AI-based security. Today, for instance, Palo Alto Networks unveiled a cloud security platform that taps ML and AI to enable many of its new capabilities, such as improved data loss prevention.

Alongside its growth, Darktrace has also demonstrated the potential for AI-powered security with responses to high-profile cyber incidents, such as an incident this summer at the Olympic Games in Tokyo.

There, Darktrace identified a malicious Raspberry Pi IoT device that an intruder had planted into the office of a national sporting body directly involved in the Olympics. The company's technology detected the device port scanning nearby devices, blocked the connections, and supplied human analysts with insights into the scanning activity so they could investigate further.

But even with outcomes like that, there is much more that Darktrace's security AI technology can do, company executives said during the conference Tuesday. "During all the time that you aren't actually under attack, a customer could be using the Darktrace technology in order to prevent future attacks," Eagan said.

The company's self-learning AI has an immense amount of insights from a customer's data, she said. "We could use this data to help you move from a reactive state to a proactive, and even an adaptive, state."

Specifically, Darktrace is looking at capabilities that include attack path modeling, which in the past has typically been a human-centric capability, said Max Heinemeyer, director of threat hunting at Darktrace, during the conference session.

With the self-learning AI technology, Darktrace knows a customer's digital estate inside and out, he said. The technology knows what type of data is being accessed, how it's being accessed, what types of emails are being sent, what variety of internet-facing systems a customer has, and whether there is shadow IT in the environment, Heinemeyer said. This could provide customers with potential attack paths that they otherwise would never have been able to figure out, he said.

The Darktrace system could proactively tell a customer, "This is your core crown jewel, based on what we see, and it's actually just two hops from this new [employee] to one of your IT administrators to compromise that," Heinemeyer said. "And that could be one of thousands of possible attack pathways. So we can really have an impact in telling you where your risks lie, and where your most vulnerable paths are, without having to predefine everything and try to tell the system what your environment looks like. That situational awareness, that context, comes with the self-learning AI."

In this scenario, Darktrace would be able to then feed that knowledge back into the detection and response side of the product, "wrapping a safety blanket around these critical assets," he said.
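At its core, the "two hops" example describes a shortest-path search over a learned access graph. The sketch below illustrates that idea only; the node names, edges, and the breadth-first-search approach are invented for illustration and are not Darktrace's actual implementation.

```python
from collections import deque

# Illustrative access graph: an edge means "can reach / can compromise".
# Node names and edges are invented for this sketch, not real telemetry.
ACCESS = {
    "new_employee": ["it_admin"],    # e.g. shared credentials or phishing reach
    "it_admin": ["crown_jewel_db"],  # admin accounts reach the critical asset
    "contractor": ["file_server"],
    "file_server": ["it_admin"],
}

def shortest_attack_path(start, target, graph):
    """Breadth-first search: the fewest-hop path from start to target, or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# The "two hops" scenario: new employee -> IT admin -> crown jewel.
path = shortest_attack_path("new_employee", "crown_jewel_db", ACCESS)
hops = len(path) - 1  # 2
```

Enumerating such paths over a graph learned from traffic and permissions, rather than one predefined by hand, is the gist of what the executives describe.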

Other prevent capabilities in development at Darktrace include AI-powered red teaming to automatically test security controls, company executives said.

Eventually, the goal is for Darktrace's expanded security AI offerings to form a "continuous AI loop that's always improving your overall cyber posture," Eagan said.

The plan even further down the road is to bring AI-driven recovery capabilities after an attack, she said.

"We feel that we're very well positioned to be able to actually help in that recovery," Eagan said. "Our vision is really to be able to help you do the cleanup very quickly and bring the organization back to its normal state of business operations."

Ultimately, she said, Darktrace sees each of its AI systems reinforcing the other, minimizing any impacts of any breach or attack in real time, and allowing the AI to "preemptively lower your risk."

First AI white paper calls for major measures and investment in artificial intelligence research – New Zealand Herald

Posted: at 10:17 pm

Business

21 Nov, 2021 11:39 PM | 3 minutes to read

White paper suggests need to focus AI strategy and resources on NZ economy's vital industries. Photo / 123RF

If New Zealand does not invest in artificial intelligence research, its AI capabilities will amount to no more than efficient software running in the cloud of large overseas companies, creating risk for the country's technological independence and data sovereignty.

This is a conclusion of the first white paper issued by New Zealand's Artificial Intelligence Researchers Association, which says our universities and research institutes have very strong AI research with "huge breadth and potential".

"It is imperative to create and invest in an AI ecosystem where industry and research organisations can work together more closely for the benefit of Aotearoa New Zealand," said the paper.

AI was profoundly changing how people live and work, and its cumulative impact was likely to be comparable to transformative technologies such as electricity or the internet.

"As a result it is imperative that we take a strategic approach to realising the potential benefits offered by AI and to protecting people against the potential risks," the paper said.

It was important to invest in AI which was imbued with characteristics and values important to New Zealand.

"Otherwise we risk being relegated to users of overseas technologies developed by countries with different values."

Following the framework designed by the World Economic Forum, the authors discuss the AI capabilities of New Zealand and propose recommendations on how to establish this country "as an exemplar of excellence and trust in AI worldwide".

"Our vision is that by 2030, Aotearoa New Zealand will have a community of cutting edge companies producing and exporting AI technologies, supported by a strong network of researchers involved in high-level fundamental and applied research ...

"The labour force will be highly qualified, and Aotearoa New Zealand will be at the forefront of equitable AI education for diverse stakeholders. The realisation of this vision requires investment as well as concerted effort."

In the area of scientific research, the paper proposes increasing funding for public AI research with the development of new research centres, hubs and programmes in basic and applied AI research.

Other recommendations include doubling the number of researchers, lecturers, associate professors and professors in AI by 2024, and doubling that number again by 2030.

To develop AI talent, the paper recommends offering world-leading master's programmes in AI, onsite and online. Other recommendations in this area include doubling the number of PhD and master's-by-research students by 2025 and again by 2030, and promoting industrial PhDs.

To industrialise AI technologies, the paper recommends creating programmes to encourage private sector adoption of AI, with investments in strategic sectors such as primary industry, climate change, health, and high-value manufacturing.

The paper said AI needed good infrastructure to be successful, and recommends an effective national data infrastructure with open data partnerships and datasets.

"It is important to consider the strategic implications of AI for Aotearoa New Zealand. If we do not invest in AI research and adoption, we lose the ability to compete effectively with dominant platform firms domiciled in large markets such as the US, China and the EU."

New Zealand would risk losing its ability to tailor AI to its local needs, priorities and ethical standards.

The Artificial Intelligence Researchers Association (AIRA) is a not-for-profit representative body established to support the production and dissemination of artificial intelligence research.

Senior academics from Waikato, Auckland, Canterbury, Victoria, Massey and Lincoln universities, along with Plant & Food Research, were discussion partners in the paper.

AI can’t tell if you’re lying anyone who says otherwise is selling something – TNW

Posted: at 10:17 pm

Another day, another problematic AI study. Today's snake oil special comes via Tel Aviv University, where a team of researchers has unveiled a so-called lie-detection system.

Let's be really clear right up front: AI can't do anything a person, given an equivalent amount of time to work on the problem, couldn't do themselves. And no human can tell if any given human is lying. Full stop.

The simple fact of the matter is that some of us can tell when some people are lying some of the time. Nobody can tell when anybody is lying all of the time.

The university makes the following claim via press release:

"Researchers at Tel Aviv University detected 73% of the lies told by trial participants based on the contraction of their facial muscles, achieving a higher rate of detection than any known method."

That's a really weird statement. The idea that 73% accuracy at detecting lies is indicative of a particular paradigm's success is arguable at best.

Base luck gives any system capable of choice a 50/50 chance. And, traditionally, that's about how well humans perform at guessing lies. Interestingly, they perform much better at guessing truths. Some studies claim humans achieve about the same accuracy at determining truth statements as the Tel Aviv team's lie-detection system does at determining truthfulness.
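One way to put such accuracy claims in context is to ask how often a given hit rate would occur under pure 50/50 guessing. The binomial calculation below is illustrative only; the trial counts are hypothetical, not the paper's actual per-condition numbers.

```python
from math import comb

# Tail probability of a binomial: the chance of k or more correct guesses
# in n independent 50/50 trials. Trial counts below are hypothetical
# illustrations, not the paper's actual per-condition numbers.

def p_at_least(k: int, n: int, p: float = 0.5) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 75% correct over just 20 trials still happens by luck ~2% of the time,
# while 73% over 200 trials would be essentially impossible by chance alone.
p_small = p_at_least(15, 20)    # ~0.021
p_large = p_at_least(146, 200)  # far below any conventional threshold
```

The point is that a headline percentage means little without the number of trials behind it.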

The Tel Aviv University team's paper even mentions that polygraphs aren't admissible in courts because they're unreliable. But they fail to point out that polygraph devices (which have been around since 1921) beat their own system in so-called accuracy: polygraphs average about an 80% to 90% accuracy rate in studies.

But let's take a deeper look at the Tel Aviv team's study anyway. The team started with 48 participants, 35 of whom were identified as female. Six participants were cut because of technical issues, two got dropped for never lying, and one participated in only 40 out of 80 trials when monetary incentives were not presented.

So, the data for this study was generated from two sources: a proprietary AI system and 39-40 human participants. Of those participants, an overwhelming majority were identified as female, and there's no mention of racial, cultural, or religious diversity.

Furthermore, the median age of participants was 23, and there's no way to determine if the team considered financial backgrounds, mental health, or any other concerns.

All we can tell is that a small group of people with a median age of 23, mostly female, paired off to participate in this study.

There was also compensation involved. Not only were they paid for their time, which is standard in academic research, but they were also paid for successfully lying to humans.

That's a red flag. Not because it's unethical to pay for study data (it isn't), but because it adds unnecessary parameters that, intentionally or ignorantly, muddy up the study.

The researchers explain this by claiming it was part of the experiment to determine whether incentivization changed people's ability to lie.

But, with such a tiny study sample, it seems ludicrous to cram the experiment full of needless parameters. Especially ones that are so half-baked they couldn't possibly be codified without solid background data.

How much impact does a financial incentive have on the efficacy of a truth-telling study? That sounds like something that needs its own large-scale study to determine.

The researchers paired off participants into liars and receivers. The liars put on headphones and listened for either the word "tree" or "line," and then were directed to either tell the truth or lie about which they'd heard. Their partner's job was to guess if they were being lied to.

The twist here is that the researchers created their own electrode arrays, attached them to the liars' faces, and then developed an AI to interpret the outputs. The researchers operated under an initial assumption that twitches in our facial muscles are "a window to the ground-truth."

This assumption is purely theoretical and, frankly, ridiculous. Stroke victims exist. Bell's palsy exists. Neurodiverse communication exists. Scars and loss of muscle strength exist. At least 1 billion people in the world currently live with some form of physical disability, and nearly as many live with a diagnosed mental disorder.

Yet, the researchers expect us to believe they've invented a one-size-fits-all algorithm for understanding humans. They're claiming they've stumbled across a human trait that inextricably links the mental act of deceit with a singular universal physical expression. And they accomplished this by measuring muscle twitches in the faces of just 40 humans?

Per the aforementioned press release:

"The researchers believe that their results can have dramatic implications in many spheres of our lives. In the future, the electrodes may become redundant, with video software trained to identify lies based on the actual movements of facial muscles."

So the big idea here is to generate data with one experimental paradigm (physical electrodes) in order to develop a methodology for a completely different experimental paradigm (computer vision)? And we're supposed to believe that this particular mashup of disparate inputs will result in a system that can determine a human's truthfulness to such a degree that its outputs are admissible in court?

That's a bold leap to make! The team may as well be claiming it's solved AGI with black-box deep learning. Computer vision already exists. Either the data from the electrodes is necessary or it isn't.

What's worse, they apparently intend to develop this into a snake oil solution for governments and big businesses.

The press release continues with a quote:

[Team member Dino Levy] predicts: "In the bank, in police interrogations, at the airport, or in online job interviews, high-resolution cameras trained to identify movements of facial muscles will be able to tell truthful statements from lies. Right now, our team's task is to complete the experimental stage, train our algorithms and do away with the electrodes. Once the technology has been perfected, we expect it to have numerous, highly diverse applications."

Exactly what percentage of those 40 study participants were Black, Latino, disabled, autistic, or queer? How can anyone, in good faith and conscience, make such grandiose scientific claims about AI based on such a tiny sprinkling of data?

If this AI solution were to actually become a product, people could potentially be falsely arrested, detained at airports, denied loans, and passed over for jobs because they don't look, sound, and act exactly like the people who participated in that study.

This AI system was only able to determine whether someone was lying with a 73% level of accuracy in an experiment where the lies were only one word long, meant nothing to the person saying them, and had no real effect on the person hearing them.

There's no real-world scenario analogous to this experiment. And that 73% accuracy is as meaningless as a Tarot card spread or a Magic 8-Ball's output.

Simply put: A 73% accuracy rate over less than 200 iterations of a study involving a maximum of 20 data groups (the participants were paired off) is a conclusion that indicates your experiment is a failure.
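The small-sample point can be made concrete with a confidence interval: with only ~20 independent participant pairs, a 73% point estimate carries a wide uncertainty band. The numbers below are illustrative, using a textbook normal-approximation interval rather than any analysis from the paper itself.

```python
from math import sqrt

# Normal-approximation (Wald) 95% confidence interval for a proportion.
# Used here only to illustrate how wide the uncertainty is at small n;
# this is a textbook formula, not an analysis from the paper itself.

def wald_ci(p_hat: float, n: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% CI for a proportion estimated from n observations."""
    half = z * sqrt(p_hat * (1 - p_hat) / n)
    return (p_hat - half, p_hat + half)

# A 73% point estimate from ~20 independent participant pairs:
lo, hi = wald_ci(0.73, 20)   # roughly (0.54, 0.92)
# The same estimate from 2,000 observations would be far tighter:
lo_big, hi_big = wald_ci(0.73, 2000)
```

An interval stretching from near chance to near-polygraph territory is another way of saying the headline number tells us very little.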

The world needs more research like this, don't get me wrong. It's important to test the boundaries of technology. But the claims made by the researchers are entirely outlandish and clearly aimed at an eventual product launch.

Sadly, there's about a 100% chance that this gets developed and ends up in use by US police officers.

Just like predictive policing, Gaydar, hiring AI, and all the other snake oil AI solutions out there, this is absolutely harmful.

But, by all means, don't take my word for it: read the entire paper and the researchers' own conclusions here.

AI CoEs And Innovation Centres That Opened In India In 2021 – Analytics India Magazine

Posted: at 10:17 pm

Centres of Excellence (CoEs) are places where companies can build a strong competitive edge in AI. CoEs make adopting AI practices easier in an organisation and help form a structure for scaling and maintaining these processes.

Even during the pandemic, India remained a favourable hub for establishing CoEs and innovation centres for AI.

Analytics India Magazine collates the AI CoEs and innovation hubs that were launched in India in 2021.

The Gem & Jewellery Skills Council of India (GJSCI) in August launched a Centre of Excellence (CoE) for Jewellery AI & Data-science Excellence (JADE) along with IIT-Bombay and Persistent Systems. The tech company and the institute will also help in the integration of AI, ML and data science techniques into gem and jewellery. The CoE aims to address the challenges of the supply chain of India's gems and jewellery industry. The technology integration will help minimise rejections, improve the current poor hit ratio of designs, deliver high market returns and increase efficiencies.

The Indian Institute of Technology, Kanpur, announced in June that it would form a Centre of Excellence in Artificial Intelligence & Innovation-Driven Entrepreneurship (AIIDE) at its Noida outreach campus. The UP state cabinet cleared the proposal under the UP Start-up Policy 2020. The centre was formed in association with FICCI (Federation of Indian Chambers of Commerce and Industry), which provides industry tie-ups and business connections. The CoE focuses on sectors including cybersecurity, AI, IoT, and AR, and provides assistance to 50 startups each year.

The minister for IT/BT, Science and Technology, and Higher Education of Karnataka announced in October the opening of a centre of excellence (CoE) for AI and Data Engineering in Hubballi. This CoE was to be a part of the Karnataka Digital Economy Mission under the Beyond Bengaluru programme. The CoE aims to help graduating startups grow further.

Kotak-IISc AI-ML Centre

The Indian Institute of Science (IISc) in September set up a state-of-the-art Artificial Intelligence & Machine Learning (AI-ML) Centre in association with Kotak Mahindra Bank Limited (KMBL) at the IISc Bangalore campus. The centre, called the Kotak-IISc AI-ML Centre, offers bachelor's, master's, and short-term courses in AI, ML and deep learning, reinforcement learning, and fintech. The centre also aims to promote innovation in AI and ML to develop a talent pool that can meet the industry's emerging requirements.

In July, the Arun Jaitley National Institute of Financial Management (AJNIFM), Haryana, in collaboration with Microsoft, formed a strategic partnership to build an AI and emerging technologies Centre of Excellence. The CoE seeks to explore cloud, AI and emerging technologies for shaping the future of public finance management in India. As part of the skilling effort, public sector officials will receive training on emerging technologies in finance management, which will help them address potential risks like money laundering, the role of responsible tech in finance and the use of machine learning models for decision making.

In November, KLA Corporation, a US-based semiconductor manufacturing company, launched two facilities in Chennai as part of its expansion into India. The AI-Advanced Computing Lab (AI-ACL), operated in collaboration with the Indian Institute of Technology (IIT) Madras, was established as a centre of excellence for AI-focused research and development.

The researchers and engineers at AI-ACL worked in collaboration with the AI experts at the AI Modeling Center of Excellence in Michigan, forming a global team to advance research in AI, image processing, software, and physics modelling.

In June, Massive Analytic, a UK-based precognition AI company, announced a £1 million investment in India to expand its employee base and build a CoE for AI and ML. This scaling up is led by Pankaj Arora, Lead Big Data Analytics Engineer. Under his guidance, the Massive Analytic India team runs with a focus on health and safety, helping the company become a leader in medical diagnostics.

In October, Jishnu Dev Burman, Tripura Deputy Chief Minister, inaugurated the State's first Innovation Hub at the Sukanta Academy. This project was an initiative of the National Council of Science Museums. The hub was expected to open a few years ago but was delayed due to the pandemic. The hub has been a platform for students to engage in creative and innovative activities.

In February, Jai Ram Thakur, Chief Minister of Himachal Pradesh, inaugurated the Technology Innovation Hub at the Indian Institute of Technology (IIT) Mandi. The hub was developed with an investment of ₹110 crore. An MoU was also signed between the district administration and IIT Mandi for the development and deployment of the Landslide Monitoring System.

In September, Motorola Solutions set up its first research and innovation centre at Changa-based Charotar University of Science and Technology (CHARUSAT), Anand, Gujarat. The centre aims to offer students practical opportunities and internships.

Corsight AI, an Israel-based company, signed an MoU with Assam Electronics Development Corporation Limited (AMTRON) in August to develop an Artificial Intelligence Centre of Excellence at a tech city situated near Guwahati. At the CoE, Corsight AI, with assistance from AMTRON, will establish a strong Face Recognition Technology Development and Services portfolio.

Otter.ai is limiting its free plan to 30-minute transcriptions starting December 1st – The Verge

Posted: at 10:17 pm

Otter.ai is adding new limitations to its free Otter Basic subscription plan, which will limit free users to 30-minute transcriptions starting December 1st, down from the current 40-minute limit.

The company says the changes are due to increased costs and the need to maintain high standards of service. Otter.ai is still offering the same 600 minutes of transcription per month to free users, however.

The change isn't hugely problematic in the long run, given that Otter.ai still isn't changing the total amount of transcription that free users can do, just the amount that can be transcribed in a single recording. That means attentive users can simply start a new file once the 30-minute limit is reached, although you'll generally have to pay attention to when you're creeping up on the time limit (and lose the benefit of having everything in a single document).
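The arithmetic behind that workaround is simple enough to sketch, using the limits as reported in this article: 30 minutes per transcription and 600 minutes per month on the free plan.

```python
from math import ceil

# Splitting a long recording to fit the free plan's caps, as described
# above: 30 minutes per transcription and 600 minutes per month.
PER_FILE_MIN = 30
MONTHLY_MIN = 600

def segments_needed(recording_min: int) -> int:
    """How many separate transcriptions a recording of this length requires."""
    return ceil(recording_min / PER_FILE_MIN)

# A 90-minute meeting needs three files, and the monthly quota still
# allows 600 / 30 = 20 full-length segments.
files_for_meeting = segments_needed(90)           # 3
segments_per_month = MONTHLY_MIN // PER_FILE_MIN  # 20
```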

Otter.ai customers who need the capability for longer recordings can pay for Otter Pro, which costs $99.99 per year or $12.99 per month. That price gets you transcriptions up to four hours long, up to 6,000 minutes of transcription each month, and additional features including custom vocabulary lists and the ability to import prerecorded files.
