Artificial Intelligence is the Future of the Banking Industry - Are You Prepared for It? – International Banker

By Pritham Shetty, Consulting Director, Propel Technology Group Inc

Our world is moving at a fast pace. Though banks originally built their foundations to be run solely by humans, the time has come for artificial intelligence in the banking industry. In 2020, the global AI banking market was valued at $3.88 billion, and it is projected to reach $64.03 billion by the end of the decade, with a compound annual growth rate of 32.6%. However, when it comes to implementing even the best strategies, the application of artificial intelligence in the banking industry is susceptible to weak core tech and poor data backbones.

By my count, there were 20,000 new banking regulatory requirements created in 2015 alone. Chances are your business won't find a one-size-fits-all solution for dealing with this. The next-best option is to be nimble. You need to be able to break down the business process into small chunks. By doing so, you can come up with digital strategies that work with new and existing regulations.

AI can take you a long way in this process, but you must know how to harness its power. Take originating home loans, for instance. This can be an important, sometimes tedious, process for the loan seeker and bank. With an AI solution, loan origination can happen quicker and be more beneficial to both parties.

As the world of banking moves toward AI, it is integral to note that the crucial working element for AI is data. The trick to using that data is to understand how to leverage it best for your business value. Data with no direction won't lead to progress, nor will it lead to the proper deployment of your AI. That is one of the top reasons it is so challenging to implement AI in banks: there has to be a plan.

Even if you come up with a poor strategy, those mistakes can be course-corrected over time. It takes some time and effort, but it is doable. If you home in on how customer information can be used, you can utilize AI for banking services in a way that is scalable and actionable. Once you understand how to use the data you collect, you can develop technical solutions that work with each other, identify specific needs, and build data pipelines that will lead you down the road to AI.

How is artificial intelligence changing the banking sector?

Due to the increasingly digital world, customers have more access to their banking information than ever. Of course, this can lead to other problems. Because there is so much access to data, there are also prime opportunities for fraudulent activities, and this is one example of how AI is changing the banking sector. With AI, you can train systems to learn, understand, and recognize when these activities happen. In fact, there was a 5% decrease in record exposure from 2020 to 2021.
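The article does not name a specific technique, but one common way to train a system to recognize unusual activity is anomaly detection. Below is a minimal, illustrative sketch using scikit-learn's isolation forest on synthetic transaction data; the features and numbers are assumptions for the example, not a production fraud model.

```python
# Illustrative sketch: flagging unusual transactions with an isolation forest.
# The data is synthetic and the feature choice is an assumption for the demo.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Features per transaction: [amount in dollars, hour of day].
# "Normal" activity clusters around modest daytime purchases.
normal = np.column_stack([rng.normal(60, 20, 500), rng.normal(14, 3, 500)])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Score new transactions: predict() returns 1 for normal, -1 for anomalous.
new = np.array([[55.0, 13.0],     # routine purchase
                [4900.0, 3.0]])   # large transfer at 3 a.m.
print(model.predict(new))         # expected: [ 1 -1 ]
```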

AI also safeguards against data theft or abuse. Not only can AI recognize breaches from outside sources, but it can also recognize internal threats. Once an AI system is trained, it can identify these problems and even offer solutions to them. For instance, a customer support call center can have traffic directed by AI to handle an influx of calls during high-volume periods.

Another great example of this is the development of conversational AI platforms. Data from ubiquitous social media and other online platforms can be used to tailor customer experiences directly led by AI. By using the data gathered from all sources, AI can greatly improve the customer experience overall.

For example, a loan might take anywhere from seven to 45 days to be granted. But with AI, the process can be expedited not only for the customer, but also for the bank. By using AI in a situation such as this, your bank can assess the risk it is taking on by servicing loans. It can also make the process faster by performing underwriting, document scanning, and other manual processes previously associated with data collection. On top of all that, AI can gather and analyze data about your customers' behaviors throughout their banking lives.

In the past, so much of this work was done solely by people. Although automation has certainly helped speed up and simplify tasks, it is used for tedium and doesn't have the complexity of AI. AI saves time and money by freeing up your employees to do other processes, and it provides valuable insights to your customers. And customers can budget better and have a clearer idea of where their money is going.

Even the most traditional banks will want to adopt AI to save time and money and allow employees more opportunities to have positive one-on-one relationships with customers. Look no further than fintech companies such as Credijusto, Nubank, and Monzo that have digitized traditional banking services through the power of cutting-edge tech.

Are you ready to put AI to work for your business?

Today, it's not a question of how AI is impacting financial services. Now, it's about how to implement it. That all starts with you. You must ask the right questions: What are your goals for implementing AI? Do you want to improve your internal processes? Simply provide a better customer service experience? If so, how should you implement AI for your banking services? Start with these strategies:

By making realistic short-term goals, you set yourself up for future success. These are the solutions that will be the building blocks for the type of AI everyone will aspire to use.

You want to ensure that you know how you currently use data and how you plan on using it in the future. Again, this sets your organization up for success in the long run. If you don't have the right practices now, you certainly won't have them going forward.

As you implement AI into your banking practices, you should know how exactly you generate data. Then, you must understand how you interpret it. What is the best use for it? After that, you can make decisions that will be scalable, useful, and seamless.

Technology has not only made the world around us move faster but also made it better in so many ways. Traditional institutions such as banks might be slow to adopt, but we've already seen how artificial intelligence is changing the banking sector. By taking the proper steps, you could be moving right along with it into the future.


Some Idiot Asked The Dall.E mini Artificial Intelligence Program What The Last Selfies Of Humans Will Look Like And Good News, We’re Definitely Headed…

Metro - A TikToker asked Dall.E mini, the popular Artificial Intelligence (AI) image generator, what the last selfies on earth would look like and the results are chilling.

In a series of videos titled "Asking an Ai to show the last selfie ever taken in the apocalypse," a TikTok account called @robotoverloards shared the disturbing images.

Each image shows a person taking a selfie set against an apocalyptic background featuring scenes of a nuclear wasteland and catastrophic weather, along with cities burning and even zombies.

Dall.E mini, now renamed to Craiyon, is an AI model that can draw images from any text prompt.

The image generator uses artificial intelligence to make photos based on the text you put in.

The image generator is connected to an artificial intelligence that has, for some time, been scraping the web for images to learn what things are. Often it will draw this from the captions attached to the pictures.

What's up everybody? I'm back with my weekly "old man screaming at the clouds" rant about how artificial intelligence is going to wipe our species clean off the planet and it's blatantly telling us this and we continue to ignore it.

Look at this shit.

Does this look like a good time to anybody that's not "metal as fuck"?

No. Absolutely not.

What's worse than the disfiguration in all these beauties' selfies is the devastation in the landscapes behind them.

That shit looks like straight-up nuclear winter to my virgin eyes.

Call it Skynet, Boston Dynamics, Dall.E mini, whatever the fuck you want. Bottom line is it's robot scum, and our man Stephen Hawking told us years ago, and Elon Musk is telling us now, that A.I. is going to be the end-all be-all of Homo sapiens. That's us. And that's a fucking wrap.

p.s. - the only thing that could make a nuclear/zombie apocalypse worse is this song playing on repeat in your head


Elon Musk and Mark Zuckerberg Are Arguing About AI — But They’re Both Missing the Point – Entrepreneur


In Silicon Valley this week, a debate about the potential dangers (or lack thereof) when it comes to artificial intelligence has flared up between two tech billionaires.

Facebook CEO Mark Zuckerberg thinks that AI is going to make our lives better in the future, while SpaceX CEO Elon Musk believes that AI is "a fundamental risk to the existence of human civilization."

Who's right?

Related: Elon Musk Says Mark Zuckerberg's Understanding of AI Is 'Limited' After the Facebook CEO Called His Warnings 'Irresponsible'

They're both right, but they're also both missing the point. The dangerous aspect of AI will always come from people and their use of it, not from the technology itself. Similar to advances in nuclear fusion, almost any kind of technological development can be weaponized and used to cause damage in the wrong hands. The regulation of machine intelligence advancements will play a central role in whether Musk's doomsday prediction becomes a reality.

It would be wrong to say that Musk is hesitant to embrace the technology, since all of his companies are direct beneficiaries of the advances in machine learning. Take Tesla, for example, where self-driving capability is one of the biggest value adds for its cars. Musk himself even believes that one day it will be safer to populate roads with AI drivers rather than human ones, though publicly he hopes that society will not ban human drivers in the future in an effort to save us from human error.

What Musk is really pushing for here by being wary of AI technology is a more advanced hypothetical framework that we as a society should use to have more awareness regarding the threats that AI brings. Artificial General Intelligence (AGI), the kind that will make decisions on its own without any interference or guidance from humans, is still very far away from how things work today. The AGI that we see in the movies, where robots take over the planet and destroy humanity, is very different from the narrow AI that we use and iterate on within the industry now. In Zuckerberg's view, the doomsday conversation that Musk has sparked is a very exaggerated way of projecting what the future of our technology advancements would look like.

Related: The Future of Productivity: AI and Machine Learning

While there is not much discussion in our government about apocalypse scenarios, there is definitely a conversation happening about preventing the potentially harmful impacts on society from artificial intelligence. The White House recently released a couple of reports on the future of artificial intelligence and on the economic effects it causes. The focus of these reports is on the future of work, job markets, and research on the increasing inequality that machine intelligence may bring.

There is also an attempt to tackle a very important issue of explainability when it comes to understanding the actions machine intelligence takes and the decisions it presents to us. For example, DARPA (Defense Advanced Research Projects Agency), an agency within the U.S. Department of Defense, is funneling billions of dollars into projects that would pilot vehicles and aircraft, identify targets and even eliminate them on autopilot. If you thought the use of drone warfare was controversial, AI warfare will be even more so. That's why it is even more important here, maybe more than in any other field, to be mindful of the results AI presents.

Explainable AI (XAI), the initiative funded by DARPA, aims to create a suite of machine learning techniques that produce more explainable results for human operators while still maintaining a high level of learning performance. The other goal of XAI is to enable human users to understand, appropriately trust and effectively manage the emerging generation of artificially intelligent partners.

Related: Would You Fly on an AI-Backed Plane Without a Pilot?

The XAI initiative can also help the government tackle the problem of ethics with more transparency. Sometimes developers of software have conscious or unconscious biases that eventually are built into an algorithm -- the way Nikon cameras became internet famous for detecting someone blinking when pointed at the face of an Asian person, or HP computers were proclaimed racist for not detecting black faces on the camera. Even developers with the best intentions can inadvertently produce systems with biased results, which is why, as the White House report states, AI needs good data; if the data is incomplete or biased, AI can exacerbate problems of bias.

Even with the positive use cases, data bias can cause a lot of serious harm to society. Take China's recent initiative to use machine intelligence to predict and prevent crime. Of course, it makes sense to deploy complex algorithms that can spot a terrorist and prevent crime, but a lot of bad scenarios can happen if there is an existing bias in the training data for those algorithms.

It is important to note that most of these risks already exist in our lives in some form or another, like when patients are misdiagnosed with cancer and not treated accordingly by doctors, or when police officers make intuitive decisions under chaotic conditions. The scale and lack of explainability of machine intelligence will magnify our exposure to these risks and raise a lot of uncomfortable ethical questions like: who is responsible for a wrong prescription by an automated diagnosing AI? A doctor? A developer? The training data provider? This is why complex regulation will be needed to help navigate these issues and provide a framework for resolving the uncomfortable scenarios that AI will inevitably bring into society.

Artur Kiulian, M.S.A.I., is a partner at Colab, a Los Angeles-based venture studio that helps startups build technology products using the benefits of machine learning. An expert in artificial intelligence, Kiulian is the author of Robot is...


Here’s your dose of AI-generated uncanny valley for today – The Verge

As we get better at making, faking, and manipulating human faces with machine learning, one thing is abundantly clear: things are going to get ~freaky~ fast.

Case in point: this online demo hosted (and, we presume, made) by web developer AlteredQualia. It combines two different research projects, both of which use neural networks. The first is DeepWarp, which alters where subjects in photographs are looking, and the second is a work in progress by Mike Tyka dubbed Portraits of Imaginary People. This does exactly what it says on the tin: feeding a generative neural network with a bunch of faces and getting it to create similar samples.

Combine it with a tool for making eyes follow your cursor, and you have a healthy slice of the uncanny valley, the phenomenon of human perception where something looks human but not quite human enough. Here are some more examples from Tyka's project:

As we've written in the past, this sort of image is only going to become more common as machine learning and AI proliferate. Neural networks are easy enough for lots of people to play with, and are improving all the time. In this case, that's going to mean more and more near-photorealistic and photorealistic fake humans. If the artificial intelligence boom we're currently experiencing has to have a face, this is it.


What the White House’s ‘AI Bill of Rights’ blueprint could mean for HR tech – HR Dive

Over the last decade, the use of artificial intelligence in areas like hiring, recruiting and workplace surveillance has shifted from a topic of speculation to a tangible reality for many workplaces. Now, those technologies have the attention of the highest office in the land.

On Oct. 4, the White House's Office of Science and Technology Policy published a "Blueprint for an AI Bill of Rights," a 73-page document outlining guidance on addressing bias and discrimination in automated technologies so that protections are embedded from the beginning, marginalized communities have a voice in the development process, and designers work hard to ensure the benefits of technology reach all people.

The blueprint focuses on five areas of protections for U.S. citizens in relation to AI: system safety and effectiveness; algorithmic discrimination; data privacy; notice and explanation when an automated system is used; and access to human alternatives when appropriate. It also follows the publication in May of two cautionary documents by the U.S. Equal Employment Opportunity Commission and the U.S. Department of Justice specifically addressing the use of algorithmic decision-making tools in hiring and other employment actions.

Employment is listed in the blueprint as one of several sensitive domains deserving of enhanced data and privacy protections. Individuals handling sensitive employment information should ensure it is only used for functions strictly necessary for that domain, while consent for all non-necessary functions should be optional.

Additionally, the blueprint states that continuous surveillance and monitoring systems should not be used in physical or digital workplaces, regardless of a person's employment status. Surveillance is particularly sensitive in the union context; the blueprint notes that federal law requires employers, and any consultants they may retain, to report the costs of surveilling employees in the context of a labor dispute, providing a transparency mechanism to help protect worker organizing.

The prevalence of employment-focused AI and automation may depend on the size and type of organization studied, though research suggests a sizable portion of employers have adopted the tech.

For example, a February survey by the Society for Human Resource Management found that nearly one-quarter of employers used such tools, including 42% of employers with more than 5,000 employees. Of all respondents utilizing AI or automation, 79% said they were using this technology for recruitment and hiring, the most common such application cited, SHRM said.

Similarly, a 2020 Mercer study found that 79% of employers were either already using, or planned to start using that year, algorithms to identify top candidates based on publicly available information. But AI has applications extending beyond recruiting and hiring. Mercer found that most respondents said they were also using the tech to handle employee self-service processes, conduct performance management and onboard workers, among other needs.

Employers should note that the blueprint is not legally binding, does not constitute official U.S. government policy and is not necessarily indicative of future policy, said Niloy Ray, shareholder at management-side firm Littler Mendelson. Though the principles contained in the document may be appropriate for AI and automation systems to follow, the blueprint is not prescriptive, he added.

"It helps add to the scholarship and thought leadership in the area, certainly," Ray said. "But it does not rise to the level of some law or regulation."

Employers may benefit from a single federal standard for AI technologies, Ray said, particularly given that this is an active legislative area for a handful of jurisdictions. A New York City law restricting the use of AI in hiring will take effect next year. Meanwhile, a similar law has been proposed in Washington, D.C., and California's Fair Employment and Housing Council has proposed regulations on the use of automated decision systems.

Then there is the international regulatory landscape, which can pose even more challenges, Ray said. Because of the complexity involved, Ray added that employers might want to see more discussion around a unified federal standard, and the Biden administration's blueprint may be a way of jump-starting that discussion.

"Let's not have to jump through 55 sets of hoops," Ray said of the potential for a federal standard. "Let's have one set of hoops to jump through."

The blueprint's inclusion of standards around data privacy and other areas may be important for employers to consider, as AI and automation platforms used for hiring often take into account publicly available data that job candidates do not realize is being used for screening purposes, said Julia Stoyanovich, co-founder and director at New York University's Center for Responsible AI.

Stoyanovich is co-author on an August paper in which a group of NYU researchers detailed their analysis of two personality tests used by two automated hiring vendors, Humantic AI and Crystal. The analysis found that the platforms exhibited substantial instability on key facets of measurement and concluded that they cannot be considered valid personality assessment instruments.

Even before AI is introduced into the equation, the idea that a personality profile of a candidate could be a predictor of job performance is a controversial one, Stoyanovich said. Laws like New York City's could help to provide more transparency on how automated hiring platforms work, she added, and could provide HR teams a better idea of whether tools truly serve their intended purposes.

"The fact that we are starting to regulate this space is really good news for employers," Stoyanovich said. "We know that there are tools that are proliferating that don't work, and it doesn't benefit anyone except for the companies that are making money selling these tools."


A Humanoid Robot Gave a Lecture in a West Point Philosophy Course

Professor Robot

A teacher with a robotic voice can make paying attention in class seem like an impossible task. But students at West Point seemingly had no problem staying focused while learning from an actual robot.

On Tuesday, an AI-powered robot named Bina48 co-taught two sessions of an intro to ethics philosophy course at the prominent military school. And while it might not have a career ahead of it as a college professor, the robot could find itself one day helping mold the minds of younger or less-educated students.

Bina 2.0

To prepare Bina48 to co-teach the West Point students, the bot’s developers fed it information on war theory and political philosophy, as well as the course lesson plan. When it was the robot’s turn to teach, Bina48 delivered a lecture based on this background information before taking questions from students, who seemed to appreciate their time learning from the bot.

“Before the class, they thought it might be too gimmicky or be entertainment,” William Barry, the course’s professor, told Axios. “They were blown away because she was able to answer questions and reply with nuance. The interesting part was that [the cadets] were taking notes.”

AI Education

Bina48 may have shared a few points worth jotting down, but it wasn’t able to teach at the students’ typical pace. In the future, the bot might be a better fit for classes with younger or less-educated students.

Indeed, the world is facing a shortage of teachers, and others have suggested letting AIs fill in in places where flesh-and-bone educators are scarce. Ultimately, Bina48’s work with the West Point cadets could foreshadow a future in which AIs teach students across the globe about everything from ethics to energy.

READ MORE: This Robot Co-Taught a Course at West Point [Axios]

More on Bina48: Six Life-Like Robots That Prove the Future of Human Evolution Is Synthetic


Artificial Intelligence: How realistic is the claim that AI will change our lives? – Bangkok Post


Artificial Intelligence (AI) stakes a claim on productivity, corporate dominance, and economic prosperity with Shakespearean drama. AI will change the way you work and spend your leisure time, and it puts a claim on your identity.

First, an AI primer.

Let's define intelligence, before we get onto the artificial kind. Intelligence is the ability to learn. Our senses absorb data about the world around us. We can take a few data points and make conceptual leaps. We see light, feel heat, and infer the notion of "summer."

Our expressive abilities provide feedback, i.e., our data outputs. Intelligence is built on data. When children play, they engage in endless feedback loops through which they learn.

Computers, too, are deemed intelligent if they can compute, conceptualise, see and speak. A particularly fruitful area of AI is getting machines to enjoy the same sensory experiences that we have. Machines can do this, but they require vast amounts of data. They do it by brute force, not cleverness. For example, they determine that an image shows a cat by breaking pixel data into little steps, repeating until done.

Key point: What we do and what machines do is not so different, but AI is more about data and repetition than it is about reasoning. Machines figure things out mathematically, not visually.

AI is a suite of technologies (machines and programs) that have predictive power, and some degree of autonomous learning.

AI consists of three building blocks: data, machine learning and prediction.

An algorithm is a set of rules to be followed when solving a problem. The speed and volume of data that can be fed into algorithms are more important than the "smartness" of the algorithms.

Let's examine these three parts of the AI process:

The raw ingredient of intelligence is data. Data is learning potential. AI is mostly about creating value through data. Data has become a core source of business value when insights can be extracted from it. The more you have, the more you can do. Companies with a Big Data mind-set don't mind filtering through lots of low-value data. The power is in the aggregation of data.

Building quality datasets for input is critical too, so human effort must first be spent obtaining, preparing and cleaning data. The computer does the calculations and provides the answers, or output.

Conceptually, Machine Learning (ML) is the ability to learn a task without being explicitly programmed to do so. ML encompasses algorithms and techniques that are used in classification, regression, clustering or anomaly detection.

ML relies on feedback loops. The data is used to make a model, which is then tested on how well it fits the data. The model is revised to make it fit the data better, and the process is repeated until the model cannot be improved any more. Algorithms can be trained with past data to find patterns and make predictions.
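As a concrete illustration of that loop, here is a minimal sketch (in Python, with made-up data) that fits a straight line to noisy observations by repeatedly measuring the error and revising the model; real ML systems scale this same revise-and-repeat idea to far larger models and datasets.

```python
# Minimal sketch of the ML feedback loop: model, measure fit, revise, repeat.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)                 # input data
y = 3.0 * x + 2.0 + rng.normal(0, 1, 100)   # past observations with noise

w, b = 0.0, 0.0                             # model parameters, initially naive
lr = 0.01                                   # how aggressively to revise

for step in range(2000):
    error = (w * x + b) - y                 # how badly the model fits the data
    w -= lr * 2 * np.mean(error * x)        # revise the model to fit better
    b -= lr * 2 * np.mean(error)

print(f"learned w={w:.2f}, b={b:.2f}")      # close to the true 3 and 2
print(f"prediction for x=12: {w * 12 + b:.1f}")
```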

Key point: AI expands the set of tools that we have to gain a better grasp of finding trends or structure in data, and make predictions. Machines can scale way beyond human capacity when data is plentiful.

Prediction is the core purpose of ML. For example, banks want to predict fraudulent transactions. Telecoms want to predict churn. Retailers want to predict customer preferences. AI-enabled businesses make their data assets a strategic differentiator.

Prediction is not just about the future; it's about filling in knowledge gaps and reducing uncertainty. Prediction lets us generalise, an essential form of intelligence. Prediction and intelligence are joined at the hip.

Let's examine the wider changes unfolding.

AI increases our productivity. The question is how we distribute the resources. If AI-enhanced production only requires a few people, what does that mean for income distribution? All the uncertainties are on how the productivity benefits will be distributed, not how large they will be.

Caution:

ML is already pervasive in the internet. Will the democratisation of access brought on by the internet continue to favour global monopolies? Unprecedented economic power rests in a few companies (you can guess which ones) with global reach. Can the power of channelling our collective intelligence continue to be held by these companies that are positioned to influence our private interests with their economic interests?

Nobody knows if AI will produce more wealth or economic precariousness. Absent various regulatory measures, it is inevitable that it will increase inequality and create new social gaps.

Let's examine the impact on everyone.

As with all technology advancements, there will be changes in employment: the number of people employed, the nature of jobs and the satisfaction we will derive from them. However, with AI all classes of labour are under threat, including management. Professions involving analysis and decision-making will become the province of machines.

New positions will be created, but nobody really knows if new jobs will sufficiently replace former ones.

We will shift more to creative or empathetic pursuits. To the extent that incomes fall short, should we be rewarded for contributing in our small ways to the collective intelligence? Universal basic income is one option, though it remains theoretical.

Our consumption of data (mobile phones, web-clicks, sensors) provides a digital trail that is fed into corporate and governmental computers. For governments, AI opens new doors to perform surveillance, predictive policing, and social shaming. For corporates, it's not clear whether surveillance capitalism, the commercialisation of your personal data, will be personalised to you, or for you. Will it direct you where they want you to go, rather than where you want to go?

How will your data be a measure of you?

The interesting angle emerging is whether we will be hackable. That's when the AI knows more about you than you know about yourself. At that point you become completely influenceable, because you can be made to think and to react as directed by governments and corporates.

We do need artificial forms of intelligence because our prediction abilities are limited, especially when handling big data and multiple variables. But for all its stunning accomplishments, AI remains very specific. Learning machines are circumscribed to very narrow areas of learning. The DeepMind system that wins systematically at Go can't eat soup with a spoon or predict the next financial crisis.

Filtering and personalisation engines have the potential to both accommodate and exploit our interests. The degree of change will be propelled, and restrained, by new regulatory priorities. The law always lags behind technology, so expect the slings and arrows of our outrageous fortune.

Author: Greg Beatty, J.D., Business Development Consultant. For further information please contact gregfieldbeatty@gmail.com

Series Editor: Christopher F. Bruton, Executive Director, Dataconsult Ltd, chris@dataconsult.co.th. Dataconsult's Thailand Regional Forum provides seminars and extensive documentation to update business on future trends in Thailand and in the Mekong Region.


The North America artificial intelligence in healthcare diagnosis market is projected to reach from US$ 1,716.42 million in 2019 to US$ 32,009.61…

New York, Sept. 30, 2020 (GLOBE NEWSWIRE) -- Reportlinker.com announces the release of the report "North America Artificial Intelligence in Healthcare Diagnosis Market Forecast to 2027 - COVID-19 Impact and Regional Analysis by Diagnostic Tool ; Application ; End User ; Service ; and Country" - https://www.reportlinker.com/p05974389/?utm_source=GNW

The healthcare industry has always been a leader in innovation. The constant mutating of diseases and viruses makes it difficult to stay ahead of the curve.

However, with the help of artificial intelligence and machine learning algorithms, it continues to advance, creating new treatments and helping people live longer and healthier. A study published by The Lancet Digital Health compared the performance of deep learning, a form of artificial intelligence (AI), in detecting diseases from medical imaging versus that of healthcare professionals, using a sample of studies carried out between 2012 and 2019.

The study found that, in the past few years, AI has become more precise in identifying diagnoses in these images and has become a more feasible source of diagnostic information. With advancements in AI, deep learning may become even more efficient at identifying diagnoses in the coming years.

Moreover, it can help doctors with diagnoses and notify them when patients are weakening so that medical intervention can occur sooner, before the patient needs hospitalization. It can save costs for both the hospitals and patients. Additionally, the precision of machine learning can detect diseases such as cancer quickly, thus saving lives.

In 2019, the medical imaging tool segment accounted for a larger share of the North America artificial intelligence in healthcare diagnosis market. Its growth is attributed to the increasing adoption of AI technology for the diagnosis of chronic conditions, which is likely to drive the growth of the diagnostic tool segment. In 2019, the radiology segment held a considerable share of the North America artificial intelligence in healthcare diagnosis market by application. This segment is also predicted to dominate the market by 2027 owing to rising demand for AI-based applications in radiology. A few major primary and secondary sources for the artificial intelligence in healthcare diagnosis market included the US Food and Drug Administration and the World Health Organization.

Read the full report: https://www.reportlinker.com/p05974389/?utm_source=GNW

About Reportlinker

ReportLinker is an award-winning market research solution. Reportlinker finds and organizes the latest industry data so you get all the market research you need - instantly, in one place.



China aims to become global AI leader by 2030 – ZDNet

China's top administrative body has laid out a three-step approach to make artificial intelligence (AI) the key driving force of the country's economic growth for the next decade.

According to the plan initiated by the State Council and released last week, China will first keep pace with other leading countries in terms of AI technology and applications by 2020, aiming for a core AI industry worth 150 billion yuan ($22 billion) and AI-related fields worth 1 trillion yuan, according to a Tencent news report.

After the conclusion of the second phase by 2025, when legal grounds for the industry are established, the government plans to be the global leader in AI theory, technology, and applications and the major AI innovation centre globally by 2030, at which time the core AI industry will be valued at 1 trillion yuan and AI-related industries at 10 trillion yuan, according to the blueprint.

The government has also pushed for vigorous development of AI-related emerging industries in China, including intelligent hardware and software, intelligent robots, and Internet of Things-based devices.

Research on brain science, brain computing, quantum information and quantum computing, intelligent manufacturing, robotics, and big data will be strongly supported, while intelligent upgrades in manufacturing, agriculture, logistics, and home appliances will also be sped up.

A PwC report released last month estimated that global GDP will be 14 percent higher in 2030 due to the wide deployment of AI.

"China will begin to pull ahead of the US's AI productivity gains in 10 years," the report said, and estimated that China will have the most economic gains from AI, which may boost China's GDP by 26 percent by 2030.

Chinese companies Alibaba, Baidu, and Lenovo are stepping up AI investment in a range of industries such as ecommerce, IoT, and autonomous driving.

Baidu announced the acquisition of Seattle-based startup Kitt.ai and a partnership with US chipmaker Nvidia this month, while Alibaba recently revealed an AI-powered smart speaker.

Lenovo also said AI will be a key feature of its products going forward, which include a digital assistant, connected health devices, and augmented and virtual reality platforms.


COVID-19 Is Accelerating AI in Health Care. Are Federal Agencies Ready? – Nextgov

Artificial intelligence is rapidly expanding its foothold in health care, including at many federal health agencies such as the Veterans Affairs and Health and Human Services departments and the Defense Health Agency.

The ongoing coronavirus pandemic is demonstrating the power of AI-enabled capabilities for private and public sector health care organizations responsible for responding to today's health care challenges.

For example, the pandemic has catalyzed numerous AI-enabled development efforts for vaccines. After scientists decoded the genetic sequence of SARS-CoV-2, the virus causing COVID-19, and publicly posted the results on January 10, the race was on. Based on that data, firms began using AI-enabled methods to rapidly develop potential vaccines, some of which are already proceeding to clinical trials. By comparison, traditional non-AI drug development processes take many months, if not years, to proceed to human clinical trials.

Likewise, federal health agencies are also incorporating AI-enabled responses. The Centers for Disease Control and Prevention, for example, is hosting an AI-driven bot on its website to help screen people for coronavirus infections as a way to reduce the numbers of patients flocking to increasingly overwhelmed urgent care facilities.

Additionally, the Food and Drug Administration recently approved use of an AI-driven diagnostic for COVID-19 developed by behold.ai. The tool analyzes lung x-rays and provides radiologists with a tentative diagnosis as soon as the image is captured, reducing time and expense.

But there is an important caveat to this activity: we don't yet know whether these and other non-AI-related efforts will produce the long-term impact we are all hoping for.

In a milestone report published in December titled "Artificial Intelligence in Health Care: The Hope, the Hype, the Promise, the Peril," the National Academy of Medicine noted the many ways in which AI is revolutionizing health care. However, it also warned that careful planning and implementation are required to avoid the risk of a backlash, or an "AI winter" as some refer to it, that can occur when hyped AI solutions fail to deliver expected performance or benefits.

Federal and defense health care agencies will be expected to mobilize more quickly and ensure that AI solutions produce results. So how can federal health care agencies improve their odds of success? How can they implement and scale AI projects and, more importantly, try to realize AI's vast potential to improve health care while lowering costs?

Based on successful public and private sector AI implementation, federal and defense agencies can achieve greater success in their AI deployments if they:

Have a strategic plan for AI. Select the purpose and focus of initial efforts with care and clearly define business challenges warranting AI adoption. That means, in part, identifying use cases that provide a significant return on investment. Moreover, the plan should also provide a means to address the agency's readiness to leverage AI, as there are several dimensions to readiness. For example, technology readiness refers to having the needed tools, technical infrastructure and data management strategies and capabilities in place. Workforce readiness refers to having needed talent recruitment and development, training, incentives, communications and change management structures and programs in place to successfully launch and sustain AI.

Understand your requirements and then phase solutions, from simpler to more complex. Amid the variety of AI solution types, such as task automation, pattern recognition or contextual reasoning, organizations will need to investigate the requirements of different user groups or use cases, technical and analytic complexities, and the ability to scale and sustain solutions across the enterprise. For example, robotic process automation is a relatively easy AI solution to implement to complete time-consuming, repetitive tasks such as data entry, data capture and data transferal from one source to another. RPA can then serve as an easy gateway for the organization to tackle more advanced automation leveraging AI.

Use an agile approach and develop iteratively. Such an approach can strengthen efforts to engage users and build trust, and there has to be a level of risk tolerance for this approach to work. Agile methodology can be helpful for facilitating collaboration and adoption. Central to this approach are adaptive planning, evolutionary development, early delivery, continuous improvement, and rapid and flexible response to change, which inherently allows for a "fail fast" element to quickly identify success or failure.

AI is a human endeavor. People must bring needed leadership, accountability, motivation and expertise to the project both before and after it becomes operational and, later, as it scales. Having humans in the loop ensures better integration into work processes, builds trust, and creates accountabilities for the performance of AI solutions.

There are many factors that contribute to a project's success, but these considerations can be key as agencies strive to harness AI more fully in support of their missions.

Philip Dietz, MBA, is a principal at Booz Allen Hamilton leading data science and analytics.


AI allows paralyzed person to ‘handwrite’ with his mind – Science Magazine

By Kelly Servick, Oct. 23, 2019, 12:05 PM

CHICAGO, ILLINOIS - By harnessing the power of imagination, researchers have nearly doubled the speed at which completely paralyzed patients may be able to communicate with the outside world.

People who are locked in, fully paralyzed by stroke or neurological disease, have trouble trying to communicate even a single sentence. Electrodes implanted in a part of the brain involved in motion have allowed some paralyzed patients to move a cursor and select onscreen letters with their thoughts. Users have typed up to 39 characters per minute, but that's still about three times slower than natural handwriting.

In the new experiments, a volunteer paralyzed from the neck down instead imagined moving his arm to write each letter of the alphabet. That brain activity helped train a computer model known as a neural network to interpret the commands, tracing the intended trajectory of his imagined pen tip to create letters.

Eventually, the computer could read out the volunteer's imagined sentences with roughly 95% accuracy at a speed of about 66 characters per minute, the team reported here this week at the annual meeting of the Society for Neuroscience.

The researchers expect the speed to increase with more practice. As they refine the technology, they will also use their neural recordings to better understand how the brain plans and orchestrates fine motor movements.


Google is using AI to design chips that will accelerate AI – MIT Technology Review

A new reinforcement-learning algorithm has learned to optimize the placement of components on a computer chip to make it more efficient and less power-hungry.

3D Tetris: Chip placement, also known as chip floor planning, is a complex three-dimensional design problem. It requires the careful configuration of hundreds, sometimes thousands, of components across multiple layers in a constrained area. Traditionally, engineers will manually design configurations that minimize the amount of wire used between components as a proxy for efficiency. They then use electronic design automation software to simulate and verify their performance, which can take up to 30 hours for a single floor plan.

Time lag: Because of the time investment put into each chip design, chips are traditionally supposed to last between two and five years. But as machine-learning algorithms have rapidly advanced, the need for new chip architectures has also accelerated. In recent years, several algorithms for optimizing chip floor planning have sought to speed up the design process, but they've been limited in their ability to optimize across multiple goals, including the chip's power draw, computational performance, and area.

Intelligent design: In response to these challenges, Google researchers Anna Goldie and Azalia Mirhoseini took a new approach: reinforcement learning. Reinforcement-learning algorithms use positive and negative feedback to learn complicated tasks. So the researchers designed what's known as a reward function to punish and reward the algorithm according to the performance of its designs. The algorithm then produced tens to hundreds of thousands of new designs, each within a fraction of a second, and evaluated them using the reward function. Over time, it converged on a final strategy for placing chip components in an optimal way.
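As a toy illustration of the role the reward function plays (this is a deliberately simplified sketch, not Google's actual method, and the component names are invented), the snippet below scores random candidate placements by total wire length and keeps the best one; a real reinforcement-learning agent would instead learn a policy that improves its placements from this feedback.

```python
# Toy sketch: a reward function scoring candidate chip placements.
import random

COMPONENTS = ["cpu", "cache", "io", "dsp"]                  # invented parts
NETS = [("cpu", "cache"), ("cpu", "io"), ("cache", "dsp")]  # wired pairs
GRID = 8                                                    # 8x8 placement grid

def wire_length(placement):
    # Total Manhattan distance over all wired pairs, a proxy for efficiency.
    return sum(abs(placement[a][0] - placement[b][0]) +
               abs(placement[a][1] - placement[b][1]) for a, b in NETS)

def reward(placement):
    return -wire_length(placement)      # shorter wiring earns higher reward

def random_placement():
    cells = random.sample([(x, y) for x in range(GRID) for y in range(GRID)],
                          len(COMPONENTS))
    return dict(zip(COMPONENTS, cells))

# Generate many candidates and keep the one the reward function likes best.
best = max((random_placement() for _ in range(10_000)), key=reward)
print(best, "wire length:", wire_length(best))
```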

Validation: After checking the designs with the electronic design automation software, the researchers found that many of the algorithm's floor plans performed better than those designed by human engineers. It also taught its human counterparts some new tricks, the researchers said.

Production line: Throughout the field's history, progress in AI has been tightly interlinked with progress in chip design. The hope is this algorithm will speed up the chip design process and lead to a new generation of improved architectures, in turn accelerating AI advancement.



GNS Healthcare Presents Novel Use of AI to Identify Drivers of Response to Immune Checkpoint Inhibitor Therapy – PRNewswire

CAMBRIDGE, Mass., July 21, 2020 /PRNewswire/ -- GNS Healthcare (GNS), a leading AI and simulation company, presents results that validate the use of AI to accurately classify tumors based on their immunogenicity and predict response to immune checkpoint inhibitor (ICI) therapy using real-world data. The study showcases the power of causal AI to capture biomarkers and mechanisms, in addition to PD(L)1 and tumor mutation burden (TMB), that are consistent with known immunology. These markers, including CXCL13 upregulation and STK11 mutation, are in line with the targets that are currently being explored for stratification of responders vs. non-responders to ICI therapy, cohort selection, enrichment of future immuno-oncology trials, or ICI efficacy improvement through combination therapy.

The study applied AI to tumor data from The Cancer Genome Atlas (TCGA) to identify the drivers of immune response. The data from nearly 700 NSCLC and over 400 HNSCC patients were fed into REFS, GNS's causal AI and simulation platform, which reverse-engineered in silico patients that accurately classified tumors based on their response. Macrophage activation and polarization, which is driven in part by metabolic reprogramming, was identified as the primary driver of tumor immunogenicity, which can allow for a more targeted approach to patient care and clinical trial design.

"Over the past decade we have seen nearly a dozen immuno-oncology treatments approved but treatment protocols are still based only on a few biomarkers. The presentation of our work is not only a validation of how AI can extract critical insights from real-world data, but also a milestone in our mission to make precision oncology a reality," said Colin Hill, GNS Healthcare CEO and Co-Founder.

The findings from these in silico patients can be used by biopharma companies to select optimal patient populations for clinical trials based on likelihood of response and to discover novel biomarkers that make tumors more susceptible to immune therapy, irrespective of response to PD(L)1 therapy. The findings are also beginning to unlock the value of investments in real-world and clinical data to inform future trial design, enable discovery of novel drug targets, and better position drugs across global markets.

Listen to a deep-dive webinar discussing the results here or view the poster presented at ASCO-SITC and reach out to the GNS Healthcare team to learn more.

About GNS Healthcare: GNS Healthcare is an AI-driven precision medicine company developing in silico patients from real-world and clinical data. In silico patients reveal the complex system of interactions underlying disease progression and drug response, enabling the simulation of drug response at the individual patient level. This in turn enables the ability to precisely match therapeutics to patients and rapidly discover key insights across drug discovery, clinical development, commercialization, and payer markets. GNS REFS causal AI and simulation technology integrates and transforms a wide variety of patient data types into in silico patients across oncology, auto-immune diseases, neurology, and cardio-metabolic diseases. GNS partners with the world's leading biopharmaceutical companies and health plans and has validated its science and technology in over 50 peer-reviewed papers and abstracts. https://gnshealthcare.com

Media Contact: Simona Gilman, Marketing, [emailprotected]

SOURCE GNS Healthcare

http://gnshealthcare.com


The future of EHRs: Google AI head on tossing out the keyboard + innovating data search – Becker’s Hospital Review

While clinicians have often expressed frustration over the way they have to interact with EHRs, Google is working on technology for streamlining functions like data searches and predictive text search, according to Google artificial intelligence head Jeff Dean, PhD.

During a recent episode of a podcast by Eric Topol, MD, and Abraham Verghese, MD, "Medicine and the Machine," Dr. Dean discussed his predictions for how EHRs will evolve in healthcare and some of Google's current projects.

Here are six insights from Dr. Dean, cited in an Aug. 20 Medscape report.

1. Google has worked with other organizations on using deidentified data to refine EHR searches in a way similar to how the tech company trains natural language models, Dr. Dean said. With the natural language models, the researchers aim to use the prefix of a piece of text to predict the next word or sequence of words that is going to occur.

2. An example of natural language models would be a model applied to email messages, so when a person is typing out a message, the AI suggests how they might complete the sentence to save typing, Dr. Dean said.

3. Google is working with the same approach to give clinicians suggestions about what might occur next in the EHR for a particular patient, Dr. Dean said, adding, "If you think about the medical record as a whole sequence of events, and if you have de-identified medical records, you can take a prefix of a medical record and try to predict either the individual events or maybe some high-level attributes about subsequent events, like, 'Will this patient develop diabetes within the next 12 months?'" (A minimal sketch of this kind of next-event prediction appears after these insights.)

4. While the idea of creating an AI model that uses every past medical decision to help inform all future medical decisions is complicated, Dr. Dean said the feat is a "good north star" for potential health IT innovations.

5. Dr. Dean said his group has done some work using an audio recording of a patient-physician conversation to develop a medical note that a clinician can then just edit a little instead of having to type up the entire note.

6. Creating summarized notes from conversations might also be a good assistant tool that not only helps reduce clinician burden but could lead to higher-quality data in the EHR, according to Dr. Dean.

"We all know that often clinicians copy and paste the most recent note and don't really edit it appropriately. That's partly because it's very cumbersome and unwieldy to interact with some of these systems, and speech and voice are a more natural way of creating notes," Dr. Dean said.


AI’s Factions Get Feisty. But Really, They’re All on the Same Team – WIRED


Artificial intelligence is not one thing, but many, spanning several schools of thought. In his book The Master Algorithm, Pedro Domingos calls them the tribes of AI.

As the University of Washington computer scientist explains, each tribe fashions what would seem to be very different technology. Evolutionists, for example, believe they can build AI by recreating natural selection in the digital realm. Symbolists spend their time coding specific knowledge into machines, one rule at a time.

Right now, the connectionists get all the press. They nurtured the rise of deep neural networks, the pattern recognition systems reinventing the likes of Google, Facebook, and Microsoft. But whatever the press says, the other tribes will play their own role in the rise of AI.

Take Ben Vigoda, the CEO and founder of Gamalon. He's a Bayesian, part of the tribe that believes in creating AI through the scientific method. Rather than building neural networks that analyze data and reach conclusions on their own, he and his team use probabilistic programming, a technique in which they start with their own hypotheses and then use data to refine them. His startup, backed by Darpa, emerged from stealth mode this morning.

Gamalon's tech can translate from one language to another, and the company is developing tools that businesses can use to extract meaning from raw streams of text. Vigoda claims his particular breed of probabilistic programming can produce AI that learns more quickly than neural networks, using much smaller amounts of data. "You can be very careful about what you teach it," he says, "and can edit what you've taught it."

As others point out, an approach along these lines is essential to the rise of machines capable of truly thinking like humans. Neural networks require enormous amounts of carefully labelled data, and this isn't always available. Vigoda even goes so far as to say that his techniques will replace neural networks completely, in all applications. "That is very, very clear," he says.

But just as deep learning isn't the only way to artificial intelligence, neither is probabilistic programming. Or Gaussian processes. Or evolutionary computation. Or reinforcement learning.

Sometimes, the AI tribes badmouth each other. Sometimes, they play up their technology at the expense of the others. But the reality is that AI will rise from many technologies working together. Despite the competition, everyone is working toward the same goal.

Probabilistic programming lets researchers build machine learning algorithms more like coders build computer programs. But the real power of the technique lies in its ability to deal with uncertainty. This can allow AI to learn from less data, but it can also help researchers understand why an AI reaches particular decisions, and more easily tweak the AI if they don't agree with those decisions. True AI will need all that, whether it powers a chatbot trying to carry on a human-like conversation or an autonomous car trying to avoid an accident.
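To make the uncertainty point concrete, here is a minimal sketch of the Bayesian style of reasoning that underlies probabilistic programming (a generic textbook update, not Gamalon's system): start with a hypothesis, refine it with a small amount of data, and keep the uncertainty instead of a single point answer.

```python
# Generic Bayesian update: belief about a coin's heads-probability.
# Prior Beta(1, 1) means "no idea yet"; updating with coin flips is counting.
alpha, beta = 1.0, 1.0

heads, tails = 7, 3          # a small dataset: 7 heads in 10 flips
alpha += heads
beta += tails

mean = alpha / (alpha + beta)                                       # ~0.667
sd = ((alpha * beta) / ((alpha + beta) ** 2 * (alpha + beta + 1))) ** 0.5

print(f"estimated P(heads) = {mean:.3f} +/- {sd:.3f}")
# With only 10 observations the model already gives an answer, and the
# spread says how much to trust it; more data narrows the posterior.
```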

But neural networks have proven their worth with, among other things, image and speech recognition, and they're not necessarily in competition with techniques like probabilistic programming. In fact, Google researchers are building systems that combine the two. Their strengths complement one another. "Deep neural networks and probabilistic models are closely related," says David Blei, a Columbia University computer scientist and an advisor to Gamalon who has worked with Google research on these types of mixed models. "There's a lot of probabilistic modeling happening inside neural networks."

Inevitably, the best AI will combine several technologies. Take AlphaGo, the breakthrough system built by Google's DeepMind lab. It combined neural networks with reinforcement learning and other techniques. Blei, for one, doesn't see a world of "tribes." "It doesn't exist for me," he says. He sees a world in which everyone is reaching for the same master algorithm.

Here is the original post:

AI's Factions Get Feisty. But Really, They're All on the Same Team - WIRED

Even an AI machine couldn’t ace China’s super tough college entrance exam – Mashable


An AI machine that sat the math paper for China's college entrance exam has failed to prove it's better than its human competition. AI-Maths, a machine made of 11 servers, three years in the making, joined almost 10 million high schoolers last week, in ...

Original post:

Even an AI machine couldn't ace China's super tough college entrance exam - Mashable

DigestAI's 19-year-old founder wants to make education addictive – TechCrunch

When Quddus Pativada was 14, he wished that he had an app that could summarize his textbooks for him. Just five years later, Pativada has been there and done that: earlier this year, he launched the AI-based app Kado, which turns photos, documents or PDFs into flash cards. Now, as the 19-year-old founder takes the stage for Startup Battlefield, he's looking to take his company, DigestAI, beyond flashcards to create an AI dialogue assistant that we can all carry around on our phones.

"If we make learning truly easy and accessible, it's something you could do as soon as you open your phone," Pativada told TechCrunch. "We want to put a teacher in every single person's phone for every topic in the world."

Quddus Pativada, founder at DigestAI, pitches as part of TechCrunch Startup Battlefield at TechCrunch Disrupt in San Francisco on October 18, 2022. Image Credits: Haje Kamps / TechCrunch

The company's AI is trained on data from the internet, but the algorithm is fine-tuned to recall specific use cases to make sure that its responses are accurate and not too thrown off by online chaos.

"We train it on everything, but the actual use cases are called within silos. We're calling it federated learning, where it's sort of siloed in and language models are operating on a use case basis," Pativada said. "This is good because it avoids malicious use."

Pativada said that this kind of product would be different from smart assistants like Apple's Siri or Amazon's Alexa because the information it provides would be more personalized and detailed. So, for certain use cases, like asking for sources to use in an essay, the AI will pull from academic journals to make sure that the information is accurate and appropriate for a classroom.
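The architecture Pativada sketches, one general model with per-use-case silos that constrain its sources, might look something like the following. Every name here is invented for illustration, since DigestAI hasn't published its implementation:

    # Hypothetical per-use-case routing: each "silo" pins a query to an
    # approved corpus, as in the essay-sources example above.
    SILOS = {
        "essay_sources": {"corpus": "academic_journals", "cite": True},
        "flashcards":    {"corpus": "course_notes",      "cite": False},
        "legal_terms":   {"corpus": "law_glossary",      "cite": True},
    }

    def route(use_case: str, query: str) -> str:
        silo = SILOS.get(use_case)
        if silo is None:
            raise ValueError(f"no silo registered for {use_case!r}")
        # A real system would now call the language model restricted to
        # the silo's retrieval corpus; here we just describe the call.
        suffix = " (with citations)" if silo["cite"] else ""
        return f"answer {query!r} from {silo['corpus']}{suffix}"

    print(route("essay_sources", "causes of the 2008 financial crisis"))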

Despite running an educational AI startup, Pativada isn't currently in school. He took a gap year before going to college to work on his startup, but as DigestAI took off, he decided to keep building instead of going back to school. Growing up, he taught himself to code because he loved video games and wanted to make his own; by age 10, he had published a Flappy Bird clone on the App Store. Naturally, his technological ambitions matured a bit over time. Before founding DigestAI, Pativada built a COVID-19 contact tracing platform. At first, he just made the app as a tool for his classmates, but his work ended up being honored by the United Arab Emirates government.

Image Credits: DigestAI

So far, the outlook is good for the Dubai-based company. Pativada, who says he feels skittish about the CEO label and prefers to think of himself as just a founder, has raised $600,000 so far from angel investors like Mark Cuban and Shaan Patel, who struck a deal on Shark Tank for his SAT prep company, Prep Expert.

How does a 19-year-old in Dubai capture the attention of one of the most well-known startup investors? A cold email. Mark, we apologize if this admission makes your inbox even more nightmarish.

"I was watching a GQ video of Mark Cuban's daily routine," Pativada said. "He said he reads his emails every morning at 9 AM, and I looked at the time in Dallas, and it was about 9 AM. So I was like, maybe I should just shoot him an email and see what happens." While he was at it, he reached out to Patel, whose educational startup has done over $20 million in sales. Patel hopped on a video call with the teenage founder, and by the next week, he and Cuban both offered to invest in DigestAI.

"We raised our entire round through cold emails and Zoom," Pativada told TechCrunch. "It sort of helped because no one can see how young I look in person."

Before he decided to eschew college altogether, Pativada applied to Stanford and interviewed with an alumnus, as is standard in the admissions process. He didn't end up getting into the competitive Palo Alto university, but his interviewer, who works at Stanford, did end up investing in his company. Go figure.

"Our goal is to work with universities like Stanford," Pativada said. The company is also targeting enterprise clients. Currently, DigestAI works with some U.S.-based universities, Bocconi University in Italy, a European law firm and other clients. At the law firm, DigestAI is testing a tool that allows associates to text a WhatsApp number to quickly brush up on legal terms.

In the long term, DigestAI wants to create an SMS system where people can text the AI asking for help learning something; Pativada wants information to be so accessible that it's addictive.

"That is what AI is: it's almost the best version of a human being," Pativada said.

View original post here:

DigestAI's 19-year-old founder wants to make education addictive - TechCrunch

This little USB stick is designed to make AI plug-and-play – The Verge

Step by step, artificial intelligence is moving down from the cloud and into the device in your hand. The latest sign? This unassuming little thumb drive from chipmaker Movidius, which packs one of the company's machine vision processors (the same chip used by DJI for its autonomous drones) into a plug-and-play USB stick. If manufacturers want to beef up the AI capabilities of their new product, all they need to do is plug in one of these.

The Movidius Neural Compute Stick was actually announced last April as a prototype device called the Fathom. But then Intel came calling, and bought Movidius in September that year for an undisclosed amount. In all the work and confusion that comes with any sale like that, the Fathom got put on hold. Now, though, it's back.

From a technical point of view, the new Compute Stick is the same as the old one. At its heart is a Myriad 2 Vision Processing Unit, or VPU: a low-power processor (it consumes just a single watt) that uses twelve parallel cores to run vision algorithms like object detection and facial recognition. Movidius says it delivers more than 100 gigaflops of performance, and can natively run neural networks built using the Caffe framework. (Caffe is a widely used neural network library, but it's not clear if the Compute Stick will also work with Google's popular TensorFlow framework.) For more details, you can check out the full spec sheet for the Myriad 2 here.
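For a sense of what the developer workflow might look like, here is a hypothetical sketch loosely modeled on Movidius's NCSDK-style Python API. The module and method names are assumptions recalled from that SDK's documentation, not confirmed against what ships with the stick, so treat it as pseudocode:

    import numpy as np
    from mvnc import mvncapi as mvnc  # assumed NCSDK module name

    # Find an attached Compute Stick and open it.
    devices = mvnc.EnumerateDevices()
    device = mvnc.Device(devices[0])
    device.OpenDevice()

    # Load a Caffe network compiled to the stick's binary "graph" format.
    with open("googlenet.graph", "rb") as f:
        graph = device.AllocateGraph(f.read())

    # The Myriad 2 computes in half precision, so inputs are float16.
    image = np.random.rand(224, 224, 3).astype(np.float16)  # stand-in input
    graph.LoadTensor(image, "user object")
    output, _ = graph.GetResult()
    print("top class:", int(np.argmax(output)))

    graph.DeallocateGraph()
    device.CloseDevice()

Chaining several sticks, as Movidius describes further down, would presumably mean enumerating more than one device and splitting work across them.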

The main changes in this new version are that it's made out of aluminum instead of plastic, and the price has been cut from a putative $99 for the original to $79. Movidius says Intel's involvement helped push this price down.

But who will use the Neural Compute Stick? Well, it'll come in handy for a few different groups. AI researchers will be able to use the stick as an accelerator, plugging it into their computers to get a little more local power when training and designing new neural nets. (Movidius notes that you can also chain multiple sticks together, boosting the performance linearly with each one you add.) Companies looking to put AI powers in a physical product will also benefit, with the USB-compatible stick giving them an easy and fast way to execute neural networks locally.

But of course, a device like this certainly has its limitations. For a company building, say, an AI-powered security camera, there will be more efficient ways to incorporate specialized vision processors in their product, especially if they're manufacturing at scale. And for a researcher training new neural nets, buying the latest graphics cards or renting processing power in the cloud will offer quicker results. It'll just be more expensive, too.

What a device like the Neural Compute Stick does well is fill a gap in the market. And in doing so, it makes artificial intelligence that little bit more accessible.

See original here:

This little USB stick is designed to make AI plug-and-play - The Verge

5 Ways IBM Predicts AI and Ad Tech Will Evolve in 2021 – Adweek

With tech giants set to crack down on cookies and third-party trackers in the coming months, the ad-tech industry is in for some major changes.

IBM Watson Advertising has bet that artificial intelligence and anonymized behavioral insights will play a central role in that post-cookie future. The company has rolled out a series of product releases this year that aimed to lessen marketers' reliance on personal data.

In a new report this week, Sheri Bachstein, global head of IBM Watson Advertising and The Weather Company, laid out some predictions for how those changes may take shape in the year to come, from a ramping up of discussions around consumer privacy to what a post-Covid-19 "new normal" might look like.

Bachstein expects pushes for more data privacy policies, like the European Union's General Data Protection Regulation and California's Prop 24, which modifies its existing Consumer Privacy Act, to intensify in the coming year. These efforts could create a patchwork of state-by-state regulations that might make it difficult for some companies to scale.

To avoid that situation, IBM is calling on the industry to join in advocating for federal legislation that would standardize rules across the board. "This effort should be collaborative and include viewpoints from a variety of industry partners, councils and big technology brands to ensure legislation works across the entire ecosystem," Bachstein said.

Partly as a result of legislative pushes, consumers will likely have more transparency into what data is being collected on them and how it's being used, Bachstein predicts. But consumers will also continue to expect personalized experiences, meaning that there will still be a market for targeting.

Marketers are operating under conditions that are unique to the current state of the pandemic, and Bachstein expects many of those changes to revert next year as the world eventually begins to reopen. While some virtual formats, like video conferencing platforms and augmented reality, will likely see lasting effects, other trends, like a growth in desktop performance over mobile, will return to the overarching trajectories of the years before the pandemic.

"There are going to be some user behaviors that may stick around," Bachstein told Adweek. "But as people start going back to work, some of the digital behaviors that we're seeing will likely return to normal."

Meanwhile, industries will likely recover from the economic turmoil at different paces. Industries such as travel, publishing and advertising, for instance, may be slower to bounce back from the devastation.

The Trade Desk recently struck a series of major partnerships in the ad-tech industry for its Unified ID 2.0 initiative, which seeks to use encryption to create a standardized replacement for third-party cookies. IBM believes that collaborative efforts like these are a step in the right direction, but ultimately won't make up for the capabilities that will be lost with the end of third-party tracking.

Bachstein maintains that the shift to reliance on AI-gleaned consumer insights will ultimately be as transformative for the ad-tech industry as the transition to programmatic was a decade ago. But the company stresses that adoption will take time, and that consumers and business clients still don't fully understand the ins and outs of what the technology can do.

"When programmatic came on the scene 10 years ago, it took a while for everyone to really adopt it. And AI is probably going to be similar in that some people will be early adopters of it," Bachstein said. "But it is going to take education. We've got to take AI and make it not a buzzword anymore, but put it into practice to get results."

Read the original here:

5 Ways IBM Predicts AI and Ad Tech Will Evolve in 2021 - Adweek

Quick-Thinking AI Camera Mimics the Human Brain – Scientific American

Researchers in Europe are developing a camera that will literally have a mind of its own, with brainlike algorithms that process images and light sensors that mimic the human retina. Its makers hope it will prove that artificial intelligence, which today requires large, sophisticated computers, can soon be packed into small consumer electronics. But as much as an AI camera would make a nifty smartphone feature, the technology's biggest impact may actually be speeding up the way self-driving cars and autonomous flying drones sense and react to their surroundings.

The conventional digital cameras used in self-driving and computer-assisted cars and drones, as well as in surveillance devices, capture a lot of extraneous information that eats up precious memory space and battery life. Much of that data is repetitive, because the scene the camera is watching does not change much from frame to frame. The new AI camera, called an ultralow-power event-based camera, or ULPEC, will have pixel sensors that come to life only when the camera is ready to record a new image or event. That memory- and power-saving feature will not slow performance: the camera will also have new electrical components that allow it to react to changing light or movement in a scene within microseconds (millionths of a second), compared with milliseconds (thousandths) in today's digital cameras, says Ryad Benosman, a professor at the University Pierre and Marie Curie who leads the Vision and Natural Computation group at the Paris-based Vision Institute. It records only when the light striking the pixel sensors crosses a preset threshold amount, says Benosman, whose team is developing the learning algorithms for an artificial neural network that serves as the camera's brain.

An artificial neural network is a group of interconnected computers configured to work like a system of flesh-and-blood neurons in the human brain. The interconnections among the computers enable the network to find patterns in data fed into the system, and to filter out extraneous information via a process called machine learning. "Such a network does away with not only acquiring but also processing irrelevant information, thus making the camera faster and requiring lower power for computation," Benosman says.
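The event-driven idea is easy to demonstrate in miniature. The toy Python below fires a per-pixel "event" only when the change in log brightness since that pixel's last event crosses a threshold, so a static scene produces no data at all. This is a simplification for illustration, not the ULPEC design, and the threshold value is invented:

    import numpy as np

    THRESHOLD = 0.2  # assumed contrast threshold, invented for the demo

    def step(frame, last_level):
        """Emit (row, col, polarity) events; update per-pixel reference levels."""
        log_frame = np.log1p(frame.astype(np.float64))
        diff = log_frame - last_level
        fired = np.abs(diff) >= THRESHOLD
        rows, cols = np.nonzero(fired)
        polarity = np.sign(diff[fired]).astype(int)  # +1 brighter, -1 darker
        # Pixels that fired reset their reference; silent pixels keep waiting.
        last_level = np.where(fired, log_frame, last_level)
        return list(zip(rows.tolist(), cols.tolist(), polarity.tolist())), last_level

    # A mostly static scene: one pixel brightens, so exactly one event fires.
    frame0 = np.full((4, 4), 100.0)
    frame1 = frame0.copy()
    frame1[2, 3] = 180.0

    levels = np.log1p(frame0)
    events, levels = step(frame1, levels)
    print(events)  # [(2, 3, 1)] -- the other 15 pixels produce nothing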

The AI camera's photo sensors, its "eyes," will consist of tiny pieces of semiconductors and circuitry on silicon, which turn changes in light into electrical signals sent to the neural network. Integrated circuits and a new type of electronic component called a memory resistor, or memristor, acting as the equivalent of synaptic connections, will process the information in those signals, says Sören Boyn, a researcher at the Zurich-based Swiss Federal Institute of Technology who worked with the CNRS-Thales joint research unit that is now working with Benosman's team. One of the biggest challenges to that approach is that memristor technology, first theorized in 1971 by University of California, Berkeley, professor emeritus Leon Chua and later mathematically modeled by Hewlett-Packard Labs researchers in 2008, is still largely in the development stage, which would explain why the ULPEC project is not expected to have a working device until 2020.

The AI camera's memristors will consist of a thin layer of a ferroelectric material, bismuth ferrite, sandwiched between two electrodes, says Vincent Garcia, a research scientist at French scientific research agency CNRS/Thales, which is developing the ULPEC memristor. Ferroelectric materials have positive and negative sides, but applying voltage reverses those charges. "Thus, the resistance of memristors can be tuned using voltage," Garcia explains. "Similar to our brains' learning ability that is dependent on the stimulation of synapses, which serve as connections between our neurons, this tunable resistance helps in making the network learn. The more the synapse is stimulated, the more the connection is reinforced and learning improved."
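A toy model makes the tuning idea concrete. The sketch below treats a memristor as a saturating conductance that each voltage pulse nudges up or down, illustrating the "more stimulation, stronger connection" behavior Garcia describes; it is not the bismuth-ferrite device physics, and all constants are invented:

    class MemristorSynapse:
        """Toy memristive synapse: conductance stores the learned weight."""

        def __init__(self, g=0.1, g_min=0.05, g_max=1.0, rate=0.1):
            self.g, self.g_min, self.g_max, self.rate = g, g_min, g_max, rate

        def pulse(self, voltage):
            # A positive pulse potentiates, a negative one depresses; both
            # saturate as the device approaches its physical limits.
            if voltage >= 0:
                self.g += self.rate * voltage * (self.g_max - self.g)
            else:
                self.g += self.rate * voltage * (self.g - self.g_min)
            self.g = min(self.g_max, max(self.g_min, self.g))

        def current(self, read_voltage=0.1):
            # Ohm's law: the stored weight shows up as output current.
            return self.g * read_voltage

    syn = MemristorSynapse()
    for _ in range(5):      # repeated stimulation reinforces the connection
        syn.pulse(+1.0)
    print(f"after potentiation: g = {syn.g:.3f}")
    syn.pulse(-1.0)         # a reverse-polarity pulse weakens it
    print(f"after depression:  g = {syn.g:.3f}")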

The combination of bio-inspired optical sensors and neural networks will make the camera an especially good fit for self-driving cars and autonomous drones, says Christoph Posch, chief technology officer of the Paris-based start-up Chronocam, which is designing the camera's optical sensors. In self-driving cars the onboard computer must react to changes very quickly while navigating through traffic or determining the movement of pedestrians, Posch explains. The ULPEC can detect and process these changes rapidly. German automotive equipment manufacturer Bosch, also involved in the project, will investigate how the camera might be used as part of its autonomous and computer-aided driving technology.

The researchers plan to place 20,000 memristors on the AI cameras microchip, says Sylvain Saighi, an associate professor of electronics at the University of Bordeaux and head of the $5.57-million ULPEC project.

Getting all of the components of a memristor neural network onto a single microchip would be a big step, says Yoeri van de Burgt, an assistant professor of microsystems at Eindhoven University of Technology in the Netherlands, whose research includes building artificial synapses. "Since it is performing the computation locally, it will be more secure and can be dedicated for specific tasks like cameras in drones and self-driving cars," adds van de Burgt, who was not involved in the ULPEC project.

Assuming the researchers can pull it off, such a chip would be useful well beyond smart cameras because it would be able to perform a variety of complicated computations itself, rather than off-loading that work to a supercomputer via the cloud. In this way, Posch says, the camera is an important step toward determining whether the underlying memristors and other technology will work, and how they might be integrated into future consumer devices. The camera, with its innovative sensors and memristor neural network, could demonstrate that AI can be built into a device in order to make it both smart and more energy efficient.

More here:

Quick-Thinking AI Camera Mimics the Human Brain - Scientific American