Facebook to ban deepfake videos created with artificial intelligence technology – TVNZ

Facebook says it is banning deepfake videos, the false but realistic clips created with artificial intelligence and sophisticated tools, as it steps up efforts to fight online manipulation. But the policy leaves plenty of loopholes.

Facebook logo. Source: Associated Press

The social network said yesterday that it's beefing up its policies for removing videos edited or synthesised in ways that aren't apparent to the average person, and which could dupe someone into thinking the video's subject said something he or she didn't actually say.

Created by artificial intelligence or machine learning, deepfakes combine or replace content to create images that can be almost impossible to tell are not authentic.

"While these videos are still rare on the internet, they present a significant challenge for our industry and society as their use increases," Facebook's vice president of global policy management, Monika Bickert, said in a blog post.

However, she said the new rules won't include parody or satire, or clips edited just to change the order of words. The exceptions underscore the balancing act Facebook and other social media services face in their struggle to stop the spread of online misinformation and "fake news," while also respecting free speech and fending off allegations of censorship.

The US tech company has been grappling with how to handle the rise of deepfakes after facing criticism last year for refusing to remove a doctored video of House Speaker Nancy Pelosi slurring her words, which was viewed more than 3 million times. Experts said the crudely edited clip was more of a cheap fake than a deepfake.

Then, a pair of artists posted fake footage of Facebook CEO Mark Zuckerberg showing him gloating over his one-man domination of the world. Facebook also left that clip online. The company said at the time that neither video violated its policies.

The problem of altered videos is taking on increasing urgency as experts and lawmakers try to figure out how to prevent deepfakes from being used to interfere with the U.S. presidential election in November.

The technology is called "deepfake" and it's already causing major problems overseas, with experts fearing it'll do the same here. Source: 1 NEWS

The new policy is "a strong starting point," but doesn't address broader problems, said Sam Gregory, program director at Witness, a nonprofit working on using video technology for human rights.

"The reality is there aren't that many political deepfakes at the moment," he said. "They're mainly nonconsensual sexual images."

The bigger problem is videos that are either shown without context or lightly edited, which some have dubbed "shallow fakes," Gregory said.

These include the Pelosi clip or one that made the rounds last week of Democratic presidential candidate Joe Biden that was selectively edited to make it appear he made racist remarks.

Gregory, whose group was among those that gave feedback to Facebook for the policy, said that while the new rules look strong on paper, there are questions around how effective the company will be at uncovering synthetic videos.

Facebook has built deepfake-detecting algorithms and can also look at an account's behavior to get an idea of whether its intention is to spread disinformation. That will give the company an edge over users or journalists in sniffing them out, Gregory said.

But those algorithms haven't been used widely for deepfakes in the wild. "So it is an open question how effective detection will be," he said. "This is an algorithmic kind of game of cat and mouse, where the forgeries will get better alongside the detection."

Facebook said any videos, deepfake or not, will also be removed if they violate existing standards for nudity, graphic violence or hate speech.

Those that aren't removed can still be reviewed by independent third-party fact-checkers and any deemed false will be flagged as such to people trying to share or view them, which Bickert said was a better approach than just taking them down.

"If we simply removed all manipulated videos flagged by fact-checkers as false, the videos would still be available elsewhere on the internet or social media ecosystem," Bickert said. "By leaving them up and labeling them as false, we're providing people with important information and context."

Twitter, which has been another hotbed for misinformation and altered videos, said it's in the process of creating a policy for synthetic and manipulated media, which would include deepfakes and other doctored videos. The company has asked for public feedback on the issue.

The responses it's considering include putting a notice next to tweets that include manipulated material. The tweets might also be removed if they're misleading and could cause serious harm to someone.

YouTube, meanwhile, has a policy against deceptive practices that the company says includes the "deceptive uses of manipulated media" that may pose serious risk of harm. For instance, the company removed the Pelosi video last year. Google, which owns YouTube, is also researching how to better detect deepfakes and other manipulated media.


LeaVoice, an Artificial Intelligence made to reduce your stress and anxiety, will showcase at CES2020 – PRUnderground

HoloAsh is developing an AI friend to reduce your stress. Yoshua Kishi, an entrepreneur diagnosed with ADHD, founded the startup to create a safe space for people who don't fit in a society organized for the average.

Recently, HoloAsh was accepted into 500 Startups in Kobe and will be launching its new product, LeaVoice, at the upcoming CES2020 event. The showcase will be at the CES2020 Las Vegas Convention Center between Jan 7 and Jan 10. For more information, please visit: https://bit.ly/leavoicecom

About LeaVoice:
LeaVoice is a voice chat with an AI friend for stressed and anxious people. If you're in a bad mood and need someone to talk to, your AI friend is always there. LeaVoice is judgment-free and available 24/7. "Sometimes, you feel like no one cares about you, that you don't fit in. Many social services provide basic communication but won't relieve you of these feelings. Chatbots are one-dimensional. But LeaVoice is listening to you 24/7, analyzing your voice to detect emotions that a text chat cannot convey," said Yoshua Kishi, CEO of HoloAsh.

About HoloAsh:
HoloAsh was founded in 2018 to create an AI friend for people with stress and anxiety. It was founded by a serial entrepreneur together with a former Amazon data scientist, an NLG specialist, and a former Stanford researcher. Unlike current chatbots, its AI friend detects emotion from the sound of someone's voice to provide the best possible response. The company has been accepted into 500 Startups in Kobe and Plug and Play Kyoto as well.

Additional Information:
Website: https://holoash.com/
Press Kit download: https://brand.sparkamplify.com/holoash
Facebook: https://www.facebook.com/holoash/
Twitter: https://twitter.com/LeaVoice_AI

Media contact: Yoshua Kishi, Marysia Romaszkan
Email: yoshua@holoash.com, marysia@holoash.com

Disclaimer: This product is not intended to diagnose, treat, or prevent any disease. The information on this website or in emails is designed for educational purposes only. It is not intended to be a substitute for informed medical advice or care. Please consult a doctor with any questions or concerns you may have regarding your medical health. The news site hosting this press release is not associated with LeaVoice or HoloAsh. It is merely publishing a press release announcement submitted by a company, without any stated or implied endorsement of the person, product or service.

About SparkAmplify Distribution

SparkAmplify was founded in 2016. It is a SaaS company based in California and Taipei, specializing in media outreach and influencer engagement. The team consists of a group of passionate data scientists, engineers, designers and marketers looking to reshape digital marketing via machine learning and influencer social network analysis. SparkAmplify was selected as one of the Top 50 startups among 6,000+ startups from 80 countries at the 2017 Startup Grind Global Conference and a Top 100 startup at the Echelon 2019 Asia Summit.


Why Neuro-Symbolic Artificial Intelligence Is The AI Of The Future – Digital Trends

Picture a tray. On the tray is an assortment of shapes: Some cubes, others spheres. The shapes are made from a variety of different materials and represent an assortment of sizes. In total there are, perhaps, eight objects. My question: Looking at the objects, are there an equal number of large things and metal spheres?

It's not a trick question. The fact that it sounds as if it is is proof positive of just how simple it actually is. It's the kind of question that a preschooler could most likely answer with ease. But it's next to impossible for today's state-of-the-art neural networks. This needs to change. And it needs to happen by reinventing artificial intelligence as we know it.

That's not my opinion; it's the opinion of David Cox, director of the MIT-IBM Watson A.I. Lab in Cambridge, MA. In a previous life, Cox was a professor at Harvard University, where his team used insights from neuroscience to help build better brain-inspired machine learning computer systems. In his current role at IBM, he oversees work on the company's Watson A.I. platform.

Watson, for those who don't know, was the A.I. which famously defeated two of the top game show players in history on the TV quiz show Jeopardy! Watson also happens to be a primarily machine-learning system, trained using masses of data as opposed to human-derived rules.

So when Cox says that the world needs to rethink A.I. as it heads into a new decade, it sounds kind of strange. After all, the 2010s was arguably the most successful decade in A.I. history: a period in which breakthroughs happened seemingly weekly, and with no frosty hint of an A.I. winter in sight.

This is exactly why he thinks A.I. needs to change, however. And his suggestion for that change, a currently obscure term called neuro-symbolic A.I., could well become one of those phrases we're intimately acquainted with by the time the 2020s come to an end.

Neuro-symbolic A.I. is not, strictly speaking, a totally new way of doing A.I. It's a combination of two existing approaches to building thinking machines; ones which were once pitted against each other as mortal enemies.

The symbolic part of the name refers to the first mainstream approach to creating artificial intelligence. From the 1950s through the 1980s, symbolic A.I. ruled supreme. To a symbolic A.I. researcher, intelligence is based on humans' ability to understand the world around them by forming internal symbolic representations. They then create rules for dealing with these concepts, and these rules can be formalized in a way that captures everyday knowledge.

If the brain is analogous to a computer, this means that every situation we encounter relies on us running an internal computer program which explains, step by step, how to carry out an operation, based entirely on logic. Provided that this is the case, symbolic A.I. researchers believe that those same rules about the organization of the world could be discovered and then codified, in the form of an algorithm, for a computer to carry out.

Symbolic A.I. resulted in some pretty impressive demonstrations. For example, in 1964 the computer scientist Bertram Raphael developed a system called SIR, standing for Semantic Information Retrieval. SIR was a computational reasoning system that was seemingly able to learn relationships between objects in a way that resembled real intelligence. If you were to tell it that, for instance, "John is a boy; a boy is a person; a person has two hands; a hand has five fingers," then SIR would answer the question "How many fingers does John have?" with the correct number, 10.
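The kind of chained inference SIR performed can be sketched in a few lines. This is a hypothetical reconstruction, not Raphael's original system: facts are stored as explicit symbolic relations, and the answer falls out of walking those relations.

```python
# Taxonomy facts ("X is a Y") and part-of facts with counts ("a Y has n Zs").
is_a = {"John": "boy", "boy": "person"}
has = {"person": ("hand", 2), "hand": ("finger", 5)}

def count_parts(entity, part):
    """Count how many of `part` an entity has by chaining is_a and has facts."""
    total = 1
    current = entity
    while current is not None:
        if current in has:
            child, n = has[current]
            total *= n              # e.g. 2 hands, then 5 fingers per hand
            if child == part:
                return total
            current = child
        else:
            current = is_a.get(current)  # climb the taxonomy: John -> boy -> person
    return 0

print(count_parts("John", "finger"))  # 10
```

The point of the sketch is that every step of the answer is an explicit, inspectable rule application, which is exactly what the article means by symbolic reasoning.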


Computer systems based on symbolic A.I. hit the height of their powers (and the start of their decline) in the 1980s. This was the decade of the so-called expert system, which attempted to use rule-based systems to solve real-world problems, such as helping organic chemists identify unknown organic molecules or assisting doctors in recommending the right dose of antibiotics for infections.

The underlying concept of these expert systems was solid. But they had problems. The systems were expensive, required constant updating, and, worst of all, could actually become less accurate the more rules were incorporated.

The neuro part of neuro-symbolic A.I. refers to deep learning neural networks. Neural nets are the brain-inspired type of computation which has driven many of the A.I. breakthroughs seen over the past decade. A.I. that can drive cars? Neural nets. A.I. which can translate text into dozens of different languages? Neural nets. A.I. which helps the smart speaker in your home to understand your voice? Neural nets are the technology to thank.

Neural networks work differently from symbolic A.I. because they're data-driven, rather than rule-based. To explain something to a symbolic A.I. system means explicitly providing it with every bit of information it needs to be able to make a correct identification. As an analogy, imagine sending a friend to pick up your mom from the bus station, but having to describe her by providing a set of rules that would let your friend pick her out from the crowd. To train a neural network to do the same job, you simply show it thousands of pictures of the person or object in question. Once it gets smart enough, not only will it be able to recognize that object; it can make up its own similar objects that have never actually existed in the real world.
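The contrast can be illustrated with a toy perceptron, the simplest data-driven learner. This is a sketch for illustration only, not how any production network is built: rather than being handed the rule, it infers a decision boundary from labeled examples.

```python
# Toy task: classify points as above (1) or below (0) the line y = x.
# The rule is never written down; the weights are adjusted from examples.
examples = [((0.0, 1.0), 1), ((1.0, 2.0), 1), ((1.0, 0.0), 0), ((2.0, 1.0), 0)]

w = [0.0, 0.0]
b = 0.0
for _ in range(20):                              # a few passes over the data
    for (x1, x2), label in examples:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = label - pred                       # 0 when the guess was right
        w[0] += 0.1 * err * x1                   # nudge weights toward the label
        w[1] += 0.1 * err * x2
        b += 0.1 * err

predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print(predict(0.5, 3.0))  # 1: a clearly "above" point
```

Scale the four examples up to millions of labeled images and the weight vector up to millions of parameters and you have, in caricature, the deep learning recipe the article describes.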

"For sure, deep learning has enabled amazing advances," David Cox told Digital Trends. "At the same time, there are concerning cracks in the wall that are starting to show."

One of these so-called cracks stems from exactly the thing that has made today's neural networks so powerful: data. Just like a human, a neural network learns based on examples. But while a human might only need to see one or two training examples of an object to remember it correctly, an A.I. will require many, many more. Accuracy depends on having large amounts of annotated data with which it can learn each new task.

That makes them less good at statistically rare "black swan" problems. A black swan event, popularized by Nassim Nicholas Taleb, is a corner case that is statistically rare. "Many of our deep learning solutions today, as amazing as they are, are kind of 80-20 solutions," Cox continued. "They'll get 80% of cases right, but if those corner cases matter, they'll tend to fall down. If you see an object that doesn't normally belong [in a certain place], or an object at an orientation that's slightly weird, even amazing systems will fall down."

Before he joined IBM, Cox co-founded a company, Perceptive Automata, that developed software for self-driving cars. The team had a Slack channel in which they posted funny images they had stumbled across during the course of data collection. One of them, taken at an intersection, showed a traffic light on fire. "It's one of those cases that you might never see in your lifetime," Cox said. "I don't know if Waymo and Tesla have images of traffic lights on fire in the datasets they use to train their neural networks, but I'm willing to bet if they have any, they'll only have a very few."

It's one thing for a corner case to be insignificant because it rarely happens and doesn't matter all that much when it does. Getting a bad restaurant recommendation might not be ideal, but it's probably not going to be enough to even ruin your day. So long as the previous 99 recommendations the system made are good, there's no real cause for frustration. A self-driving car failing to respond properly at an intersection because of a burning traffic light or a horse-drawn carriage could do a lot more than ruin your day. It might be unlikely to happen, but if it does we want to know that the system is designed to be able to cope with it.

"If you have the ability to reason and extrapolate beyond what we've seen before, we can deal with these scenarios," Cox explained. "We know that humans can do that. If I see a traffic light on fire, I can bring a lot of knowledge to bear. I know, for example, that the light is not going to tell me whether I should stop or go. I know I need to be careful because [drivers around me will be confused]. I know that drivers coming the other way may be behaving differently because their light might be working. I can reason a plan of action that will take me where I need to go. In those kinds of safety-critical, mission-critical settings, that's somewhere I don't think that deep learning is serving us perfectly well yet. That's why we need additional solutions."

The idea of neuro-symbolic A.I. is to bring together these approaches to combine both learning and logic. Neural networks will help make symbolic A.I. systems smarter by breaking the world into symbols, rather than relying on human programmers to do it for them. Meanwhile, symbolic A.I. algorithms will help incorporate common sense reasoning and domain knowledge into deep learning. The results could lead to significant advances in A.I. systems tackling complex tasks, relating to everything from self-driving cars to natural language processing. And all while requiring much less data for training.

"Neural networks and symbolic ideas are really wonderfully complementary to each other," Cox said. "Because neural networks give you the answers for getting from the messiness of the real world to a symbolic representation of the world, finding all the correlations within images. Once you've got that symbolic representation, you can do some pretty magical things in terms of reasoning."

For instance, in the shape example I started this article with, a neuro-symbolic system would use a neural network's pattern recognition capabilities to identify objects. Then it would rely on symbolic A.I. to apply logic and semantic reasoning to uncover new relationships. Such systems have already been proven to work effectively.
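A rough sketch of that division of labor, with the neural perception step stubbed out as a hand-written symbol table (the article describes no specific system, so every name here is illustrative):

```python
# Hypothetical output of the perception network: one symbolic record per
# object on the tray. In a real neuro-symbolic system a neural network
# would produce this table from pixels.
scene = [
    {"shape": "cube",   "material": "rubber", "size": "large"},
    {"shape": "sphere", "material": "metal",  "size": "small"},
    {"shape": "sphere", "material": "metal",  "size": "large"},
    {"shape": "cube",   "material": "metal",  "size": "small"},
]

# The symbolic half answers the tray question with ordinary logic.
large_things = [o for o in scene if o["size"] == "large"]
metal_spheres = [o for o in scene if o["material"] == "metal"
                 and o["shape"] == "sphere"]

print(len(large_things) == len(metal_spheres))  # True: 2 of each
```

Once the world has been reduced to symbols, the counting-and-comparison step that stumps an end-to-end neural network becomes trivial, and every step of the answer can be explained.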

It's not just corner cases where this would be useful, either. Increasingly, it is important that A.I. systems are explainable when required. A neural network can carry out certain tasks exceptionally well, but much of its inner reasoning is black boxed, rendered inscrutable to those who want to know how it made its decision. Again, this doesn't matter so much if it's a bot that recommends the wrong track on Spotify. But if you've been denied a bank loan, rejected from a job application, or someone has been injured in an incident involving an autonomous car, you'd better be able to explain why certain recommendations have been made. That's where neuro-symbolic A.I. could come in.

A few decades ago, the worlds of symbolic A.I. and neural networks were at odds with one another. The renowned figures who championed the approaches not only believed that their approach was right; they believed that this meant the other approach was wrong. They weren't necessarily incorrect to do so. Competing to solve the same problems, and with limited funding to go around, both schools of A.I. appeared fundamentally opposed to each other. Today, it seems like the opposite could turn out to be true.

"It's really fascinating to see the younger generation," Cox said. "[Many of the people on my team are] relatively junior people: fresh, excited, fairly recently out of their Ph.D.s. They just don't have any of that history. They just don't care [about the two approaches being pitted against each other], and not caring is really powerful because it opens you up and gets rid of those prejudices. They're happy to explore intersections... They just want to do something cool with A.I."

Should all go according to plan, all of us will benefit from the results.


The Top 10 Artificial Intelligence Trends Everyone Should Be Watching In 2020 – Forbes

Artificial Intelligence (AI) has undoubtedly been the technology story of the 2010s, and it doesn't look like the excitement is going to wear off as a new decade dawns.

The past decade will be remembered as the time when machines that can truly be thought of as intelligent (as in capable of thinking, and learning, like we do) started to become a reality outside of science fiction.

While no prediction engine has yet been built that can plot the course of AI over the coming decade, we can be fairly certain about what might happen over the next year. Spending on research, development, and deployment continues to rise, and debate over the wider social implications rages on. Meanwhile, the incentives only get bigger for those looking to roll out AI-driven innovation into new areas of industry, fields of science, and our day-to-day lives.


Here are my predictions for what we're likely to see continue or emerge in the first year of the 2020s.

1. AI will increasingly be monitoring and refining business processes

While the first robots in the workplace were mainly involved with automating manual tasks such as manufacturing and production lines, today's software-based robots will take on the repetitive but necessary work that we carry out on computers. Filling in forms, generating reports and diagrams, and producing documentation and instructions are all tasks that can be automated by machines that watch what we do and learn to do it for us in a quicker and more streamlined manner. This automation, known as robotic process automation (RPA), will free us from the drudgery of time-consuming but essential administrative work, leaving us to spend more time on complex, strategic, creative and interpersonal tasks.

2. More and more personalization will take place in real-time

This trend is driven by the success of internet giants like Amazon, Alibaba, and Google, and their ability to deliver personalized experiences and recommendations. AI allows providers of goods and services to quickly and accurately project a 360-degree view of customers in real-time as they interact through online portals and mobile apps, quickly learning how their predictions can fit our wants and needs with ever-increasing accuracy. Just as pizza delivery companies like Domino's will learn when we are most likely to want pizza, and make sure the "Order Now" button is in front of us at the right time, every other industry will roll out solutions aimed at offering personalized customer experiences at scale.

3. AI becomes increasingly useful as data becomes more accurate and available

The quality of information available is often a barrier to businesses and organizations wanting to move towards AI-driven automated decision-making. But as technology and methods of simulating real-world processes and mechanisms in the digital domain have improved over recent years, accurate data has become increasingly available. Simulations have advanced to the stage where car manufacturers and others working on the development of autonomous vehicles can gain thousands of hours of driving data without vehicles even leaving the lab, leading to huge reductions in cost as well as increases in the quality of data that can be gathered. Why risk the expense and danger of testing AI systems in the real world when computers are now powerful enough, and trained on accurate-enough data, to simulate it all in the digital world? 2020 will see an increase in the accuracy and availability of real-world simulations, which in turn will lead to more powerful and accurate AI.

4. More devices will run AI-powered technology

As the hardware and expertise needed to deploy AI become cheaper and more available, we will start to see it used in an increasing number of tools, gadgets, and devices. In 2019 we're already used to running apps that give us AI-powered predictions on our computers, phones, and watches. As the next decade approaches and the cost of hardware and software continues to fall, AI tools will increasingly be embedded into our vehicles, household appliances, and workplace tools. Augmented by technology such as virtual and augmented reality displays, and paradigms like the cloud and Internet of Things, the next year will see more and more devices of every shape and size starting to think and learn for themselves.

5. Human and AI cooperation increases

More and more of us will get used to the idea of working alongside AI-powered tools and bots in our day-to-day working lives. Increasingly, tools will be built that allow us to make the most of our human skills (those which AI can't quite manage yet) such as imaginative, design, strategy, and communication skills, while augmenting them with super-fast analytics abilities fed by vast datasets that are updated in real-time.

For many of us, this will mean learning new skills, or at least new ways to use our skills alongside these new robotic and software-based tools. IDC predicts that by 2025, 75% of organizations will be investing in employee retraining in order to fill skill gaps caused by the need to adopt AI. This trend will become increasingly apparent throughout 2020, to the point where if your employer isn't investing in AI tools and training, it might be worth considering how well placed they are to grow over the coming years.

6. AI increasingly at the edge

Much of the AI we're used to interacting with now in our day-to-day lives takes place in the cloud: when we search on Google or flick through recommendations on Netflix, the complex, data-driven algorithms run on high-powered processors inside remote data centers, with the devices in our hands or on our desktops simply acting as conduits for information to pass through.

Increasingly, however, as these algorithms become more efficient and capable of running on low-power devices, AI is taking place at the "edge," close to the point where data is gathered and used. This paradigm will continue to become more popular in 2020 and beyond, making AI-powered insights a reality outside of the times and places where super-fast fiber optic and mobile networks are available. Custom processors designed to carry out real-time analytics on-the-fly will increasingly become part of the technology we interact with day-to-day, and increasingly we will be able to do this even if we have patchy or non-existent internet connections.

7. AI increasingly used to create films, music, and games

Some things, even in 2020, are probably still best left to humans. Anyone who has seen the current state-of-the-art in AI-generated music, poetry or storytelling is likely to agree that the most sophisticated machines still have some way to go until their output will be as enjoyable to us as the best that humans can produce. However, the influence of AI on entertainment media is likely to increase. This year we saw Robert De Niro de-aged in front of our eyes with the assistance of AI in Martin Scorsese's epic The Irishman, and the use of AI in creating brand new visual effects and trickery is likely to become increasingly common.

In videogames, AI will continue to be used to create challenging, human-like opponents for players to compete against, as well as to dynamically adjust gameplay and difficulty so that games can continue to offer a compelling challenge for gamers of all skill levels. And while completely AI-generated music may not be everyone's cup of tea, where AI does excel is in creating dynamic soundscapes: think of smart playlists on services like Spotify or Google Music that match tunes and tempo to the mood and pace of our everyday lives.

8. AI will become ever more present in cybersecurity

As hacking, phishing and social engineering attacks become ever-more sophisticated, and themselves powered by AI and advanced prediction algorithms, smart technology will play an increasingly important role in protecting us from these attempted intrusions into our lives. AI can be used to spot giveaway signs that digital activity or transactions follow patterns that are likely to be indicators of nefarious activity, and raise alarms before defenses can be breached and sensitive data compromised.
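The pattern-spotting idea can be illustrated with a deliberately simple anomaly test: a z-score on transaction amounts, standing in for the far richer models real security products use (the function name and data here are purely illustrative).

```python
# Flag candidate transactions whose amount deviates sharply from an
# account's history: more than `threshold` standard deviations from the mean.
def flag_anomalies(history, candidates, threshold=3.0):
    mean = sum(history) / len(history)
    variance = sum((x - mean) ** 2 for x in history) / len(history)
    std = variance ** 0.5 or 1.0   # guard against a zero-variance history
    return [x for x in candidates if abs(x - mean) / std > threshold]

history = [20.0, 25.0, 22.0, 30.0, 24.0, 26.0]   # typical purchase amounts
print(flag_anomalies(history, [27.0, 500.0]))    # [500.0]
```

Production systems learn far subtler patterns than a single amount statistic, but the principle is the same: model normal behavior from data, then raise an alarm on activity that falls well outside it.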

The rollout of 5G and other super-fast wireless communications technology will bring huge opportunities for businesses to provide services in new and innovative ways, but they will also potentially open us up to more sophisticated cyber-attacks. Spending on cybersecurity will continue to increase, and those with relevant skills will be highly sought-after.

9. More of us will interact with AI, maybe without even knowing it

Let's face it: despite the huge investment in recent years in natural-language powered chatbots in customer service, most of us can recognize whether we're dealing with a robot or a human. However, as the datasets used to train natural language processing algorithms continue to grow, the line between humans and machines will become harder and harder to distinguish. With the advent of deep learning and semi-supervised models of machine learning such as reinforcement learning, the algorithms that attempt to match our speech patterns and infer meaning from our own human language will become more and more able to fool us into thinking there is a human on the other end of the conversation. And while many of us may think we would rather deal with a human when looking for information or assistance, if robots fulfill their promise of becoming more efficient and accurate at interpreting our questions, that could change. Given the ongoing investment and maturation of the technology powering customer service bots and portals, 2020 could be the first time many of us interact with a robot without even realizing it.

10. But AI will recognize us, even if we dont recognize it

Perhaps even more unsettlingly, the rollout of facial recognition technology is only likely to intensify as we move into the next decade. Not just in China (where the government is looking at ways of making facial recognition compulsory for accessing services like communication networks and public transport) but around the world. Corporations and governments are increasingly investing in these methods of telling who we are and interpreting our activity and behavior. There's some pushback against this: this year, San Francisco became the first major city to ban the use of facial recognition technology by the police and municipal agencies, and others are likely to follow in 2020. But the question of whether people will ultimately begin to accept this intrusion into their lives, in return for the increased security and convenience it will bring, is likely to be a hotly debated topic of the next 12 months.


Artificial Intelligence Software Market to Reach $126.0 Billion in Annual Worldwide Revenue by 2025, According to Tractica – Yahoo Finance

More than 330 AI Use Cases Will Contribute to Market Growth Across 28 Industries with the Strongest Enterprise AI Opportunity in Automotive, Consumer, Financial Services, Telecommunications and Retail Sectors

Artificial intelligence (AI) within the consumer, enterprise, government, and defense sectors is migrating from a conceptual "nice to have" to an essential technology driving improvements in quality, efficiency, and speed. According to a new report from Tractica, the top industry sectors where AI is likely to bring major transformation remain those in which there is a clear business case for incorporating AI, rather than pie-in-the-sky use cases that may not generate return on investment for many years.

"The global AI market is entering a new phase in 2020 where the narrative is shifting from asking whether AI is viable to declaring that AI is now a requirement for most enterprises that are trying to compete on a global level," says principal analyst Keith Kirkpatrick. According to the market intelligence company, AI is likely to thrive in the consumer (Internet services), automotive, financial services, telecommunications, and retail industries. Not surprisingly, the consumer sector has demonstrated its ability to capture AI, thanks to the combination of three key factors: large data sets, high-performance hardware, and state-of-the-art algorithms. Tractica estimates that many of the top enterprise AI verticals will follow and replicate a strategy similar to that of the consumer Internet companies. Annual global AI software revenue is forecast to grow from $10.1 billion in 2018 to $126.0 billion by 2025.
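As a quick back-of-the-envelope check, the forecast figures imply a compound annual growth rate (CAGR) that can be verified in a few lines (the revenue figures are Tractica's; the arithmetic is a standard CAGR calculation):

```python
# Tractica forecast: $10.1B (2018) growing to $126.0B (2025), a 7-year span.
start, end, years = 10.1, 126.0, 2025 - 2018

# CAGR = (end / start) ** (1 / years) - 1
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 43% per year
```

A sustained growth rate in that range is what underpins the report's claim that AI is moving from "nice to have" to an essential technology.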

Tractica's report, "Artificial Intelligence Market Forecasts," provides a quantitative assessment of the market opportunity for AI across the consumer, enterprise, government, and defense sectors. The study includes market sizing, segmentation, and forecasts for 333 AI use cases, including more than 200 unique use cases. Tractica has added use cases spread across multiple industries, including energy, manufacturing, retail, consumer, transportation, public sector, media and entertainment, telecommunications, and financial services. Global market forecasts, segmented by use case, technology, geography, revenue type, and meta category, extend through 2025. An Executive Summary of the report is available for free download on the firm's website.

About Tractica

Tractica, an Informa business, is a market intelligence firm that focuses on emerging technologies. Tractica's global market research and consulting services combine qualitative and quantitative research methodologies to provide a comprehensive view of the emerging market opportunities surrounding Artificial Intelligence, Robotics, User Interface Technologies, Advanced Computing and Connected & Autonomous Vehicles. For more information, visit http://www.tractica.com or call +1.303.248.3000.

View source version on businesswire.com: https://www.businesswire.com/news/home/20200106005317/en/

Contacts

Sherril Hanson, +1.303.248.3338, press@tractica.com

See the original post here:
Artificial Intelligence Software Market to Reach $126.0 Billion in Annual Worldwide Revenue by 2025, According to Tractica - Yahoo Finance

Artificial Intelligence(AI) Applications In Higher Education Business – CIO Applications

If too many applicants accept offers, Facilities Management may struggle to fit the class, creating a potential resource constraint. Financial aid or scholarships are offered to many applicants to entice them to accept the offer, and there is usually a fixed budget for financial aid. As seen above, admission decision-making involves several moving parts and multiple constraints. This creates a great opportunity to use data science to solve many of these problems. Data science models can be used to predict who should be offered admission, how likely an admitted applicant is to accept the offer, and how much financial aid should be awarded to each potential admit to encourage matriculation.
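A rough, self-contained sketch of what such a yield-prediction model could look like is below. The records, features (GPA, test percentile, aid offered, all scaled to 0-1), and from-scratch logistic regression are hypothetical illustrations, not any university's actual system:

```python
import math

# Hypothetical toy records: (GPA/4.0, test percentile/100, aid offered/$20k) -> accepted offer?
DATA = [
    ((0.975, 0.95, 1.00), 1), ((0.850, 0.80, 0.25), 0),
    ((0.925, 0.90, 0.75), 1), ((0.775, 0.70, 0.00), 0),
    ((0.950, 0.92, 0.50), 1), ((0.800, 0.75, 0.10), 0),
    ((0.900, 0.88, 0.60), 1), ((0.750, 0.65, 0.00), 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_yield_model(data, lr=0.1, epochs=3000):
    """Fit logistic regression by stochastic gradient descent: P(accept | features)."""
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of the log-loss with respect to the logit
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

w, b = train_yield_model(DATA)

# Estimated probability that a strong applicant offered generous aid accepts.
p = sigmoid(sum(wi * xi for wi, xi in zip(w, (0.95, 0.91, 0.70))) + b)
print(f"Predicted acceptance probability: {p:.2f}")
```

In practice, admissions offices would feed predictions like this into the budget-constrained aid allocation the article describes, trading off predicted yield against the fixed financial aid pool.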

Universities collect numerous data points for each applicant and student, from the beginning of the application cycle, through the years of the program, and on to career placement. These data comprise academic scores, demographics, academic interactions, performance, placement, and many other aspects. All of these data can be used comprehensively to review, monitor, and advise each student toward better academic outcomes during the course of the program and beyond.


Last but not least, AI applications can be used to streamline university financial operations. Machine learning can be used to facilitate accounting in terms of coding, reconciliation, and effective reporting.

Read the original here:
Artificial Intelligence(AI) Applications In Higher Education Business - CIO Applications

Trustworthy artificial intelligence in healthcare: It’s time to deliver – Open Access Government

In Europe, chronic diseases account for 86% of deaths and 77% of disease burden, thereby creating a tremendous challenge for societies. At the same time, digitalisation is bringing huge technological and cultural opportunities. In healthcare, the use of data-driven forecasts on individual and population health, through the integration of artificial intelligence (AI)-enabled algorithms, has the potential to revolutionise health protection and chronic care provision, while securing the sustainability of healthcare systems.

European start-ups, small and medium-sized enterprises (SMEs) and large corporations offer smart AI-enabled digital solutions, backed by medical evidence. They could help to achieve the WHO Sustainable Development Goals and reduce premature mortality from major chronic diseases by 25% by 2025.1 Some, but still too few, of these solutions are available on the market. A paradigm example is the increasing availability and reimbursement of closed-loop metabolic control (artificial pancreas) systems for persons with diabetes.2

"There is no excuse for inaction as we have evidence-based solutions." This statement was made by the co-chairs of the WHO Independent High-Level Commission on Noncommunicable Diseases in 2018, paired with an appeal to global governments to start from the top in taking action against the global disease burden.3

In April 2019, a high-level expert group on AI set up by the European Commission (EC) published its Ethics Guidelines for Trustworthy AI.4 The guidelines lay down criteria for trustworthy AI, with emphasis on ethical, legal and social issues, and pursue the goal of promoting trustworthy AI.

First, trustworthy AI should respect all applicable laws and regulations.

Second, trustworthy AI should adhere to the ethical principles of respect for human autonomy (e.g. people should keep full and effective self-determination); prevention of harm (paying particular attention to vulnerable persons); fairness (e.g. ensuring that people are free from discrimination and stigmatisation and can seek effective redress against AI-enabled decisions); and explicability (e.g. full traceability, auditability and transparent communication on system capabilities, particularly in the case of black-box algorithms).

And third, trustworthy AI should be robust and ensure that, even with good intentions, no unintentional harm can occur.

The guidelines also set out seven key requirements, inspired by these ethical principles, which should be met by developers, deployers and end-users alike.

First, human agency & oversight (e.g. decision autonomy, governance); second, technical robustness and safety (e.g. security & resilience to attack, accuracy & reproducibility of outcomes); third, privacy & data governance (e.g. data integrity & protection, governance of data access); fourth, transparency (data, systems, business models); fifth, diversity, non-discrimination and fairness (e.g. avoidance of bias, co-creation, user-centrism); sixth, societal and environmental well-being (e.g. promotion of democracy, ecological sustainability); and seventh, accountability (e.g. forecast quality, auditability, report of negative impacts).

Most recently, the guidelines underwent an early reality check through EIT Health, a public-private partnership of about 150 best-in-class health innovators backed by the European Union (EU) and collaborating across borders to deliver new solutions that can enable European citizens to live longer, healthier lives.5

A survey among start-ups and entrepreneurs, as well as EIT Health partners from industry, academia and research organisations, indicated currently low awareness of the guidelines (22% of respondents). More than 60% of respondents were aware that their AI application will need regulatory approval.

Among the seven requirements on trustworthy AI, the highest priority was given to privacy & data governance, technical robustness & safety, followed by traceability and human agency & oversight.

Lower ranked, though still relevant, were the ethics of diversity, non-discrimination & fairness (respondents are working on it, following e.g. an iterative approach to improving data sets and removing biases); accountability (currently, traditional auditing, post-market surveillance and procedures for redress appear to be relied on); and societal and environmental well-being (the former appears self-evident for health solutions, while consciousness of the latter in the context of health solutions is possibly not yet well established).

Clearly, there is a conflicting interdependence between a comprehensive resolution of every conceivable ethical, legal & social issue, the imperative to eventually break down the longstanding barriers to personalised and preventative healthcare (which would save millions of lives), and the need for European societies to compete globally for worldwide market penetration of trustworthy AI.

We agree with the recent "Time to Deliver" appeal from the World Health Organization (WHO).3 In collaboration with vital communities, such as EIT Health, European governments should go ahead in establishing a productive balance between promoting innovation, welcoming global competition and defining healthcare-specific ethical, legal and social requirements for trustworthy AI.

We welcome the idea of establishing world reference testing facilities for AI, recently contributed by Roberto Viola (Director General of DG Connect at the EC).6 EIT Health should be in a privileged position to orchestrate such testing facilities for AI, by providing secured validation environments that apply the high ethical and regulatory standards of clinical contract research.

Here, partners from innovation, education and business should collaborate on concrete AI-enabled solutions for an effective assessment of real risks and opportunities, followed by the provision of a solution-specific ELSI7 dossier, in order to then join forces to launch trustworthy AI-enabled solutions to the markets and scale up the business model.

That way, European societies could be impactful in breaking down innovation barriers and eventually providing thoroughly validated solutions globally to the persons who need them most.

References

1 As proclaimed by the WHO in 2012 (Gulland, A. 2013, BMJ 346:f3483; http://www.who.int/nmh/en/)

2 Schliess, F. et al. 2019, J. Diabetes Sci. Technol. 13(2):261-267; https://www.eithealth.eu/en_US/close; https://www.eithealth.eu/diabeloop-for-children; https://www.eithealth.eu/en_US/d4teens. close, diabeloop-for-children and d4teens are innovation projects dedicated to closing the loop in diabetes care. They are supported by EIT Health, a network of best-in-class health innovators that collaborates across borders and delivers solutions to enable European citizens to live longer, healthier lives. EIT Health is supported by the EIT, a body of the European Union.

3 https://www.who.int/ncds/management/time-to-deliver/en/

4 https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai

5 https://www.eithealth.eu/-/eit-health-holds-panel-on-ai-and-ethics-at-the-world-health-summit

6 https://www.eithealth.eu/ai-and-health

7 ELSI, ethical, legal and social issues.

Please note: This is a commercial profile


Read the original:
Trustworthy artificial intelligence in healthcare: It's time to deliver - Open Access Government

Failure to Scale Artificial Intelligence Could Put 80% of Indian Organizations Out of Business: Study – Analytics India Magazine

Nearly 80% of C-level executives in India believe that if they don't move beyond experimentation to aggressively deploy artificial intelligence (AI) across their organizations, they risk going out of business by 2025, according to a newly released study from Accenture.

The research, titled "AI: Built to Scale" and produced by Accenture Strategy and Accenture Applied Intelligence, found that while 79% of C-level executives in India believe they won't achieve their business strategy without scaling AI, only a few have made the shift from mere experimentation to creating an organization powered by robust AI capabilities. As a result, this small group of top performers is achieving nearly three times the return from AI investments as their lower-performing counterparts.

"Over the years, the use of AI has permeated various functions, demonstrating its true transformative potential. Most companies now also recognize that they need to scale AI for growth and relevance, yet they are unable to do so. Those who push through the barriers to embedding AI more deeply in their organization are seeing a return on their AI investment of 70% or more," said Anindya Basu, geographic unit and country senior managing director, Accenture in India.

He further added, "Indian businesses need to step on the pedal and learn from the leaders. They need to make strategic investments to scale AI, as that's the only way to realize its true business value."

The report reveals that the secret to success for top performers centres around three key elements:

These companies demonstrate their deep commitment by scaling AI at a much higher rate, conducting nearly twice as many pilots as other companies. However, this commitment to AI does not necessarily translate to higher spend, with top performers reporting lower investment levels on their AI implementations, pilots and full-scale deployments than lower performers.

According to the report, nearly all global respondents (approximately 95%) agreed on the importance of data as the foundation to scaling AI, but the top performers are more intentional and focused on ensuring that the right, relevant data assets are in place to underpin their AI efforts. They are more adept at structuring and managing data, with 61% wielding a large, accurate data set and more than two-thirds (67%) effectively integrating both internal and external data sets.

"A key barrier to the successful scaling of AI is the lack of the right people strategy. Companies need to ensure their employees understand both what AI is and how it applies to their day-to-day role. While the top leadership team can serve as the champions responsible for scaling AI initiatives, embedding teams with AI across the entire organization is not only a powerful signal about the strategic intent of the effort but will also enable faster culture and behavioural changes," said Saurabh Kumar Sahu, managing director and lead for Applied Intelligence, Accenture in India.

This strategic approach is further bolstered by another key characteristic of top performers: assembling the right talent to drive results. Instead of relying on a single AI champion, 92% have strategically embedded multi-disciplinary teams throughout their organizations. This cross-functional approach also helps ensure diversity of thinking, which, in addition to having tangible benefits for considerations like Responsible AI, can also maximize the value an organization sees from its AI deployments.


Read the rest here:
Failure to Scale Artificial Intelligence Could Put 80% of Indian Organizations Out of Business: Study - Analytics India Magazine

Use of artificial intelligence in different forms can help achieve $5 trillion economy: Piyush Goyal – The New Indian Express

By PTI

NEW DELHI: Use of artificial intelligence (AI) in different forms can help achieve the target of making India a USD 5 trillion economy in the coming years, Commerce and Industry Minister Piyush Goyal said on Monday.

He said various departments are working to see how AI, space technology and other modern tools can be used to push economic growth of the country.

"We in the government believe that AI can, in different forms, help us achieve the USD 5 trillion benchmark, which we have set for over (next) five years," he said here at a function.

The minister added that AI can also help drive expansion in a more cost-effective and outcome-oriented manner.

Goyal, who also holds the railways portfolio, said that in the railways, a team is focusing on how "we could benefit from AI," as the potential is humongous.

"AI can help in every sector to do our job better," he said, adding that it can help improve ease of living and ease of doing business.

Original post:
Use of artificial intelligence in different forms can help achieve $5 trillion economy: Piyush Goyal - The New Indian Express

MIT School of Engineering and Takeda join to advance research in artificial intelligence and health – MIT News

MIT's School of Engineering and Takeda Pharmaceutical Company Limited today announced the MIT-Takeda Program to fuel the development and application of artificial intelligence (AI) capabilities to benefit human health and drug development. Centered within the Abdul Latif Jameel Clinic for Machine Learning in Health (J-Clinic), the new program will leverage the combined expertise of both organizations and is supported by Takeda's three-year investment (with the potential for a two-year extension).

This new collaboration will provide MIT with extraordinary access to pharmaceutical infrastructure and expertise, and will help to focus work on challenges with lasting, practical impact. A new educational program offered through J-Clinic will provide Takeda with the ability to learn from and engage with some of MIT's sharpest and most curious minds, and offer insight into the advances that will help shape the health care industry of tomorrow.

"We are thrilled to create this collaboration with Takeda," says Anantha Chandrakasan, dean of MIT's School of Engineering. "The MIT-Takeda Program will build a community dedicated to the next generation of AI and system-level breakthroughs that aim to advance healthcare around the globe."

The MIT-Takeda Program will support MIT faculty, students, researchers, and staff across the Institute who are working at the intersection of AI and human health, ensuring that they can devote their energies to expanding the limits of knowledge and imagination. The new program will coalesce disparate disciplines, merge theory and practical implementation, combine algorithm and hardware innovations, and create multidimensional collaborations between academia and industry.

"We share with MIT a vision where next-generation intelligent technologies can be better developed and applied across the entire health care ecosystem," says Anne Heatherington, senior vice president and head of the Data Sciences Institute (DSI) at Takeda. "Together, we are creating an incredible opportunity to support research, enhance the drug development process, and build a better future for patients."

Established within J-Clinic, a nexus of AI and health care at MIT, the MIT-Takeda Program will focus on the following offerings:

James Collins will serve as faculty lead for the MIT-Takeda Program. Collins is the Termeer Professor of Medical Engineering and Science in MIT's Institute for Medical Engineering and Science (IMES) and Department of Biological Engineering, J-Clinic faculty co-lead, and a member of the Harvard-MIT Health Sciences and Technology faculty. He is also a core founding faculty member of the Wyss Institute for Biologically Inspired Engineering at Harvard University and an Institute Member of the Broad Institute of MIT and Harvard.

A joint steering committee co-chaired by Anantha Chandrakasan and Anne Heatherington will oversee the MIT-Takeda Program.

Read the original post:
MIT School of Engineering and Takeda join to advance research in artificial intelligence and health - MIT News