
Category Archives: Ai

Why AI systems should disclose that they’re not human – Fast Company

Posted: January 31, 2020 at 9:45 am

By Alex C. Engler | 4 minute read

We are nearing a new age of ubiquitous AI. Between your smartphone, computer, car, smart home, and social media, you might interact with some sort of automated, intelligent system dozens of times every day. For most of your interactions with AI, it will be obviously and intentionally clear that the text you read, the voice you hear, or the face you see is not a real person. However, other times it will not be so obvious. As automated technologies quickly and methodically climb out of the uncanny valley, customer service calls, website chatbots, and interactions on social media may become progressively less evidently artificial.

This is already happening. In 2018, Google demoed a technology called Duplex, which calls restaurants and hair salons to make appointments on your behalf. At the time, Google faced a backlash for using an automated voice that sounds eerily human, even employing vocal tics like "um," without disclosing its robotic nature. Perversely, today's Duplex has the opposite problem. The automated system does disclose itself, but at least 40% of its calls have humans on the phone, and it's very easy for call recipients to confuse those real people with AI.

As I argue in a new Brookings Institution paper, there is clear and immediate value to a broad requirement of AI disclosure in this case and many others. Mandating that companies explicitly note when users are interacting with an automated system can help reduce fraud, improve political discourse, and educate the public.

The believability of these systems is driven by AI models of human language, which are rapidly improving. This is a boon for applications that benefit society, such as automated closed-captioning and language translation. Unfortunately, corporations and political actors are going to find many reasons to use this technology to duplicitously present their software as real people. And companies have an incentive to deceive: A recent randomized experiment showed that when chatbots did not disclose their automated nature, they outperformed inexperienced salespeople. When the chatbot revealed itself as artificial, its sales performance dropped by 80%.

Harmless chatbots can help us order pizza or choose a seasonal flannel, but others are starting to offer financial advice and become the first point of contact in telemedicine. While there are benefits to these systems, it is naive to think they are exclusively designed to inform customers; they may also be intended to change behavior, especially toward spending more money. We can also expect to eventually find AI systems behind celebrity chatbots on platforms such as Instagram and WhatsApp. They will be pitched as a way to bring stars and their fans closer together, but in reality their goals may be to sell pseudoscientific health supplements or expensive athleisure brands. As the technology improves and the datasets expand, AI will only get more effective at driving sales, and customers should have a right to know this influence is coming from automated systems.

Undisclosed algorithms are a problem in political discourse, too. BuzzFeed News has reported that the industry group Broadband for America was behind the 2017 effort to submit millions of fake comments supporting the repeal of net neutrality to the Federal Communications Commission, sometimes using the names of the deceased. The coalition of companies, which includes AT&T, Cox, and Comcast, has faced no consequences for its deceptive use of automation, and the proliferation of AI technologies only makes this kind of political campaign easier in the future.

Bots operating on social media should also be clearly labeled as automated. During the 2016 election, 12.6% of politically engaged Twitter accounts were bots, accounting for 20% of all political tweets. Twitter deserves credit for actively fighting organized disinformation from bots, and for making its data available for research. But the scale of the problem is less known on other digital platforms. The numerous political bot campaigns on WhatsApp and the recent discovery of hundreds of AI-generated Facebook profiles suggest that the influence of automated systems on social media is an extensive problem. Although claims that bots are responsible for swinging major elections are likely overblown, research shows that they can further polarization and reduce dissenting opinions. Bots have also been observed spreading dangerous pseudoscientific messages, for instance against the MMR vaccine. While enforcing bot disclosure is difficult for social media companies, I argue in the Brookings paper that it's worth holding the digital platforms accountable to some standards.

Beyond fighting commercial fraud and deceptive politics, there are other advantages to an expansive AI disclosure requirement. If people know when they are interacting with AI systems, they will learn algorithms' strengths and limitations through repeated exposure. This is important, since understanding AI is complicated, and most people are misled by the portrayals of incredibly intelligent AI that exist in our popular culture: think Westworld, Battlestar Galactica, and Ex Machina. In reality, today's AI systems are narrow AI, meaning they may perform remarkably well at some tasks while being utterly infantile in others.

Since it would reduce deceptive political and commercial applications, requiring AI systems to disclose themselves turns out to be low-hanging fruit in the typically complex bramble of technology policy. We can't foresee the ways in which AI will be used in the future, but we are only in the first decade of modern AI. Now is the time to set a standard for transparency.

Alex Engler is a David M. Rubenstein Fellow at the Brookings Institution, where he studies the governance of artificial intelligence and emerging technology. He is also an adjunct professor and affiliated scholar at Georgetown University's McCourt School of Public Policy, where he teaches courses on data science for policy analysis.


Will Union Budget 2020 Change the AI Scenario of India? – Analytics Insight


The Union Budget 2019 of India placed Artificial Intelligence high on the agenda in the digital segment of the Budget. This declaration was significant given that China has been consistently building an ecosystem to fuel its ambition to become a world leader in AI by 2030.

The Union Budget 2020 will be presented by the Central Government on 1 February 2020. While expectations are being voiced by industry leaders, experts, and students, what the government actually prioritizes will only become clear next week when the budget is presented. In the meanwhile, business expectations for the Union Budget 2020 keep pouring in.

According to the China AI Development Report 2018, between 2013 and 2018 investment and financing in China's AI sector accounted for 60% of the global total, valued at $27 billion in 2017. India, by contrast, had not declared any budget allocation for its AI plan at that point. In 2019, however, NITI Aayog recognized that India, as the fastest-growing economy with the second-largest population in the world, has a critical stake in the AI revolution.

According to Dr. Ranjit Nair (PhD, AI), CEO and founder of Germin8, AI is one of the key arenas where nations compete for strength. It affects not only large segments of commerce but also areas like health, national security, cybersecurity, food security, education, and global warming. However, Dr. Nair observes that countries like the USA and China are leaving India behind in AI research, AI entrepreneurship, and government investment in AI.

According to most industry leaders, what India needs is a shift and allocation of resources to fill this critical gap and lay the groundwork across different industries. Accordingly, the industry expects the finance minister to announce adequate funding for reskilling, education, and ed-tech, to prepare the young population for the jobs that technological disruption will create and to move them away from roles that may soon become obsolete.

For India to become a $5 trillion economy, the young have a key role to play. The demographic dividend of the youth should be channelled and empowered. Quality education is significant, particularly in emerging fields such as artificial intelligence, big data, virtual reality, and machine learning, apart from management and services in established sectors.

The government has already taken a first step by focusing skilling efforts on emerging technologies like artificial intelligence and robotics. Experts believe the government is expected to support the reskilling of the IT workforce to help it remain relevant in the global industry, which implies rising education budgets with a key focus on STEM. Experts also say automation needs to be brought into the classroom to make education more productive and to compensate for the shortage of teachers.

The government can announce AI grand challenges, open to teams from academia and industry, that involve solving a significant problem for India. The government's role here is to provide a crisp problem definition, access to data, and of course a decent cash prize. Such AI grand challenges would get important problems solved, spawn new companies and jobs, capture the country's imagination, and act as a driving force for the field of AI.

One of the greatest challenges that startups face is early-stage funding. The government could announce a fund, along the lines of Singapore's Temasek, that invests only in early-stage Indian AI startups. Likewise, the government could announce a lower long-term capital gains tax for investing in AI-based startups, which would encourage more angel investment into AI startups. While India produces a great many engineers, it still lags other nations in the number of AI PhDs and in AI research. The government should make more research grants available for AI research and should also offer incentives to institutes that invest in AI training.

An upskilling fund is the need of the hour to address the skills gap created in India by emerging technologies such as AI and machine learning. Every business should be mandated to provide a learning budget that helps its employees continuously upskill, and this expenditure should be reimbursed to the business through the upskilling fund, dispensed as a tax rebate. A government-incentivized corporate commitment to upskilling would ensure that the country's human resources are skilled in emerging, in-demand areas, while energizing the workforce and allaying fears about the future.


AI-powered robots will be the next big work revolution in warehouses – The Verge


Right now, in a warehouse not far from Berlin, a bright yellow robot is leaning over a conveyor, picking items out of crates with the assurance of a chicken pecking grain.

The robot itself doesn't look that unusual, but what makes it special are its eyes and brain. With the help of a six-lens camera array and machine learning algorithms, it's able to grab and pack items that would confound other bots. And thanks to a neural network it will one day share with its fellows in warehouses around the world, anything it learns, they'll learn, too. Show this bot a product it's never seen before and it'll not only work out how to grasp it, but then feed that information back to its peers.

"We tested this robot for three or four months, and it can handle nearly everything we throw at it," Peter Puchwein, vice president of innovation at Knapp, the logistics company that installed the robot, tells The Verge. "We're really going to push these onto the market. We want a very high number of these machines out there."

For the bot's creators, Californian AI and robotics startup Covariant, the installation in Germany is a big step forward, and one that shows the firm has made great strides with a challenge that's plagued engineers for decades: teaching robots to pick things up.

It sounds easy, but this is a task that's stumped some of the biggest research labs and tech companies. Google has run a stable of robot arms in an attempt to learn how to reliably grasp things (employees jokingly call it "the arm pit"), while Amazon holds an annual competition challenging startups to stock shelves with robots in the hope of finding a machine good enough for its warehouses (it hasn't yet).

But Covariant claims its bots can do what others can't: work 24 hours a day, picking items without fuss. This doesn't mean that picking is a solved problem (Covariant's robots use suction cups, not robotic fingers, which makes the task easier), but it does unlock a lot of potential. This is particularly true in the world of warehouses and logistics, where experts say it's difficult to find human workers and they need all the robots they can get.

Speaking to The Verge, Pieter Abbeel, Covariant co-founder and the director of the Berkeley Robot Learning Lab, compares the current market in robot pickers to that of self-driving cars: there's a lot of hype and flashy demos, but not enough real-world testing and ability.

"Our customers don't trust short demo videos anymore," says Abbeel. "They know very well most of the difficulty is in consistency and reliability."

Puchwein of Knapp agrees, telling The Verge: "The typical thing for startups to do is to show some short, well-edited videos. But as soon as you try to test the robots, they fail."

A lot of this hype has been generated by the promise of machine learning. Today's industrial robots can pick with great speed and precision, but only if what they're grabbing is equally consistent: regular shapes with easy-to-grasp surfaces. That's fine in manufacturing, where a machine has to grab the same item over and over again, but terrible in retail logistics, where the objects being packed for shipping vary hugely in size and shape.

Hardcoding a robots every move, as with traditional programming, works great in the first scenario but terribly in the second. But if you use machine learning to feed a system data and let it generate its own rules on how to pick instead, it does much, much better.

Covariant uses a variety of AI methods to train its robots, including reinforcement learning: a trial-and-error process where the robot has a set goal (move object x to location y) and has to solve it itself. Much of this training is done in simulations, where the machines can take their time, often racking up thousands of hours of work. The result is what Abbeel calls the "Covariant Brain," a nickname for the neural network shared by the company's robots.
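That trial-and-error loop can be sketched with a tabular Q-learning toy. Everything below is illustrative: a made-up one-dimensional "move to the goal position" task with hypothetical parameters, not Covariant's actual training code, which uses neural networks rather than a lookup table.

```python
import random

# Toy reinforcement learning: an agent learns by trial and error to
# move from position 0 to a goal position on a short 1-D track.
random.seed(0)

GOAL = 4
ACTIONS = (-1, 1)                  # step left or step right
alpha, gamma, epsilon = 0.5, 0.9, 0.2
q = {}                             # Q-table: (state, action) -> estimated value

def best_value(state):
    return max(q.get((state, a), 0.0) for a in ACTIONS)

def choose_action(state):
    # Mostly exploit the best-known action; occasionally explore at random.
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q.get((state, a), 0.0))

for episode in range(2000):
    state = 0
    for _ in range(20):            # cap episode length
        action = choose_action(state)
        nxt = max(0, min(GOAL, state + action))
        reward = 1.0 if nxt == GOAL else 0.0
        old = q.get((state, action), 0.0)
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        q[(state, action)] = old + alpha * (reward + gamma * best_value(nxt) - old)
        state = nxt
        if state == GOAL:
            break

# The learned greedy policy: the best action at each non-goal state.
policy = {s: max(ACTIONS, key=lambda a: q.get((s, a), 0.0)) for s in range(GOAL)}
```

After enough episodes the greedy policy converges to "step right" at every state, which is the sense in which such a system generates its own rules from experience instead of being hand-coded.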

Covariant, which was founded in 2017 under the name Embodied Intelligence and comes out of stealth today, is certainly not the only firm applying these methods, though. Numerous startups like Kindred and RightHand Robotics use similar fusions of machine learning and robotics. But Covariant is bullish that its robots are better than anyone else's.

"Real world deployments are about extreme consistency and reliability," says Abbeel. In the warehouse in Germany, Covariant claims its machines can pick and pack some 10,000 different items with accuracy greater than 99 percent, an impressive figure.

Puchwein agrees, and he would know. He's got 16 years of experience in the industry, including working for Knapp, one of the largest builders of automated warehouses worldwide. It installed 2,000 systems last year with a turnover of more than 1 billion.

Puchwein says the company's engineers traveled around the world to find the best picking robots and eventually settled on Covariant's, which it installs as a nonexclusive partner. "Non-AI robots can pick around 10 percent of the products used by our customers, but the AI robot can pick around 95 to 99 percent," says Puchwein. "It's a huge difference."

Puchwein isn't the only one on board, either. As it comes out of stealth today, Covariant has announced a raft of private backers, including some of the most high-profile names in AI research. They include Google's head of AI, Jeff Dean; Facebook's head of AI research, Yann LeCun; and one of the "godfathers of AI," Geoffrey Hinton. As Abbeel says, the involvement of these individuals is as much about lending their reputation as anything else. "Investors aren't just about the money they bring to the table," he says.

For all the confidence, investor and otherwise, Covariant's operation is incredibly small right now. It has just a handful of robots in operation full time, in America and abroad, in the apparel, pharmaceutical, and electronics industries.

In Germany, Covariant's picking robot (there's just one for now) is packing electronics components for a firm named Obeta, and the company says it's eager for more robots to compensate for a staff shortage, a situation common in logistics.

For all the talk of robots taking human jobs, there just aren't enough humans to do some jobs. One recent industry report suggests 54 percent of logistics companies will face staff shortages in the next five years, with warehouse workers among the most in-demand positions. Low wages, long hours, and boring working conditions are cited as contributing factors, as is a falling unemployment rate (in the US at least).

"It's very hard to find people to do this sort of work," Michael Pultke of Obeta tells The Verge through a translator. He says Obeta relies on migrant workers to staff the company's warehouses, and that the situation is the same across Europe. "The future is more robots."

And what about the employees that Covariant's robots now operate alongside: do they mind the change? According to Pultke, they don't see it as a threat, but as an opportunity to learn how to maintain the robots and get a better type of job. "Machines should do the base work, which is stupid and simple," says Pultke. "People should look after the machines."


‘The Social Dilemma’ and ‘Coded Bias’ docs sound the alarm on AI – Mashable


Cautionary tales about AI were all over Sundance screens this year, and not as sci-fi flights of fancy.

Two new documentaries, The Social Dilemma and Coded Bias, dig into the pitfalls of artificial intelligence as it currently exists: manipulating our social-media feeds, determining our financial or professional futures, surveilling us on the streets. What they find isn't pretty.

Of the two, The Social Dilemma feels broader in scope. Over 93 minutes, it touches upon surveillance, capitalism, addiction, and polarization; looks into social media's detrimental effects on everything from self-esteem to democracy; serves up personal anecdotes and emphatic pleas and detailed data analyses. Any one of these topics could have made for a compelling documentary in its own right. Collect them all in one place, and The Social Dilemma can get to feel a bit unwieldy.


The film's strength lies in the impressive array of talking-head interviews director Jeff Orlowski has collected from tech-industry insiders and academics, including Tim Kendall, former president of Pinterest and former director of monetization at Facebook; Tristan Harris, the co-founder of the Center for Humane Technology described in archival news footage as "the closest thing the tech industry has to a conscience"; and Rashida Richardson, director of policy research at the AI Now Institute.

His subjects are clearly knowledgeable and passionate about the issue they've been tapped to address, and, in some cases, more than a little aghast at the system they themselves have helped create. When these people warn you that the AI employed by social media companies are smarter than you are, or point out the subtle design choices employed to make you click, or explain how Pizzagate got served to social media users who never even searched for it, you're inclined to listen.

Less effective is a cheesy fictional vignette woven in throughout, following a teenage boy (Skyler Gisondo) who gets a little too hooked on social media. (Vincent Kartheiser has a triple role as the "algorithms" determining what he sees on his feeds.) The dialogue and acting have the stilted, generic quality of an educational video, which undermines the very urgency that the narrative is meant to emphasize.

Ultimately, though, Orlowski manages to build a strong case for simply being aware of how social media is hacking us, and why, and why it matters. By the time the film cut to archival footage of Mark Zuckerberg suggesting the solution to Facebook's 2016 election woes was more AI, my audience knew enough to laugh derisively. Maybe The Social Dilemma doesn't have all the answers, but it's a good start to figuring out the questions.

Joy Buolamwini in 'Coded Bias.'

Image: Sundance Institute

It also serves as an ideal, if unintentional, jumping-off point for Coded Bias. Directed by Shalini Kantayya, the documentary begins with Joy Buolamwini's realization as an MIT student that facial-recognition technology had a harder time identifying certain types of faces, like her own dark-skinned female one, and follows her down the rabbit hole to examine the serious consequences of that seemingly minor annoyance.

Kantayya delves into the very human biases baked into artificial intelligence by its largely white and male creators, and the problems that ensue when these black-box programs are assumed to be neutral: the violations of civil rights, the discriminatory decisions in hiring, housing, and criminal justice. Far from creating a more level playing field through impartial judgment, Coded Bias argues, AI has the potential to exacerbate existing inequalities.


Notably, the film is built around exactly the kinds of people disadvantaged by these issues. Almost all of the experts interviewed in the movie are women, including Weapons of Math Destruction author Cathy O'Neil (who also appears in The Social Dilemma), Big Brother Watch director Silkie Carlo, and futurist Amy Webb. We meet tenants of a housing project about to install facial-recognition software, and a beloved teacher who's been fired after an algorithm deemed his performance poor, and a young woman navigating China's state-run Social Credit System.

Through them, bias in AI becomes a concrete, human concern, rather than an abstract possibility. And through them, it becomes an issue to rally around. Without shying away from the issue's enormity or its devastating consequences, Coded Bias gradually works toward an almost inspirational vibe, as Buolamwini and others get to work solving the problem they've identified.

We're invited to share in her triumph when she gets the opportunity to testify before the House that "algorithmic justice [is] one of the biggest civil rights concerns we have," or Carlo's when she teams with British politician Jenny Jones to bring a legal challenge against the London PD's use of facial recognition cameras.

These victories prove essential in reminding us that something can be done. Not in the vague "somebody should do something" sense, but in the more concrete way of pressuring governments to pass laws regulating the use of algorithms, or calling out corporations using faulty AI to unjust ends. Hang on to the can-do spirit those little wins engender; as both Coded Bias and The Social Dilemma will tell you, we'll need it to take on the massive amount of work still left to be done.


AI Stats News: 35% Of Workers Worldwide Expect Their Job Will Be Automated – Forbes


Recent surveys, studies, forecasts and other quantitative assessments of the progress of AI highlight anxiety about AI eliminating jobs, the competition for AI talent, questions about employees' AI preparedness, and data quality, literacy, privacy, and security.


The future of work

35% of workers across 28 countries expect their job will be automated in the next 10 years; 29% of workers (82% of US workers) are very confident (40% somewhat confident) they have the skills needed so their job continues to exist in the future; workers with a higher level of education (36%) expect their job to be automated at roughly the same rate as those who do not have a higher education (32%); countries where workers are most likely to anticipate that their job will be automated in the next decade are: India (71%), Saudi Arabia (56%), China (55%), Brazil (51%), and Mexico (50%) [Ipsos survey of 13,751 adults in 28 countries].

Major cities are battling to become AI hubs and attract relevant talent: New York City ranks #1 in the world, ahead of London, Singapore, and San Francisco. Boston (#5) and Los Angeles (#9) also made the top 10 cities, meaning 40% of the world's most competitive cities are in the U.S. Hong Kong ranked #6, Paris #7, Tokyo #8, and Munich #10 [INSEAD Global Talent Competitiveness Index (GTCI), developed in partnership with the Adecco Group and Google].

AI business adoption

79% of C-level executives believe their employees are well prepared for AI, compared to only 38% of managers; adoption and deployment challenges include lack of understanding of AI capabilities (46%), lack of training (36%), and lack of initial investment funding (32%); AI is more hype than reality right now: Transportation (69%), technology (57%), healthcare (52%), retail (64%), financial services(42%) [KPMG survey of 750 business decision-makers worldwide].

61% of manufacturing companies say they need to reevaluate the way they implement AI projects; 17% say their company was in the full implementation stage of their AI projects; 72% say it took more time than anticipated for their company to implement the technical/data collection infrastructure needed to take advantage of the benefits of AI; 20% implemented AI initiatives due to industry or peer pressure to utilize the technology; 60% say their company struggled to come to a consensus on a focused, practical strategy for implementing AI [Plutoshift survey of 250 manufacturing professionals].

The Life of Data, the fuel for AI: Quality and literacy

75% of C-Suite executives aren't confident in the quality of their data; 46% of data professionals report spending over 10 hours properly preparing data for an analytics and AI/ML initiative while others spend as much as 40 hours on data preparation processes alone on a weekly basis; poor data quality caused AI/ML projects to take longer (38%), cost more (36%), and fail to achieve the anticipated results (33%) [Trifacta survey of 646 data professionals].

61% report that data overload has contributed to workplace stress, culminating in nearly 31% of the global workforce taking at least one day of sick leave due to stress related to information, data and technology issues; each year companies lose an average of more than five working days (43 hours) per employee due to procrastination and sick leave stemming from stress around information, data and technology issues, equating to billions in lost productivity around the globe; despite nearly all employees (87%) recognizing data as an asset, only 25% believe they're fully prepared to use data effectively, and just 21% report being confident in their data literacy skills: their ability to read, understand, question and work with data; only 37% trust their decisions more when based on data, and 48% frequently defer to a gut feeling rather than data-driven insights when making decisions [Accenture and Qlik survey of 9,000 employees in the UK, USA, Germany, France, Singapore, Sweden, Japan, Australia and India].

49% of IT organizations state that data is their business with another 31% expecting to offer data-centric products and services within the next 24 months [ESG].

The Life of Data, the fuel for AI: Privacy

83% of Americans expect to have control over how their data is used at a business; 65% would like to know and have access to what information businesses are collecting about them; 62% would like the right to opt out and tell a business not to share or sell personal information; 58% would like the right to protections against businesses that do not uphold the value of their privacy; 49% would like the right to delete their personal data held by the business; 82% think there should be a national privacy law to protect their personal data; 73% would pay more to online services companies (retailers, ecommerce, and social media) to ensure they didn't sell their data, show them ads, or use their data for marketing or sales purposes; 49% have had their personal data involved in a large corporate data breach; only 24% are familiar with or have heard of CCPA [DataGrail survey of 2,000 US adults].

88% of Americans would share their healthcare data to develop cancer therapies; 82% believe patients should be compensated for sharing health data; 68%+ have favorable attitudes towards AI in oncology therapy development, and expect it to improve cancer treatment; those aged 60+ are much less likely to want compensation for their healthcare data than those aged 18-44 [Lantern Pharma survey of 1,054 US adults].

Over 70% of organizations (up from 40% last year) say they receive significant business benefits from privacy efforts beyond compliance, including improved attractiveness to investors; organizations, on average, receive benefits 2.7 times their investment, and more than 40% are seeing benefits that are at least twice that of their privacy spend [Cisco survey of 2,800 security professionals].

The Life of Data, the fuel for AI: Security

70% of cybersecurity professionals investigate more than 10 security alerts daily, a marked increase from 2018 when just 45% reported investigating double-digit alerts each day; survey respondents report a false-positive rate of 50% or higher; 78% said it takes more than 10 minutes to investigate each alert, a significant increase from 64% who said the same in 2018; 41% believe their primary responsibility is to analyze and remediate threats, opting instead to reduce investigation times and alert volumes, a dramatic decrease from 70% in 2018 [CRITICALSTART survey of more than 50 Security Operations Center (SOC) professionals].

88% of small- and medium size (SMB) cybersecurity professionals report high levels of interest in adopting AI within their business; 70% of those interested were not aware of potential cybersecurity risks that could accompany its use; 54% of all SMBs interested in AI will move forward with adoption despite the known risks, as they believe the benefits outweigh the risks [Zix-AppRiver survey of 1,049 cybersecurity decision-makers in U.S. SMBs (fewer than 250 employees)].

AI quotable quotes

"There is no monopoly on math. Absent a very strong federal privacy law, we're all screwed" (Al Gidari, Stanford Law School).

See the original post here:

AI Stats News: 35% Of Workers Worldwide Expect Their Job Will Be Automated - Forbes

Posted in Ai | Comments Off on AI Stats News: 35% Of Workers Worldwide Expect Their Job Will Be Automated – Forbes

Artificial intelligence, geopolitics, and information integrity – Brookings Institution

Posted: at 9:45 am

Much has been written, and rightly so, about the potential that artificial intelligence (AI) can be used to create and promote misinformation. But there is a less well-recognized but equally important application for AI in helping to detect misinformation and limit its spread. This dual role will be particularly important in geopolitics, which is closely tied to how governments shape and react to public opinion both within and beyond their borders. And it is important for another reason as well: While nation-state interest in information is certainly not new, the incorporation of AI into the information ecosystem is set to accelerate as machine learning and related technologies experience continued advances.

The present article explores the intersection of AI and information integrity in the specific context of geopolitics. Before addressing that topic further, it is important to underscore that the geopolitical implications of AI go far beyond information. AI will reshape defense, manufacturing, trade, and many other geopolitically relevant sectors. But information is unique because information flows determine what people know about their own country and the events within it, as well as what they know about events occurring on a global scale. And information flows are also critical inputs to government decisions regarding defense, national security, and the promotion of economic growth. Thus, a full accounting of how AI will influence geopolitics of necessity requires engaging with its application in the information ecosystem.

This article begins with an exploration of some of the key factors that will shape the use of AI in future digital information technologies. It then considers how AI can be applied to both the creation and detection of misinformation. The final section addresses how AI will impact efforts by nation-states to promoteor impedeinformation integrity.

Read and download the full article, Artificial intelligence, geopolitics, and information integrity.

Read the original post:

Artificial intelligence, geopolitics, and information integrity - Brookings Institution

Posted in Ai | Comments Off on Artificial intelligence, geopolitics, and information integrity – Brookings Institution

Extraterrestrial Technosignatures AI of the Future Could Reveal the Incomprehensible – The Daily Galaxy –Great Discoveries Channel

Posted: at 9:45 am

Posted on Jan 30, 2020 in Science

"If AI identifies something our mind cannot understand or accept, could it in the future go beyond our level of consciousness and open doors to a reality for which we are not prepared? What if the square and triangle of Vinalia Faculae in Ceres were artificial structures?" asked Spanish clinical neuropsychologist Gabriel G. De la Torre about the application of artificial intelligence to the search for extra-terrestrial intelligence and the identification of a possible technosignature: a square structure within a triangular one in a crater on the dwarf planet Ceres.


De la Torre's research study, "Does artificial intelligence dream of non-terrestrial techno-signatures?", suggests that one of the potential applications of artificial intelligence is not only to assist in big data analysis but to help discern possible artificiality or oddities in patterns of radio signals, megastructures, or techno-signatures in general.

"Our form of life and intelligence," observed Silvano P. Colombano of NASA's Ames Research Center, who was not involved in the Ceres experiment, "may just be a tiny first step in a continuing evolution that may well produce forms of intelligence that are far superior to ours and no longer based on carbon machinery."

The result of De la Torre's intriguing visual experiment calls into question the application of artificial intelligence to the search for extra-terrestrial intelligence (SETI), where advanced and ancient technological civilizations may exist but be beyond our comprehension or ability to detect.

Ceres, although the largest object in the main asteroid belt, is a dwarf planet. It became famous a few years ago for one of its craters: Occator, where some bright spots were observed, leading to all manner of speculation. The mystery was solved when NASA's Dawn probe came close enough to discover that these bright spots originated from volcanic ice and salt emissions.

Researchers from the University of Cadiz (Spain) have looked at one of these spots, called Vinalia Faculae, and have been struck by an area where geometric shapes are ostensibly observable. This peculiarity prompted them to propose a curious experiment: to compare how human beings and machines recognize planetary images. The ultimate goal was to analyse whether artificial intelligence (AI) can help discover technosignatures of possible extra-terrestrial civilizations.

"We weren't alone in this; some people seemed to discern a square shape in Vinalia Faculae, so we saw it as an opportunity to confront human intelligence with artificial intelligence in a cognitive task of visual perception, not just a routine task but a challenging one with implications bearing on the search for extraterrestrial life (SETI), no longer based solely on radio waves," explains Gabriel G. De la Torre.

Alien Technosignatures Buried in the Radio Pipeline Data?

The team of this neuropsychologist from the University of Cadiz, who has already studied the problem of undetected non-terrestrial intelligent signals (the "cosmic gorilla" effect), brought together 163 volunteers with no training in astronomy to determine what they saw in the images of Occator.

They then did the same with an artificial vision system based on convolutional neural networks (CNN), previously trained with thousands of images of squares and triangles so as to be able to identify them.

"Both people and artificial intelligence detected a square structure in the images, but the AI also identified a triangle," notes De la Torre, "and when the triangular option was shown to humans, the percentage of persons claiming to see it also increased significantly." The square seemed to be inscribed in the triangle.
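The machine side of this experiment can be caricatured with a much simpler technique than the study's actual convolutional network: sliding a shape template across an image and scoring how well each position matches. Everything below is invented for illustration (the functions, grid, and thresholds are not from the paper), but it shows concretely how a detector's threshold choice alone can manufacture spurious "detections", the failure mode De la Torre warns about.

```python
# Hypothetical sketch: template matching as a stand-in for a trained shape
# detector. A lenient score threshold starts "seeing" shapes that are not there.
def match_score(image, template, top, left):
    """Fraction of template pixels that agree with the image patch."""
    h, w = len(template), len(template[0])
    hits = sum(
        1
        for r in range(h)
        for c in range(w)
        if image[top + r][left + c] == template[r][c]
    )
    return hits / (h * w)

def detect(image, template, threshold=0.9):
    """Slide the template over the image; return positions scoring >= threshold."""
    H, W = len(image), len(image[0])
    h, w = len(template), len(template[0])
    return [
        (r, c)
        for r in range(H - h + 1)
        for c in range(W - w + 1)
        if match_score(image, template, r, c) >= threshold
    ]

# A 4x4 square outline embedded at (1, 1) in a 6x6 field of zeros.
square = [
    [1, 1, 1, 1],
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
]
field = [[0] * 6 for _ in range(6)]
for r in range(4):
    for c in range(4):
        field[1 + r][1 + c] = square[r][c]

print(detect(field, square))       # [(1, 1)]: only the true location
print(detect(field, square, 0.5))  # a looser threshold adds spurious hits
```

The strict threshold recovers exactly the planted square; halving it returns extra positions that merely overlap the pattern, which is the template-matching analogue of an AI "detecting impossible or false things."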

Undetectable NASA Suggests We May Be Blind to Signs of Alien Technologies

These results, published in the journal Acta Astronautica, have allowed the researchers to draw several conclusions. "On the one hand, despite being fashionable and having a multitude of applications, artificial intelligence could confuse us and tell us that it has detected impossible or false things," says De la Torre, "and this therefore compromises its usefulness in tasks such as the search for extra-terrestrial technosignatures in some cases. We must be careful with its implementation and use in SETI."

Finally, the neuropsychologist points out that AI systems suffer from the same problems as their creators: the implications of biases in their development should be further studied while those systems remain under human supervision.

De la Torre concludes by acknowledging that, in reality, we don't know what it is, but what artificial intelligence has detected in Vinalia Faculae is most probably just a play of light and shadow.

Source: Gabriel G. De la Torre. Does artificial intelligence dream of non-terrestrial techno-signatures? Acta Astronautica 167: 280-285, February 2020.

The Daily Galaxy, Jake Burba, via FECYT Spanish Foundation for Science and Technology

Excerpt from:

Extraterrestrial Technosignatures AI of the Future Could Reveal the Incomprehensible - The Daily Galaxy --Great Discoveries Channel

Posted in Ai | Comments Off on Extraterrestrial Technosignatures AI of the Future Could Reveal the Incomprehensible – The Daily Galaxy –Great Discoveries Channel

AI Can Identify Embryos with Highest Likelihood of Success During IVF – Analytics Insight

Posted: at 9:45 am

As AI spreads its reach across the healthcare sector, we hear of new and innovative approaches to maintaining our health and fitness. Recently, it was noted that AI is being used in IVF to select embryos with the highest chance of resulting in a successful pregnancy. The algorithm used for this, known as Ivy, analyses time-lapse videos of embryos as they are incubated after being fertilized and identifies which ones have the highest likelihood of successful development.

The mechanism was developed by Harrison.AI, a tech firm based in Sydney, whose CEO and co-founder, Aengus Tran, is the inventor of Ivy. It has been used for several thousand women undergoing IVF in Australia; notably, women who undergo IVF using Ivy are informed about the algorithm and consent to its use. Aengus is a medicine student at the University of New South Wales and has designed a pioneering artificial intelligence system that is helping embryologists improve IVF pregnancy rates.

According to a report, Aengus identified that AI could be used to make IVF-related decisions faster and better, based on machine learning from thousands of previous successful and unsuccessful embryos, and ultimately designed the system now known as Ivy. Together with his brother Dimitry, an AGSM @ UNSW Business School Executive MBA alumnus, he set up a company called Harrison.AI.

Moreover, Ivy is a self-improving AI system that continuously learns from the embryos it analyses via a comprehensive three-dimensional assessment of their growth through all stages of development in an incubator. It then relates this data to whether a foetal heart has developed or not.

Aengus, who also serves as Chief Data Scientist at Harrison.AI, said: "Ivy has taught itself how to select out the embryo with the highest potential to create a foetal heart. It starts with a completely blank canvas and it's not influenced by any previous human knowledge or bias. It has learned directly from thousands of embryos that have had a known foetal heart outcome and has slowly and steadily improved itself to become better and better at selecting embryos."
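The selection step Aengus describes can be sketched in miniature. The scores below are invented stand-ins for a video model's per-embryo predictions (the real Ivy pipeline is not public, and `rank_embryos` is a hypothetical helper); the point is only the final decision logic of picking the highest-likelihood candidate.

```python
# Hypothetical sketch: given each embryo's model-predicted probability of a
# foetal heart outcome (made-up numbers; a real system would obtain them from
# a video model like Ivy), rank the candidates for transfer.
def rank_embryos(predictions):
    """predictions: {embryo_id: predicted foetal-heart probability}."""
    return sorted(predictions, key=predictions.get, reverse=True)

scores = {"E1": 0.42, "E2": 0.77, "E3": 0.18}
print(rank_embryos(scores)[0])  # E2, the highest predicted likelihood
```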

The company has now partnered with Virtus Health, a leading Australian provider of assisted reproductive services, and is poised to introduce Ivy technology in IVF Australia clinics nationwide and across Europe later this year. Harrison.AI is also hoping to branch the technology out to help patients suffering from various health issues, including lung problems and eye disease.

Dimitry, chairman at Harrison.AI, said: "A lot of very exciting things are possible right now because of the level we're at with computing power. It's opening up a whole new field of innovation, and we're very much in the early days of working out how to apply AI to health care."

He further added: "In the past, people used to talk about the trifecta problem: you could not have something that was very fast, very good, and very cheap; you could only pick two of those. But now, for the first time, with AI it is possible to be fast, accurate, and cheap."




Continued here:

AI Can Identify Embryos with Highest Likelihood of Success During IVF - Analytics Insight

Posted in Ai | Comments Off on AI Can Identify Embryos with Highest Likelihood of Success During IVF – Analytics Insight

Artificial Intelligence (AI) in battling the coronavirus – ELE Times

Posted: at 9:45 am

Artificial intelligence technology can today automatically mine news reports and online content from around the world, helping experts recognize anomalies that could lead to a potential epidemic or, worse, a pandemic. In other words, our new AI overlords might actually help us survive the next plague. These new AI capabilities are on full display in the recent coronavirus outbreak, which was identified early by a Canadian firm called BlueDot, one of a number of companies that use data to evaluate public health risks.

The company, which says it conducts "automated infectious disease surveillance," notified its customers about the new form of coronavirus at the end of December, days before both the US Centers for Disease Control and Prevention (CDC) and the World Health Organization (WHO) sent out official notices, as reported by Wired. Now, nearing the end of January, the respiratory virus that's been linked to the city of Wuhan in China has already claimed the lives of more than 100 people. Cases have also popped up in several other countries, including the United States, and the CDC is warning Americans to avoid non-essential travel to China.

But artificial intelligence can be far more useful than just keeping epidemiologists and officials informed as a disease pops up. Researchers have built AI-based models that can predict outbreaks of the Zika virus in real time, which can inform how doctors respond to potential crises. Artificial intelligence could also be used to guide how public health officials distribute resources during a crisis. In effect, AI stands to be a new first line of defense against disease.

Other data, like traveler itinerary information and flight paths, can help give the company additional hints about how a disease will likely spread. For instance, earlier this month, BlueDot researchers predicted other cities in Asia where the coronavirus would show up after it appeared in mainland China.

The idea behind BlueDot's model is to get information to health care workers as quickly as possible, with the hope that they can diagnose and, if needed, isolate infected and potentially contagious people early on.


More broadly, AI is already assisting with researching new drugs, tackling rare diseases, and detecting breast cancer. AI was even used to identify the insects that spread Chagas, an incurable and potentially deadly disease that has infected an estimated 8 million people in Mexico and Central and South America. There's also increasing interest in using non-health data, like social media posts, to help health policymakers and drug companies understand the breadth of a health crisis. For instance, AI can mine social media posts to track illegal opioid sales and keep public health officials informed about the spread of these controlled substances.

Still, all of these advancements represent a more optimistic outlook for what AI can do. Typically, news of AI robots sifting through large swathes of data doesn't sit so well. Think of law enforcement using facial recognition databases built on images mined from across the web, or hiring managers who can now use AI to predict how you'll behave at work based on your social media posts. The idea of AI battling deadly disease offers a case where we might feel slightly less uneasy, if not altogether hopeful. Perhaps this technology, if developed and used properly, could actually help save some lives.

Similarly, the epidemic-monitoring company Metabiota determined that Thailand, South Korea, Japan, and Taiwan had the highest risk of seeing the virus show up more than a week before cases in those countries were actually reported, partially by looking at flight data. Metabiota, like BlueDot, uses natural-language processing to evaluate online reports about a potential disease, and it's also working on developing the same technology for social media data.
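The core signal behind systems like BlueDot and Metabiota can be caricatured very simply: count disease-related mentions in each day's news text and flag days that spike far above the running baseline. The keywords, counts, and threshold below are all invented for illustration; real systems use full NLP pipelines, not keyword counting.

```python
import statistics

# Hypothetical sketch of news-based disease surveillance: keyword counting
# plus a z-score anomaly flag. Keywords and data are made up.
KEYWORDS = ("pneumonia", "outbreak", "unknown virus")

def mentions(articles):
    """Total keyword mentions across one day's articles."""
    return sum(a.lower().count(k) for a in articles for k in KEYWORDS)

def flag_anomalies(daily_counts, z=3.0):
    """Indices of days whose count exceeds mean + z * stdev of the prior days."""
    flagged = []
    for i in range(2, len(daily_counts)):
        prior = daily_counts[:i]
        mu = statistics.mean(prior)
        sd = statistics.pstdev(prior) or 1.0  # avoid a zero stdev
        if daily_counts[i] > mu + z * sd:
            flagged.append(i)
    return flagged

print(mentions(["Hospital reports unknown virus cluster", "Pneumonia cases rise"]))  # 2

counts = [2, 3, 2, 3, 2, 24]   # a sudden spike on the last day
print(flag_anomalies(counts))  # [5]
```

An alerting service would then attach traveler itineraries and flight paths, as the article describes, to guess where a flagged outbreak might spread next.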

More:

Artificial Intelligence (AI) in battling the coronavirus - ELE Times

Posted in Ai | Comments Off on Artificial Intelligence (AI) in battling the coronavirus – ELE Times

Why 2020 Will Be the Year Artificial Intelligence Stops Being Optional for Security – Security Intelligence

Posted: at 9:45 am

Artificial intelligence (AI) isn't new. What is new is the growing ubiquity of AI in large organizations. In fact, by the end of this year, I believe nearly every type of large organization will find AI-based cybersecurity tools indispensable.

Artificial intelligence is many things to many people. One fairly neutral definition is that it's a branch of computer science that focuses on intelligent behavior, such as learning and problem solving. Now that cybersecurity AI is mainstream, it's time to stop treating AI like some kind of magic pixie dust that solves every problem and start understanding its everyday necessity in the new cybersecurity landscape. 2020 is the year large organizations will come to rely on AI for security.

AI isn't magic, but for many specific use cases, the right tool for the job will increasingly involve AI. Here are six reasons why that's the case.

The monetary calculation every organization must make is the cost of security tools, programs, and resources on one hand versus the cost of failing to secure vital assets on the other. That calculation is becoming easier as the potential cost of data breaches grows. And these costs aren't stemming from the cleanup operation alone; they may also include damage to the brand, drops in stock prices, and loss of productivity.

The average total cost of a data breach is now $3.92 million, according to the 2019 Cost of a Data Breach Report. That's an increase of nearly 12 percent since 2014. The rising costs are also global: Juniper Research predicts that the business costs of data breaches will exceed $5 trillion per year by 2024, with regulatory fines included.

These rising costs are partly due to the fact that malware is growing more destructive. Ransomware, for example, is moving beyond preventing file access and toward going after critical files and even master boot records.

Fortunately, AI can help security operations centers (SOCs) deal with these rising risks and costs. Indeed, the Cost of a Data Breach Report found that cybersecurity AI can decrease average costs by $230,000.

The percentage of state-sponsored cyberattacks against organizations of all kinds is also growing. In 2019, nearly one-quarter (23 percent) of breaches analyzed by Verizon were identified as having been funded or otherwise supported by nation-states or state-sponsored actors, up from 12 percent in the previous year. This is concerning because state-sponsored attacks tend to be far more capable than garden-variety cybercrime attacks, and detecting and containing these threats often requires AI assistance.

An arms race between adversarial AI and defensive AI is coming. That's just another way of saying that cybercriminals are coming at organizations with AI-based methods sold on the dark web to avoid setting off intrusion alarms and to defeat authentication measures. So-called polymorphic and metamorphic malware change and adapt to avoid detection, with the latter making more drastic and harder-to-detect changes to its code.

Even social engineering is getting the artificial intelligence treatment. We've already seen deepfake audio attacks in which AI-generated voices impersonating three CEOs were used against three different companies. Deepfake audio and video simulations are created using generative adversarial network (GAN) technologies, in which two neural networks train each other (one learning to create fake data and the other learning to judge its quality) until the first can create convincing simulations.

GAN technology can, in theory and in practice, be used to generate all kinds of fake data, including fingerprints and other biometric data. Some security experts predict that future iterations of malware will use AI to determine whether they are in a sandbox or not. Sandbox-evading malware would naturally be harder to detect using traditional methods.
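The mutual-training dynamic behind GANs can be illustrated with a deliberately tiny toy, far simpler than a real GAN: a one-parameter "generator" proposes samples, a "discriminator" reports how fake each one looks (here, just its distance from the real data's mean), and the generator updates from that feedback until its output blends in. Everything here is an invented teaching sketch, not a working deepfake pipeline.

```python
import random

# Toy adversarial feedback loop (hypothetical; a real GAN uses two trained
# neural networks and gradient descent on a minimax loss).
random.seed(0)
real_data = [random.gauss(5.0, 1.0) for _ in range(1000)]
real_mean = sum(real_data) / len(real_data)

gen_mean = 0.0  # the generator's only "parameter"
for _ in range(500):
    fake = random.gauss(gen_mean, 1.0)  # generator proposes a sample
    feedback = real_mean - fake         # discriminator's signed "fakeness" signal
    gen_mean += 0.05 * feedback         # generator improves from the critique

print(round(gen_mean, 2))  # now close to the real data's mean
```

Scaled up, the same loop with expressive networks on either side is what lets GANs forge voices, faces, and, in principle, fingerprints and other biometric data.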

Cybercriminals could also use AI to find new targets, especially internet of things (IoT) targets. This may contribute to more attacks against warehouses, factory equipment and office equipment. Accordingly, the best defense against AI-enhanced attacks of all kinds is cybersecurity AI.

Large organizations are suffering from a chronic expertise shortage in the cybersecurity field, and this shortage will continue unless things change. To that end, AI-based tools can enable enterprises to do more with the limited human resources already present in-house.

The Accenture Security Index found that more than 70 percent of organizations worldwide struggle to identify what their high-value assets are. AI can be a powerful tool for identifying these assets for protection.

The quantity of data that has to be sifted through to identify threats is vast and growing. Fortunately, machine learning is well-suited to processing huge data sets and eliminating false positives.
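The triage idea can be made concrete with a minimal sketch: score each alert from a handful of features and suppress the low scorers so analysts see only the plausible threats. The features and weights below are hand-set inventions standing in for what a trained model would learn; no real SOC product's API is depicted.

```python
# Hypothetical sketch of ML-assisted alert triage. In production, the weights
# would be learned from labeled incident history, not written by hand.
WEIGHTS = {"failed_logins": 0.4, "off_hours": 0.3, "new_geo": 0.5, "known_host": -0.6}

def score(alert):
    """Linear risk score over boolean alert features."""
    return sum(WEIGHTS[k] for k, present in alert.items() if present)

def triage(alerts, threshold=0.5):
    """Return only the alerts worth a human analyst's time."""
    return [a for a in alerts if score(a) >= threshold]

alerts = [
    {"failed_logins": True, "off_hours": True, "new_geo": False, "known_host": True},
    {"failed_logins": True, "off_hours": True, "new_geo": True, "known_host": False},
]
print(len(triage(alerts)))  # 1: only the second alert (score 1.2) survives
```

Even this crude filter shows the leverage: the first alert, softened by coming from a known host, drops below the threshold and never reaches a human.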

In addition, rapid in-house software development may be creating many new vulnerabilities, but AI can find errors in code far more quickly than humans. Embracing rapid application development (RAD) requires the use of AI for bug fixing.

These are just two examples of how growing complexity can inform and demand the adoption of AI-based tools in an enterprise.

There has always been tension between the need for better security and the need for higher productivity. The most usable systems are not secure, and the most secure systems are often unusable. Striking the right balance between the two is vital, but achieving this balance is becoming more difficult as attack methods grow more aggressive.

AI will likely come into your organization through the evolution of basic security practices. For instance, consider the standard security practice of authenticating employee and customer identities. As cybercriminals get better at spoofing users, stealing passwords and so on, organizations will be more incentivized to embrace advanced authentication technologies, such as AI-based facial recognition, gait recognition, voice recognition, keystroke dynamics and other biometrics.

The 2019 Verizon Data Breach Investigations Report found that 81 percent of hacking-related breaches involved weak or stolen passwords. To counteract these attacks, organizations can leverage sophisticated AI-based tools that enhance authentication. For example, AI tools that continuously estimate risk levels whenever employees or customers access resources could prompt identification systems to require two-factor authentication when the AI component detects suspicious or risky behavior.
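The risk-driven step-up described above reduces to a simple pattern: the session's context feeds a continuous risk score, and crossing a threshold triggers a second factor. The signal names and weights below are hypothetical illustrations, not any vendor's actual model.

```python
# Hypothetical sketch of continuous risk scoring driving step-up
# authentication. Real systems would learn these weights per user.
def risk_score(session):
    score = 0.0
    if session.get("new_device"):        score += 0.4
    if session.get("unusual_location"):  score += 0.3
    if session.get("odd_hour"):          score += 0.2
    if session.get("typing_mismatch"):   score += 0.3  # keystroke-dynamics signal
    return score

def requires_second_factor(session, threshold=0.5):
    """True when the session looks risky enough to demand a second factor."""
    return risk_score(session) >= threshold

print(requires_second_factor({"new_device": True, "unusual_location": True}))  # True
print(requires_second_factor({"odd_hour": True}))                              # False
```

A familiar device at a familiar hour sails through, while an unrecognized device from an unusual location is challenged, which is how AI lets security tighten without burdening every login.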

A big part of the solution going forward is leveraging both AI and biometrics to enable greater security without overburdening employees and customers.

One of the biggest reasons why employing AI will be so critical this year is that doing so will likely be unavoidable. AI is being built into security tools and services of all kinds, so it's time to change our thinking about AI's role in enterprise security. Where it was once an exotic option, it is quickly becoming a mainstream necessity. How will you use AI to protect your organization?

Visit link:

Why 2020 Will Be the Year Artificial Intelligence Stops Being Optional for Security - Security Intelligence

Posted in Ai | Comments Off on Why 2020 Will Be the Year Artificial Intelligence Stops Being Optional for Security – Security Intelligence
