AI Could Start Third World War: Alibaba’s Jack Ma (BABA) – Investopedia

Alibaba Group Holding Limited (BABA) chairman Jack Ma is preparing for the Third World War. Or at least it would seem that way from his comments to television network CNBC during an interview. According to Ma, advances in technology have caused world wars. "The first technology revolution caused World War I. The second technology revolution caused World War II. This is the third technology revolution," he said. But he did not outline the possible causes for this war.

Ma's interview was wide-ranging, covering disparate topics from the future of humanity to the difference between wisdom and intelligence. He sketched the contours of a future world disrupted by artificial intelligence (AI) trends. According to Ma, the next 30 years will be marked by "very painful" changes for humanity as it enters an age defined by data and artificial intelligence. Ma said that humans will win in a war with machines. This is because machines do not possess wisdom, which comes from the heart. (See also: Alibaba's Ma: We're Not Looking to Invade US.)

That said, the age of machines will witness far-reaching changes. As machines take over labor-intensive tasks, the working week will diminish to 16 hours, Ma predicted. The extra leisure time will create a mobile population that will work across borders and put a stop to the backlash against globalization. "The only thing is how can we make trade more inclusive, knowledge more inclusive, and this is how we can deal with the instability of the world (that machines will create)," he said. Governments will have to make "hard choices," Ma added. (See also: Jack Ma: Success Story.)

Among those choices will be the decision to open up borders to enable cross-border e-commerce. According to current rules, it is difficult for small businesses to trade across borders using e-commerce sites due to a phalanx of customs and duty provisions. Ma's e-commerce juggernaut is leading the charge for international e-commerce and already has a thriving business in Tmall, its cross-border e-commerce site that sells overseas goods in China. (See also: Special Delivery: Alibaba Wants Faster Traffic to Europe.)

Original post:

AI Could Start Third World War: Alibaba's Jack Ma (BABA) - Investopedia

Soccer looks to AI for an edge: Could an algorithm really predict injuries? – ESPN

Artificial intelligence can drive a car, curate the films and documentaries that you watch, develop chess programmes capable of beating grandmasters and use your face to access your phone. And, one company claims, it can also predict when footballers are about to suffer an injury.

Off the field, football has gone through a huge transformation in the 21st century, with the emergence of GPS-driven player performance data in the early 2000s, followed in the 2010s by the advanced analytics that now form a major part of every top club's player recruitment strategy. Just last month, Manchester City announced the appointment of Laurie Shaw to a new post of lead AI scientist at the Etihad Stadium, taking him from his role as research scientist and lecturer at Harvard University.

Football has always searched out innovations to make small, but crucial, differences. Many have become staples of the game, including TechnoGym to improve biomechanics, IntelliGym to improve cognitive processing and cryogenic gym sessions to ease the strain on muscles. Others have fallen by the wayside. Anyone remember nasal strips or the ball-bending properties of Predator boots?

The use of AI to predict when players are on the brink of suffering an injury could prove to be the next game-changing innovation that becomes a key component at the elite end of the game.

In a game dominated by clubs wanting to discover the extra 1% in marginal gains, keeping a player fit is arguably the most important challenge facing any coach. A depleted squad can lead to negative results and, if a team suffers too many, the manager or coach is generally the one who pays the price. This season has been more challenging than most, with the COVID-19 pandemic leading to fixtures being crammed into a reduced time frame, and players being forced to play 2-3 games a week on a regular basis.


The toll on players' fitness is borne out by the injury lists. Crystal Palace and Southampton fulfilled their midweek Premier League fixtures with 10 first-team squad members sidelined. Champions Liverpool lost to Brighton on Wednesday with eight absentees, including long-term injury victims Virgil van Dijk, Joe Gomez and Joel Matip. Research by premierinjuries.com shows that, up to and including match-week 21 of the Premier League, there has been a five percent increase in time lost to injuries this season. At the same stage last season, there were 356 "time-loss absences" (a player missing at least one league game), but the number has jumped to 374 this time around. With COVID-related absences, the number is 435.

Liverpool had suffered 14 time-loss absences at this stage of last season, but they're now up to 29 in 2020-21. Their league position -- fourth place, seven points adrift of top spot -- suggests they are paying a price for their sharp increase in players lost to injury.

But finding reliable injury prevention technology is the holy grail of sports scientists and fitness coaches. By November, ESPN reported a 16% rise in muscle injuries in the Premier League compared to the same stage last season. So can AI successfully predict when players are about to be injured?

Since the start of the 2017-18 season, La Liga side Getafe have partnered with the California-based AI company Zone7 to break down performance data and predict when players are at risk of injury. In simple terms, clubs like Getafe in Spain, Scottish Premiership leaders Rangers and MLS sides Real Salt Lake and Toronto FC send their training and match data to Zone7, who analyze it using their algorithm and send back daily emails with information about players who may be straying close to the so-called "danger zone."

Between the start of the 2017-18 season and March 2020, when La Liga was suspended due to the COVID-19 pandemic, Getafe recorded a substantial reduction in injuries.

"Three seasons ago, during the first year with Zone7, we saw a reduction of 40% in injury volume," Javier Vidal, the Getafe's Head of Performance, said. "As the Zone7 engine became more reliable and we had access to more data in the second year, we saw a reduction of 66 percent in the volume of injuries.

"This means that of every three injuries we had two seasons ago, we now have only one."

Jordi Cruyff, the former Barcelona and Manchester United midfielder, told ESPN that he has become a "minor, minor investor" in Zone7 after trialling the AI tool during his time as sporting director at Maccabi Tel Aviv in 2017. But he admits that he was only convinced by the AI technology after monitoring the data, even though Maccabi's then-coach declined to use it.

"I presented the tool to our then-coach and he wasn't too interested." Cruyff told ESPN. "So for the four to five months the coach was in charge, he would follow his own plan, but we would still give our performance data to the company, which they would run through their algorithm. I would then receive an email before training each day with which players were at risk and it actually predicted five of seven injuries.

"I thought 'wow.' Once or twice could be a coincidence, but catching five out of seven muscular injuries is a different thing. I would wait until after training to be told if a player had been injured. I would then go back to look at my email and there was the name. We were lucky in some ways that the coach wasn't interested in it because it gave us the chance to test it.


"It was the perfect test, although I wish the coach would have listened, because then we would have avoided some injuries."

Tal Brown, who founded Zone7 with Eyal Eliakim in 2017 after the two worked together in the Israeli Defense Forces Intelligence Corps, spoke to ESPN to explain how AI can be used to detect injury risk.

"Every single player is now using a GPS vest, they are being tested for strength and flexibility at their clubs, many teams distribute watches to their players to measure sleep, so the reality is that somebody working for a club needs to look at two dozen dashboards every day -- multiplied by 20 players, multiplied by six days a week," Brown said via Zoom. "It is becoming a puzzle that a human brain wasn't really meant to solve.

"We can use a chess metaphor. Chess programmes used to be pretty simplistic and the experts could beat them, but today, a Google chess programme is unbeatable. It's not because Google has taught that chess programme 10,000 equations manually, it is because the programme has automatically studied every recorded chess game played in the history of mankind and, using AI, has developed its own understanding and interpretation.

"We are not there yet as a company. We don't have access to every single football injury that ever occurred, but we are getting much better and there will be a point where a programme focused on injury risk will out-perform humans in interpreting data."

More than 50 clubs across the world now use Zone7's AI programme. Many wish to remain anonymous, in an effort to protect any competitive advantage that the tool may provide -- football clubs are notoriously protective of such proprietary data -- while others simply do not wish to discuss any pros or cons they have discovered while using it. Despite repeated attempts by ESPN to speak to Real Salt Lake and Toronto, neither MLS team responded to enquiries.


Rangers, 23 points clear at the top of the Scottish Premiership and on course for a first domestic title since 2011, adopted Zone7's AI tool last summer and, while keen to make a broader assessment after a full season of use, they believe it's been a valuable addition to their injury prevention strategy.

"I believe AI, coupled with the experience levels of those using it, will eventually become a bedrock within clubs' decision-making as data and technology advances," Jordan Milsom, Rangers' head of performance told ESPN. "Given our players had been exposed to one of the longest lockdowns of all [93 days] and the unknowns associated with such prolonged layoffs, we felt investing in such a system may well provide another layer of support for how we managed the players on what would clearly be a challenging season.

"We haven't used the system long enough compare season-to-season analysis, and it's important to understand we are a department that is data-informed and not data-driven. But it is my opinion that if such systems are used in this way, it can have many positive benefits."

Rangers manager Steven Gerrard has praised the club's fitness and sports science department, saying in December that the team were enabling his players to "hit top numbers," and Milsom says that the AI data is helping to inform player rotation, even to the extent of highlighting which players should be substituted during games.

"All of our GPS and heart rate training load data from sessions and games is uploaded automatically into the Zone7 system," Milsom said. "The platform digests this, performs its modelling and provides us with risk alerts each day for players.

"Generally, there would be 1-2 players who may be flagged [for further monitoring]. Sometimes, these flags relate to overload -- other times it's under-load. This allows us to have a deeper dive into why specifically they are at risk. This information will feed into our general staff discussions to determine if any further areas support this information. As we typically compete every 3-4 days, if risk is associated with overload, I can often use that information to help support in-game substitutions as a means of maximising player availability, whilst potentially reducing risk through reduced minutes if and when possible."

The key to the success of the AI tool is the amount of data Zone7 are able to upload and analyse. While Brown stresses that "nobody ever sees your data. We don't own it and we're not allowed to retain a copy of it, post-relationship, so it's very strict," the volume of information provided by each client club is used to create a huge database that then enables the programme to predict injury risk.

"We can use 200 million hours of football data because we are working with 50-60 clients," Brown said. "As a result, we have 50-60 times more data than a typical team has, so the data set is very large. But what is important is that it's not just the injury in the sense of the date it occurred and what happened, it is every single day of training and games and medical data leading to the injury, going back as much as a year prior.

"That amount of information gives us the ability to look at the daily data leading to an incident and, using AI and deep learning, to find patterns that repeat themselves before hamstring injuries or groin injuries or knee injuries happen. That's how it works.

"If you are trying to forecast an event, which is an injury, you need to have a big database of incidents. A typical team would have something like 30-40 incidents a year for a squad, so multiply that by several years of historical data."


ESPN has spoken to people in sports science who believe that AI is a positive innovation if used alongside existing methods. "Their results are impressive," said one sports scientist, who has worked with several Premier League clubs in the past and spoke on condition of anonymity. "The issue is the level of individualisation with injury results is high, so lots of variant data only gives you a small answer. Therefore, it definitely has to be a blended approach."

Zone7's AI tool is not restricted to sports. Using Garmin wearable devices and Zone7's platform, medical staff in Israel are having their health and well-being monitored during the COVID-19 pandemic, and there is a similar project with a major hospital in New York City. There are also projects ongoing with military and special forces. In football, however, Getafe are the best example of AI being used successfully to improve the fitness record of a team, as explained by head of performance Vidal.

"It would take 200 people all day to analyse the data, but with this, I get the recommendations within minutes." Vidal said. "We use our own high-quality ultrasound to clinically to evaluate players that show predefined risk indications. After starting to use Zone7, some players would report feeling fine despite the engine identifying immediate risk for them.

"In many cases, our ultrasound tests confirmed muscular damage, allowing us to address this before the injury occurred. These players could have sustained injury but for the AI detection."

Cruyff, now coaching in China with Shenzhen FC, believes AI can become a key component for teams, but he makes clear that AI alone cannot be regarded as the silver bullet to prevent all injuries.

"It's not a deciding tool," he said. "You can see a risk of injury and decide to take the risk or not. It's part of the modernisation of sport. You have so many things -- video analysts, GPS tracking devices -- and I think this is a part that maybe we missed, but it is coming, little by little."

See the original post:

Soccer looks to AI for an edge: Could an algorithm really predict injuries? - ESPN

3 Tips to Find a Good AI Partner for Your Recovery – Entrepreneur

May 7, 2020 | 4 min read

Opinions expressed by Entrepreneur contributors are their own.

Savvy business owners have already begun to prepare for the new normal, taking advantage of the opportunity to arm themselves with new tools for the future. That's not to say recovery will be easy for everyone, or even that everyone will recover. In April, Main Street America reported that 7.5 million small businesses could close permanently within five months. Nearly half of those businesses could close permanently in just two months, especially if they don't receive financial assistance.

Businesses can't expect to jump back into the driver's seat and pick up where they left off. Future success demands present growth, and in a future filled with smarter technologies, artificial intelligence stands out as the most vital investment for businesses of all sizes.

Tight budgets currently prevent many founders from pursuing their vision. For many, scraping by from one day to the next counts as a win. This state of affairs can't continue for long, though. Something has to give. Businesses that attempt to hold tight instead of pressing forward will find their grip slipping as a new and tougher market lets them fall.

Rather than take a conservative wait-and-see approach, entrepreneurs should do what they do best in these situations: innovate, explore and challenge the status quo. Tools like artificial intelligence empower even the smallest businesses to scale their capabilities beyond what they could accomplish on their own, enabling them to get more out of limited resources.

What better way to survive and thrive than to implement solutions that yield better returns on smaller investments?

Related: 6 AI Business Tools for Entrepreneurs on a Budget

During times of stress and complexity, entrepreneurs don't have to go it alone. Artificial intelligence tools provided by competent and helpful partners can help businesses do more with their limited budgets and push through any challenge. Finding the right partner isn't always easy, but with a little preparation and some digging, every company can identify and implement an AI solution to make life a little easier.

Check out these helpful tips to find the right AI partner:

AI tools remain decades away from human-level intelligence. Most business owners can perform all the same tasks an AI tool can perform. The difference is that competent AI can help companies prioritize and streamline workflows, saving humans time as the tools take care of the details.

Related: How Entrepreneurs Can Use AI to Boost Their Business

CureMetrix, an AI business that operates in the medical industry, cautions radiology professionals against the dangers of burnout. With so many important decisions to make, people can quickly succumb to analysis paralysis if left unchecked. When evaluating potential AI partners, look for someone who can help the business and its workers prioritize and manage workloads.

When vetting partners, look into which tool will be most beneficial for you. As a branch of AI, machine learning involves teaching systems to learn from data sets and make independent decisions based on that information. Artificial intelligence includes a host of functions, machine learning included, so business owners should weigh their needs against any offer from a potential partner.

Not sure what type of solution to go for? Business AI provider D-Labs put together this helpful guide on how to evaluate AI solutions for specific business needs.

Businesses can use AI for all sorts of cool things. Right now, though, most companies can't afford to splurge on luxuries. Microsoft highlights a few different uses of AI and clarifies exactly how different implementations impact the businesses using them.

For example, a business with interactive customer-service AI could answer basic questions and provide simple services without the need for human intervention. Chatbots are a popular version of this type of AI. Customer relationship management (CRM) solutions empower companies to track and manage communications to maximize time spent on the most promising leads.

Microsoft also mentions the benefits of cybersecurity AI, which may not translate directly to revenue but can save business owners thousands of dollars in avoided mischief. With hacker activity up, cybersecurity investments may be prudent for companies that can afford them.

Related: 3 Ways You Can Use Artificial Intelligence to Grow Your Business Right Now

Why wait until the pandemic passes to start looking into smarter tools? By adding the right technologies and partners now, business owners can get ahead of the curve, equipped with the ability to earn more money with fewer resources. No one knows how long the downturn in the economy will last, so the sooner businesses invest in themselves, the more impactful the returns will be.

See original here:

3 Tips to Find a Good AI Partner for Your Recovery - Entrepreneur

Ultria Unveils Orbit AI, the Latest Addition to Its AI Powered CLM Capabilities – The Trentonian

PRINCETON, N.J., Oct. 25, 2019 /PRNewswire/ -- Ultria, a leading provider of Enterprise Contract Lifecycle Management in the cloud, unveiled ULTRIA ORBIT AI, a next-generation Artificial Intelligence solution, expanding its AI-powered Contract Management portfolio.

"Orbit AI is a complementary solution for our contract management application,"Arthur Raguette, Executive Vice President, Ultria said. "We are proud to introduce Orbit as a part of our contract management suite and to provide additional value to our customer's CLM experience. We expect Orbit to be a game-changer and take contract management across the horizon."

Ultria Orbit is designed to assist users across the contract lifecycle, from drafting and assembling contracts and streamlining workflows to post-contract compliance and obligations management. Orbit's abilities to extract metadata and identify clause match conditions are powered by proprietary Artificial Intelligence algorithms.

Orbit embodies three different personas: a Guide, an Explorer, and a Protector, all powered by Ultria's Artificial Intelligence capabilities to streamline your contract management journey.

Orbit Guide steers through key stages of intake request, authoring, assembling, and negotiating.

Orbit Explorer works as an intelligent companion for more intuitive searching with more accurate results and faster reporting and analytics.

Orbit Protector safeguards organizations from the complexities of a rapidly shifting regulatory landscape, ever-increasing commercial compliance risks and post-award commercial and compliance management.

Read more about Ultria Orbit here and harness Orbit's AI capabilities to enhance your team's performance and transform your contracts into dynamic, intuitive, and smart legal documents for informed decision making.

About Ultria:

Ultria develops and licenses Artificial Intelligence powered applications, including its flagship Contract Lifecycle Management solution for the enterprise. Ultria CLM is a proven, scalable, SaaS-deployed Contract Lifecycle Management system that leverages Artificial Intelligence and Machine Learning to be robustly and rapidly provisioned in today's complex business landscapes. Ultria Orbit, Ultria's Artificial Intelligence offering, enables teams to analyze contract terms and extract metadata from any contract. For information on Ultria, visit www.ultria.com.

Press Contact: Rewa Kulkarni, Marketing and Public Relations, Ultria. Email: rewa.kulkarni@ultria.com


Here is the original post:

Ultria Unveils Orbit AI, the Latest Addition to Its AI Powered CLM Capabilities - The Trentonian

AI will help us download meeting notes to our brains by 2030 – VentureBeat

The internet is overflowing with tips on how to hack your health. From increasing cognitive function by drinking butter-spiked coffee to tracking sleep, stress, and activity levels with increasingly sophisticated fitness wearables, ours is a culture obsessed with optimizing performance. Combining this ethos with recent breakthroughs in artificial intelligence, it's practically inevitable that the next frontier in achieving superhuman status lies in the rapidly developing field of brain augmentation.

Artificial intelligence has already proven its value in making software more intuitive and user friendly. From voice-activated personal assistants like Alexa and Siri becoming the new norm, to smarter app authentication through facial recognition technology, we have reached the point where people are starting to trust that the machines are here to improve our lives. The science-fiction-based fear of bots taking over is being put to rest as consumers embrace the ease and enhanced security that AI brings to our daily devices. Now that it has nestled itself comfortably inside our smartphones, scientists are aiming higher with the next device hack: the human brain.

Visionary entrepreneurs including Elon Musk and Bryan Johnson have teamed up with scientists around the world to make brain augmentation a reality sooner than you may have thought possible. Simply put, the goal is to enhance intelligence and repair damaged cognitive abilities through brain implants. Duke University senior researcher Mikhail Lebedev, who recently published a comprehensive collection of 150 brain augmentation research papers and articles, is confident that brain augmentation will be an everyday reality by 2030.

Lebedev's main focus of research is developing a device that can be fully implanted in the brain. Creating a power source and wireless communication system is a huge challenge, but one that Elon Musk is also working on. Musk made headlines earlier this year with the launch of Neuralink, a company working on the development of what science fiction fans refer to as "neural lace," or the merging of the human brain with software to optimize output of both biological and technological functioning. Musk hopes to offer a new treatment for severe brain traumas, including stroke and cancer lesions, in about four years.

With Neuralink still in its early stages, other Silicon Valley heavy hitters are eager to crack the code of brain augmentation. Braintree founder Bryan Johnson invested more than $100M of personal funding to launch Kernel, a startup staffed by neuroscientists and engineers working to reverse the effects of neurodegenerative diseases such as Parkinson's through the creation of a neuroprosthetic in the form of a tiny embeddable chip. Scientists admit that there is much research on how neurons function and interact that needs to happen before neural code can be written by computers, but the resources and attention garnered by some of today's brightest entrepreneurs are sure to accelerate the process.

While we wait for technology to advance to the level of creating a fully implantable brain enhancement device, the short term breakthroughs we can expect to see from AI brain augmentation revolve around sensory augmentation.

The use of electronic stimuli to trigger the brain into producing artificial sensations has huge possibilities for improving damaged cognitive functioning. Vision could be triggered for the blind to experience sight for the first time. Sensory touch could be stimulated in paralysed limbs. And cognitive functions that tend to degenerate with age, such as memory, could be optimized.

The implications are even larger than repairing cognitive functioning, though. In 2013, Miguel Nicolelis, a neurobiologist at Duke University, successfully led an experiment demonstrating a direct communication linkage between brains in rats. This first successful brain-to-brain interface allowed rats to electronically share information on how to respond to stimuli and the implications for humans could be staggering. From sharing memories to information, altering our shared consciousness is a more far-flung but nevertheless attainable goal of AI. Imagine all the collective suffering in office conference rooms that could be eliminated if meetings could be directly downloaded to our brains!

The field of AI-based brain augmentation represents the biggest evolutionary step forward in mankind's history. Creating technologies to augment and enhance human intelligence holds the promise of eliminating diseases and providing a higher quality of life through optimizing, well, everything. Just think: the smartphone was just a crazy idea until the iPhone hit the market ten years ago. Now 44 percent of the world's population owns a smartphone, along with the ability to expand its computing powers exponentially by connecting to the cloud.

Famed futurist and Google executive Ray Kurzweil predicts that by the 2030s, nanobots will enter our brains via capillaries, providing a fully immersive virtual reality that connects our neocortex to the cloud, expanding our brain power similar to how our smartphones tap into the cloud for outsized computing power.

If Kurzweil's incredible track record of predicting emerging technologies is any indicator (he's been right about 86 percent of his predictions since the early '90s), then we can expect to add a whole new meaning to the phrase "head in the clouds." We're living in an exciting age where what was once science fiction is becoming reality, and having a head in the clouds will no longer mean being lost in daydreams, but plugged into the enhanced intelligence of a superbrain.

Andrew DiCosmo is the CTO for Blackspoke, a company that specializes in IT consulting to the Federal Government.

See the original post:

AI will help us download meeting notes to our brains by 2030 - VentureBeat

Teaming with AI: How Microsoft is taking on Zoom on virtual background front – The Financial Express

When governments the world over announced lockdowns, the hunt for the best collaboration and video-calling apps had begun for most users. There were video calling apps for fun, such as Houseparty, and then there were business apps. But competition in the space was limited. Zoom captured a large share of the market with its user interface and accessible features. The disaster that followed in terms of the company trying to keep up with demand and routing calls through Chinese servers gave space to the likes of Microsoft and Google to add more users. But as work from home becomes a norm, and people get attuned to living with video calling apps, companies are incorporating more features to keep their userbase. One of the biggest highlights for all apps has been the use of artificial intelligence and machine learning to attract users. The latest addition to this is Microsoft.

What has Microsoft introduced?

Microsoft last week announced features that will help users enable Together mode, where they can sit together in a different environment. So, with a virtual background, you can see everyone sitting right in front of you in a classroom, library or coffee house setting, thereby making the whole experience more personal. Microsoft is also trying to incorporate a feature which allows you to adjust brightness and other parameters of the video.

How is it different from virtual backgrounds?

Zoom has had virtual backgrounds for a long time now. Microsoft is a late entrant, but the concept is the same. When Zoom uses a virtual background, it estimates the depth of the image to superimpose other backgrounds on it. The machine-learning algorithm then identifies the human component and changes the rest. The technology is not perfect: adjust the camera too fast and it will not work. In this case, Microsoft is using the same technology to extract you from the image and put you in the same room along with friends and colleagues sitting behind a desk or a table. That way you can see all the participants in one window.
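Conceptually, what both products rely on is a per-pixel "person" mask followed by compositing. The sketch below is a generic Python illustration of that step, not Zoom's or Microsoft's implementation; `segment_person` is a hypothetical placeholder standing in for a trained segmentation network.

```python
import numpy as np

def segment_person(frame: np.ndarray) -> np.ndarray:
    """Hypothetical placeholder: a real app would run a trained segmentation
    model here and return, for each pixel, the probability that it belongs
    to a person (an array of shape (H, W) with values in [0, 1])."""
    raise NotImplementedError

def composite(frame: np.ndarray, background: np.ndarray) -> np.ndarray:
    """Blend the person from `frame` onto `background` using a soft mask."""
    mask = segment_person(frame)                 # (H, W), 1.0 = person
    alpha = np.clip(mask, 0.0, 1.0)[..., None]   # add channel axis for broadcasting
    return (alpha * frame + (1.0 - alpha) * background).astype(frame.dtype)
```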

What is Google Meet doing?

Google is using AI differently. Instead of using it for video, it is using the technology to cut out background noise. This active noise filtering means that you can only hear the sound of the speaker, and every other sound gets muted.



Follow this link:

Teaming with AI: How Microsoft is taking on Zoom on virtual background front - The Financial Express

Weird AI illustrates why algorithms still need people – The Next Web

These days, it can be very hard to determine where to draw the boundaries around artificial intelligence. What it can and can't do is often not very clear, nor is where its future is headed.

In fact, there's also a lot of confusion surrounding what AI really is. Marketing departments have a tendency to somehow fit AI in their messaging and rebrand old products as AI and machine learning. The box office is filled with movies about sentient AI systems and killer robots that plan to conquer the universe. Meanwhile, social media is filled with examples of AI systems making stupid (and sometimes offensive) mistakes.

"If it seems like AI is everywhere, it's partly because 'artificial intelligence' means lots of things, depending on whether you're reading science fiction or selling a new app or doing academic research," writes Janelle Shane in You Look Like a Thing and I Love You, a book about how AI works.

Shane runs the famous blog AI Weirdness, which, as the name suggests, explores the weirdness of AI through practical and humorous examples. In her book, Shane taps into her years-long experience and takes us through many examples that eloquently show what AI (or, more specifically, deep learning) is and what it isn't, and how we can make the most out of it without running into the pitfalls.

While the book is written for the layperson, it is definitely a worthy read for people who have a technical background and even machine learning engineers who don't know how to explain the ins and outs of their craft to less technical people.

In her book, Shane does a great job of explaining how deep learning algorithms work. From stacking up layers of artificial neurons, feeding examples, backpropagating errors, using gradient descent, and finally adjusting the network's weights, Shane takes you through the training of deep neural networks with humorous examples such as rating sandwiches and coming up with knock-knock "who's there?" jokes.
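For readers who want to see those ingredients concretely, here is a minimal NumPy sketch of the loop Shane describes: a tiny network learns to "rate sandwiches" from made-up ingredient features, errors are backpropagated through the layers, and gradient descent nudges the weights. The data and network size are toy assumptions chosen only to make the steps visible, not anything from the book.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 3 "ingredient" features per sandwich, and a rating to predict.
X = rng.random((200, 3))
y = (0.6 * X[:, 0] + 0.3 * X[:, 1] - 0.2 * X[:, 2] + 0.3).reshape(-1, 1)

# One hidden layer of 8 artificial neurons.
W1, b1 = rng.normal(0, 0.5, (3, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)
lr = 0.1

for epoch in range(2000):
    # Forward pass: stacked layers of weighted sums and nonlinearities.
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y                        # how wrong each rating is
    # Backpropagation: push the error back through each layer.
    dW2 = h.T @ err / len(X)
    db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)      # derivative of tanh
    dW1 = X.T @ dh / len(X)
    db1 = dh.mean(axis=0)
    # Gradient descent: nudge every weight a little downhill on the loss.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```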

All of this helps us understand the limits and dangers of current AI systems, which have nothing to do with super-smart terminator bots who want to kill all humans or software systems planning sinister plots. "[Those] disaster scenarios assume a level of critical thinking and a humanlike understanding of the world that AIs won't be capable of for the foreseeable future," Shane writes. She uses the same context to explain some of the common problems that occur when training neural networks, such as class imbalance in the training data, algorithmic bias, overfitting, interpretability problems, and more.

Instead, the threat of current machine learning systems, which she rightly describes as "narrow AI," is to consider them too smart and rely on them to solve problems broader than their scope of intelligence. "The mental capacity of AI is still tiny compared to that of humans, and as tasks become broad, AIs begin to struggle," she writes elsewhere in the book.

AI algorithms are also very unhuman and, as you will see in You Look Like a Thing and I Love You, they often find ways to solve problems that are very different from how humans would do it. They tend to ferret out the sinister correlations that humans have left in their wake when creating the training data. And if there's a sneaky shortcut that will get them to their goals (such as pausing a game to avoid dying), they will use it unless explicitly instructed to do otherwise.

"The difference between successful AI problem solving and failure usually has a lot to do with the suitability of the task for an AI solution," Shane writes in her book.

As she delves into AI weirdness, Shane sheds light on another reality about deep learning: it can sometimes be a needlessly complicated substitute for a commonsense understanding of the problem. She then takes us through a lot of other overlooked disciplines of artificial intelligence that can prove to be equally efficient at solving problems.

In You Look Like a Thing and I Love You, Shane also takes care to explain some of the problems that have been created as a result of the widespread use of machine learning in different fields. Perhaps the best known is algorithmic bias, the intricate imbalances in AI's decision-making which lead to discrimination against certain groups and demographics.

There are many examples where AI algorithms, in their own weird ways, discover the racial and gender biases of humans and copy them in their decisions. And what makes it more dangerous is that they do it unknowingly and in an uninterpretable fashion.

"We shouldn't see AI decisions as fair just because an AI can't hold a grudge. Treating a decision as impartial just because it came from an AI is known sometimes as mathwashing or bias laundering," Shane warns. "The bias is still there, because the AI copied it from its training data, but now it's wrapped in a layer of hard-to-interpret AI behavior."

This mindless replication of human biases becomes a self-reinforced feedback loop that can become very dangerous when unleashed in sensitive fields such as hiring decisions, criminal justice, and loan applications.

"The key to all this may be human oversight," Shane concludes. "Because AIs are so prone to unknowingly solving the wrong problem, breaking things, or taking unfortunate shortcuts, we need people to make sure their brilliant solution isn't a head-slapper. And those people will need to be familiar with the ways AIs tend to succeed or go wrong."

Shane also explores several examples in which not acknowledging the limits of AI has resulted in humans being enlisted to solve problems that AI can't. Also known as "The Wizard of Oz effect," this invisible use of often-underpaid human bots is becoming a growing problem as companies try to apply deep learning to anything and everything and are looking for an excuse to put an AI-powered label on their products.

"The attraction of AI for many applications is its ability to scale to huge volumes, analyzing hundreds of images or transactions per second," Shane writes. "But for very small volumes, it's cheaper and easier to use humans than to build an AI."

All the egg-shell-and-mud sandwiches, the cheesy jokes, the senseless cake recipes, the mislabeled giraffes, and all the other weird things AI does bring us to a very important conclusion. "AI can't do much without humans," Shane writes. A far more likely vision for the future, even one with the widespread use of advanced AI technology, is one in which AI and humans collaborate to solve problems and speed up repetitive tasks.

While we continue the quest toward human-level intelligence, we need to embrace current AI as what it is, not what we want it to be. "For the foreseeable future, the danger will not be that AI is too smart but that it's not smart enough," Shane writes. There's every reason to be optimistic about AI and every reason to be cautious. It all depends on how well we use it.

This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech and what we need to look out for. You can read the original article here.

Published July 18, 2020 13:00 UTC

Read the original post:

Weird AI illustrates why algorithms still need people - The Next Web

Big Tech Is Battling Cyberthreats With AI – Motley Fool

Cybercrime is a worldwide epidemic, and frequent headlines attest to the need for novel solutions. Research firm Cybersecurity Ventures estimates that the global cost of cybercrime will reach $6 trillion annually by 2021, double the $3 trillion cost in 2015. It further reports that spending on products and services to defend against cybercrime will exceed $1 trillion over the next five years. With the number of hacks and infiltrations on the rise, there will be a growing shortage of experts to fill cybersecurity positions, resulting in an estimated 3.5 million open positions in the field by 2021.

The most recent example of a widespread threat is the ransomware WannaCry, which spread to 150 countries and over 200,000 organizations. Infected computers were encrypted while hackers demanded payments from users for the release of their data.

Experts are increasingly turning to artificial intelligence (AI) in the fight against cybercrime for its ability to analyze data more quickly than its human counterparts and potentially block malicious code before it gains access or causes significant damage. Microsoft Corporation (NASDAQ:MSFT) just joined a growing number of tech companies that are banking on AI to bridge the gap in the battle for the digital domain.

Hexadite can respond to cyberthreats in minutes. Image source: Pixabay.

Microsoft has acquired cybersecurity start-up Hexadite, which specializes in the use of AI to identify and respond to cyberattacks. This acquisition will allow the company to expand its existing capabilities and portfolio of security products. Most cyberattacks are the result of sophisticated algorithms running protocols to identify vulnerabilities and exploit them. Hexadite claims that its AI-based solution reduces the time necessary to respond to cyberincidents by 95%. Its system can launch multiple "probes" and identify breaches in real time, allowing either a human or automated response to begin within minutes.

Microsoft has acquired a number of start-ups in recent years in an effort to increase the security of its Azure cloud-computing services. In 2014, Microsoft picked up enterprise-security company Aorato, which uses machine learning to create a behavior-monitoring firewall to quickly identify anomalies in data networks. By continuously reviewing and updating its interpretation of normal user behavior, the system can detect unusual or suspicious activity within a company's network before threats are realized. This latest move by the software giant is part of a broader trend that's changing the way companies combat cyberincursions.
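Aorato's system itself is proprietary, but the behavior-monitoring idea, learning what normal activity looks like and flagging deviations, can be illustrated with an off-the-shelf anomaly detector. The sketch below uses scikit-learn's IsolationForest on invented per-user activity features; the features and thresholds are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row summarizes one user-day: logins, distinct hosts touched,
# megabytes downloaded, minutes of after-hours activity (illustrative only).
rng = np.random.default_rng(1)
normal_days = rng.normal(loc=[5, 3, 200, 10], scale=[2, 1, 50, 5], size=(1000, 4))

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_days)

# A day touching far more hosts and moving far more data than usual should be
# scored as an outlier: predict() returns -1 for anomalies, 1 for normal days.
suspicious_day = np.array([[6, 40, 9000, 300]])
print(detector.predict(suspicious_day))
```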

E-commerce giant Amazon.com, Inc. (NASDAQ:AMZN) acquired AI-based cybersecurity company Harvest.ai, which uses analytics to spot unusual behavior by users and within key business systems. The system determines the importance of, and assigns values to, critical documents, data, and source code to detect and eliminate data breaches. The company's flagship MACIE system provides real-time monitoring and detects unauthorized access to prevent targeted attacks and data leaks. It is thought that Amazon used Harvest.ai to strengthen the security of its Amazon Web Services cloud-computing service.

Microsoft will roll out Hexadite to commercial Windows 10 customers. Image source: Getty Images.

Tech giant International Business Machines Corporation (NYSE:IBM) embarked on a mission in mid-2016 to train its AI-based cognitive-computing system, Watson, in cybersecurity. The company partnered with a number of universities in a yearlong research project to gain access to data on previous security threats. By processing this data, Watson would be able to discover similar events based on the information contained in the volumes of security data. IBM later expanded the project to more than 40 organizations from a variety of industries to further Watson's capabilities in the area.

In early 2017, the company announced that after ingesting over 1 million security documents, Watson for Cyber Security would be available to its customers to enhance their cybersecurity proficiency.

Alphabet's (NASDAQ:GOOG) (NASDAQ:GOOGL) Google has been a pioneer in AI and has been at the forefront of numerous AI technologies, from autonomous driving to designing a chip that may be the future of AI systems. One of the more intriguing developments in its AI research may have broad implications in the field of cybersecurity.

Google detailed in a research paper how two AI systems were tasked with communicating with each other, while preventing a third from discovering the content of their communications. The two systems, called Bob and Alice, were given a security key that was not provided to Eve, the third system. While they were not trained regarding coding, Bob and Alice were able to devise a sophisticated encryption protocol that stymied Eve. This could have practical applications for cybersecurity in the future.
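A heavily simplified sketch of that training setup, patterned on the publicly described experiment rather than Google's actual code, looks roughly like this in PyTorch: Alice and Bob are trained to communicate accurately while making Eve's reconstruction worse, and Eve is trained in alternation to decrypt without the key.

```python
import torch
import torch.nn as nn

N = 16  # bits per plaintext message and per shared key

def mixer(in_bits: int) -> nn.Sequential:
    # Tiny fully connected stand-in for the networks in the original experiment.
    return nn.Sequential(nn.Linear(in_bits, 64), nn.Tanh(), nn.Linear(64, N), nn.Tanh())

alice = mixer(2 * N)   # sees [plaintext, key] -> produces a ciphertext
bob = mixer(2 * N)     # sees [ciphertext, key] -> reconstructs the plaintext
eve = mixer(N)         # sees only the ciphertext

opt_ab = torch.optim.Adam(list(alice.parameters()) + list(bob.parameters()), lr=1e-3)
opt_e = torch.optim.Adam(eve.parameters(), lr=1e-3)
mse = nn.MSELoss()

for step in range(5000):
    p = torch.randint(0, 2, (256, N)).float() * 2 - 1   # plaintext bits in {-1, +1}
    k = torch.randint(0, 2, (256, N)).float() * 2 - 1   # shared key bits
    # Alice/Bob turn: reconstruct well AND make Eve's reconstruction poor.
    c = alice(torch.cat([p, k], dim=1))
    loss_ab = mse(bob(torch.cat([c, k], dim=1)), p) - mse(eve(c), p)
    opt_ab.zero_grad(); loss_ab.backward(); opt_ab.step()
    # Eve's turn: try to decrypt the (detached) ciphertext without the key.
    c = alice(torch.cat([p, k], dim=1)).detach()
    loss_e = mse(eve(c), p)
    opt_e.zero_grad(); loss_e.backward(); opt_e.step()
```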

As cybercriminals and their techniques become more sophisticated, so, too, must the methods used to defend against them. Since complex algorithms are being used to perpetrate these attacks, it seems only fitting that artificially intelligent software systems be used to secure against the intrusions. While there is probably no silver bullet, each new advancement adds another weapon in the battle for cybersecurity.

Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Teresa Kersten is an employee of LinkedIn and is a member of The Motley Fool's board of directors. LinkedIn is owned by Microsoft. Danny Vena owns shares of Alphabet (A shares) and Amazon. Danny Vena has the following options: long January 2018 $640 calls on Alphabet (C shares) and short January 2018 $650 calls on Alphabet (C shares). The Motley Fool owns shares of and recommends Alphabet (A shares), Alphabet (C shares), and Amazon. The Motley Fool has a disclosure policy.

Read more:

Big Tech Is Battling Cyberthreats With AI - Motley Fool

More Bad News for Gamblers: AI Wins Again – HPCwire (blog)

AI-based poker playing programs have been upping the ante for lowly humans. Notably, several algorithms from Carnegie Mellon University (e.g. Libratus, Claudico, and Baby Tartanian8) have performed well. Writing in Science last week, researchers from the University of Alberta, Charles University in Prague and Czech Technical University report their poker algorithm DeepStack is the first computer program to beat professional players in heads-up no-limit Texas hold'em poker.

Sorting through the firsts is tricky in the world of AI-game playing programs. What sets DeepStack apart from other programs, say the researchers, is its more realistic approach, at least in games such as poker where all factors are never fully known (think bluffing, for example). Heads-up no-limit Texas hold'em (HUNL) is a two-player version of poker in which two cards are initially dealt face down to each player, and additional cards are dealt face-up in three subsequent rounds. No limit is placed on the size of the bets, although there is an overall limit to the total amount wagered in each game.

"Poker has been a longstanding challenge problem in artificial intelligence," says Michael Bowling, professor in the University of Alberta's Faculty of Science and principal investigator on the study. "It is the quintessential game of imperfect information in the sense that the players don't have the same information or share the same perspective while they're playing."

"Using GTX 1080 GPUs and CUDA with the Torch deep learning framework, we train our system to learn the value of situations," says Bowling on an NVIDIA blog. "Each situation itself is a mini poker game. Instead of solving one big poker game, it solves millions of these little poker games, each one helping the system to refine its intuition of how the game of poker works. And this intuition is the fuel behind how DeepStack plays the full game."

"In the last two decades," write the researchers, "computer programs have reached a performance that exceeds expert human players in many games, e.g., backgammon, checkers, chess, Jeopardy!, Atari video games, and go." These successes all involve games with information symmetry, where all players have identical information about the current state of the game. "This property of perfect information is also at the heart of the algorithms that enabled these successes," the researchers write.

"We introduce DeepStack, an algorithm for imperfect information settings. It combines recursive reasoning to handle information asymmetry, decomposition to focus computation on the relevant decision, and a form of intuition that is automatically learned from self-play using deep learning."

In total, 44,852 games were played by the thirty-three players, with 11 players completing the requested 3,000 games, according to the paper. Over all games played, DeepStack won 492 mbb/g. This is over 4 standard deviations away from zero and so highly significant. According to the authors, professional poker players consider 50 mbb/g a sizable margin. Using AIVAT to evaluate performance, the researchers note DeepStack was overall a bit lucky, with its estimated performance actually 486 mbb/g.

(For those of us less prone to take a seat at the Texas hold'em poker table, mbb/g equals milli-big-blinds per game, or the average winning rate over a number of hands, measured in thousandths of big blinds. A big blind is the initial wager made by the non-dealer before any cards are dealt; the small blind, half the size of the big blind, is the initial wager made by the dealer before any cards are dealt.)
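As a quick worked example of the metric, the arithmetic below converts the reported win rate into big blinds; the dollar stakes are hypothetical and only make the unit tangible.

```python
games = 44_852        # hands DeepStack played against the pros in total
mbb_per_game = 492    # reported average win rate, in milli-big-blinds per game

big_blinds_won = games * mbb_per_game / 1000
print(f"Total won: {big_blinds_won:,.0f} big blinds over {games:,} hands")
# At hypothetical $1/$2 stakes (big blind = $2), that is roughly:
print(f"About ${big_blinds_won * 2:,.0f} at $1/$2 stakes")
```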

It's an interesting paper. Game theory, of course, has a long history and, as the researchers note, the founder of modern game theory and computing pioneer, von Neumann, envisioned reasoning in games without perfect information: "Real life is not like that. Real life consists of bluffing, of little tactics of deception, of asking yourself what is the other man going to think I mean to do. And that is what games are about in my theory." One game that fascinated von Neumann was poker, where players are dealt private cards and take turns making bets or bluffing on holding the strongest hand, calling opponents' bets, or folding and giving up on the hand and the bets already added to the pot. Poker is a game of imperfect information, where players' private cards give them asymmetric information about the state of the game.

According to the paper, the DeepStack algorithm is composed of three ingredients: a sound local strategy computation for the current public state, depth-limited look-ahead using a learned value function to avoid reasoning to the end of the game, and a restricted set of look-ahead actions. At a conceptual level, these three ingredients describe heuristic search, which is responsible for many of AI's successes in perfect information games. Until DeepStack, no theoretically sound application of heuristic search was known in imperfect information games.

The researchers describe DeepStack's architecture as a standard feed-forward network with seven fully connected hidden layers, each with 500 nodes, and parametric rectified linear units for the output. The turn network was trained by solving 10 million randomly generated poker turn games. These turn games used randomly generated ranges, public cards, and a random pot size. The flop network was trained similarly with 1 million randomly generated flop games.
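As a rough illustration of that description (not the authors' released code; the sizes and hidden-layer activations below are placeholder assumptions), such a value network could be sketched in PyTorch as follows.

```python
import torch.nn as nn

def deepstack_style_value_net(input_size: int, output_size: int) -> nn.Sequential:
    # Seven fully connected hidden layers of 500 units, as described in the paper,
    # with a parametric ReLU on the output. Using PReLU for the hidden layers too
    # is an assumption made here for simplicity.
    layers, prev = [], input_size
    for _ in range(7):
        layers += [nn.Linear(prev, 500), nn.PReLU()]
        prev = 500
    layers += [nn.Linear(prev, output_size), nn.PReLU()]
    return nn.Sequential(*layers)

# Placeholder sizes: the real network maps both players' hand ranges plus the
# pot size to a vector of counterfactual values.
net = deepstack_style_value_net(input_size=2001, output_size=2000)
```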

Link to paper: http://science.sciencemag.org/content/early/2017/03/01/science.aam6960.full

Link to NVIDIA blog: https://news.developer.nvidia.com/ai-system-beats-pros-at-texas-holdem/

See original here:

More Bad News for Gamblers: AI Wins Again - HPCwire (blog)

AI Trust Remains an Issue in the Life Sciences – EnterpriseAI

The rise of AI (machine and deep learning) in life sciences has stirred the same excitement and skepticism as in other fields of scientific research.

AI is mostly used in two areas of life sciences. The first is embedded in instruments such as cryo-electron microscopes, where AI tools assist in feature recognition. Those tools are mostly hidden from users.

The other application includes high-profile projects such as the National Cancer Institute-Department of Energy effort known as CANDLE (NCI-DoE's CANcer Distributed Learning Environment), which has access to supercomputing capacity and plenty of AI expertise.

AI is, of course, being used against COVID-19 at many large labs. There's not much in between.

AI nevertheless remains in its infancy elsewhere in life sciences. The appetite for its use is high, but so are the early obstacles, not least the hype, which has already ticked off clinicians. Other stumbling blocks include uneven data quality, non-optimal compute infrastructure and limited AI expertise.

On the plus side, practical pilot programs are emerging.

Sister website HPCwire spoke with Fernanda Foertter, senior scientific consultant at BioTeam, the research computing consultancy. An AI specialist, Foertter joined BioTeam from Nvidia, where she worked on AI for healthcare. Foertter also did a stint at Oak Ridge National Laboratory as a data scientist working on scalable algorithms for data analysis and deep learning.

What got the AI ball rolling at research agencies was natural language processing (NLP), especially within the Energy Department's supercomputer initiatives, Foertter noted. "We have the CANDLE project whose three pilots had the main, basic AI applications [like] NLP, using AI for accelerating molecular dynamics and drug discovery."

"NLP is actually working really well, [and] the molecular dynamics is working really well," Foertter added. "The drug discovery issue was they didn't have the right data to begin with. So, they're still generating data, in vivo data, for that."

The application of HPC to AI also has been used extensively in COVID-19 research. It seems reasonable to expect those efforts will not only bear fruit against the pandemic, but also generate new approaches for using HPC cum AI in life sciences research.

Still, making AI work in clinical settings remains challenging.

When Foertter joined Nvidia in 2018, medical imaging applications were emerging. Those expectations have since been tempered. "We went from talking about, 'Can we discover pneumonia? Can we discover tumors?' to talking about being able to grade tumors, which is much more refined. The one application I think everybody wishes would happen really, really quickly, but hasn't really materialized, is digital pathology," said Foertter.

The workflow remains challenging. "Somebody has to go through a picture that has a very, very high pixel number," she continued. "They choose a few points and they have experience, and the miss rate is anywhere between 30 percent to 40 percent. That means you can send somebody to pathology and they're going to miss that you have cancer."

Large image formats also have slowed digital pathology. "To do any sort of convolution neural network on a really large image to train it would just break it," Foertter explained. "Just the memory size is really hard."

AI hype also has turned off clinicians, particularly claims that the technology would make them obsolete. "There's a lot of animosity [among] physicians. The whole AI thing was kind of sold as if it could replace a lot of folks," Foertter said, an impression that AI vendors moved quickly to correct.

Hence, trust and the ability to see and analyze data used for training and inference remain issues within the medical community.

"AI is still seen as a black box," Foertter stressed.

Read John Russell's full report detailing AI's impact on the life sciences here.


About the author: George Leopold

George Leopold has written about science and technology for more than 30 years, focusing on electronics and aerospace technology. He previously served as executive editor of Electronic Engineering Times. Leopold is the author of "Calculated Risk: The Supersonic Life and Times of Gus Grissom" (Purdue University Press, 2016).

Read the original post:

AI Trust Remains an Issue in the Life Sciences - EnterpriseAI

Emory students advance artificial intelligence with a bot that aims to serve humanity – SaportaReport

A team of six Emory computer science students is helping to usher in a new era in artificial intelligence. They've developed a chatbot capable of making logical inferences that aims to hold deeper, more nuanced conversations with humans than have previously been possible. They've christened their chatbot Emora, because it sounds like a feminine version of Emory and is similar to a Hebrew word for an eloquent sage.

The team is now refining their new approach to conversational AI: a logic-based framework for dialogue management that can be scaled to conduct real-life conversations. Their longer-term goal is to use Emora to assist first-year college students, helping them to navigate a new way of life, deal with day-to-day issues and guide them to proper human contacts and other resources when needed.

Eventually, they hope to further refine their chatbot, developed during the era of COVID-19 with the philosophy "Emora cares for you," to assist people dealing with social isolation and other issues, including anxiety and depression.

The Emory team is headed by graduate students Sarah Finch and James Finch, along with faculty advisor Jinho Choi, associate professor in the Department of Computer Science. The team also includes graduate student Han He and undergraduates Sophy Huang, Daniil Huryn and Mack Hutsell. All the students are members of Choi's Natural Language Processing Research Laboratory.

"We're taking advantage of established technology while introducing a new approach in how we combine and execute dialogue management so a computer can make logical inferences while conversing with a human," Sarah Finch says.

"We believe that Emora represents a groundbreaking moment for conversational artificial intelligence," Choi adds. "The experience that users have with our chatbot will be largely different than chatbots based on traditional, state-machine approaches to AI."

Last year, Choi and Sarah and James Finch headed a team of 14 Emory students that took first place in Amazon's Alexa Prize Socialbot Grand Challenge, winning $500,000 for their Emora chatbot. The annual Alexa Prize challenges university students to make breakthroughs in the design of chatbots, also known as socialbots: software apps that simplify interactions between humans and computers by allowing them to talk with one another.

This year, they developed a completely new version of Emora with the new team of six students.

They made the bold decision to start from scratch, instead of building on the state-machine platform they developed in 2020 for Emora. "We realized there was an upper limit to how far we could push the quality of the system we developed last year," Sarah Finch says. "We wanted to do something much more advanced, with the potential to transform the field of artificial intelligence."

They based the current Emora on three types of frameworks to advance core natural language processing technology, computational symbolic structures, and probabilistic reasoning for dialogue management.

They worked around the clock, making it into the Alexa Prize finals in June. They did not complete most of the new system, however, until just a few days before they had to submit Emora to the judges for the final round of the competition.

That gave the team no time to make finishing touches to the new system, work out the bugs, and flesh out the range of topics that it could deeply engage in with a human. While they did not win this year's Alexa Prize, the strategy led them to develop a system that holds more potential to open new doors of possibilities for AI.

In the run-up to the finals, users of Amazon's virtual assistant, known as Alexa, volunteered to test out the competing chatbots, which were not identified by their names or universities. A chatbot's success was gauged by user ratings.

"The competition is extremely valuable because it gave us access to a high volume of people talking to our bot from all over the world," James Finch says. "When we wanted to try something new, we didn't have to wait long to see whether it worked. We immediately got this deluge of feedback so that we could make any needed adjustments. One of the biggest things we learned is that what people really want to talk about is their personal experiences."

Sarah and James Finch, who married in 2019, are the ultimate computer power couple. They met at age 13 in a math class in their hometown of Grand Blanc, Michigan. They were dating by high school, bonding over a shared love of computer programming. As undergraduates at Michigan State University, they worked together on a joint passion for programming computers to speak more naturally with humans.

"If we can create more flexible and robust dialogue capability in machines," Sarah Finch explains, "a more natural, conversational interface could replace pointing, clicking and hours of learning a new software interface. Everyone would be on a more equal footing because using technology would become easier."

She hopes to pursue a career in enhancing computer dialogue capabilities with private industry after receiving her PhD.

James Finch is most passionate about the intellectual aspects of solving problems and is leaning towards a career in academia after receiving his PhD.

The Alexa Prize deadlines required the couple to work many 60-hour-plus weeks on developing Emora's framework, but they didn't consider it a grind. "I've enjoyed every day," James Finch says. "Doing this kind of dialogue research is our dream and we're living it. We are making something new that will hopefully be useful to the world."

They chose to come to Emory for graduate school because of Choi, an expert in natural language processing, and Eugene Agichtein, professor in the Department of Computer Science and an expert in information retrieval.

Emora was designed not just to answer questions, but as a social companion.

A caring chatbot was an essential requirement for Choi. At the end of every team meeting, he asks one member to say something about how the others have inspired them. "When someone sees a bright side in us, and shares it with others, everyone sees that side and that makes it even brighter," he says.

Choi's enthusiasm is also infectious.

Growing up in Seoul, South Korea, he knew by the age of six that he wanted to design robots. "I remember telling my mom that I wanted to make a robot that would do homework for me so I could play outside all day," he recalls. "It has been my dream ever since. I later realized that it was not the physical robot, but the intelligence behind the robot that really attracted me."

The original Emora was built on a behavioral mathematical model similar to a flowchart and equipped with several natural language processing models. Depending on what people said to the chatbot, the machine made a choice about what path of a conversation to go down. While the system was good at chit chat, the longer a conversation went on, the more chances that the system would miss a social-linguistic nuance and the conversation would go off the rails, diverting from the logical thread.

This year, the Emory team designed Emora so that she could go beyond a script and make logical inferences. Rather than a flowchart, the new system breaks a conversation down into concepts and represents them using a symbolic graph. A logical inference engine allows Emora to connect the graph of an ongoing conversation into other symbolic graphs that represent a bank of knowledge and common sense. The longer the conversations continue, the more its ability to make logical inferences grows.
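To make the idea of a conversation graph joined to a knowledge graph a little more concrete, here is a minimal sketch in Python. It is not Emora's code: the triples, the relation names and the single inference rule are all invented for illustration, and the real system's symbolic structures and probabilistic reasoning are far richer.

```python
# Toy illustration of the idea described above: a conversation graph of
# (subject, relation, object) triples is joined to a background knowledge
# graph so simple inferences can be drawn. The relations and the rule below
# are hypothetical and exist only to show the mechanism.

knowledge_graph = {
    ("pizza", "is_a", "food"),
    ("food", "related_activity", "cooking"),
}

def infer(conversation_graph, background_graph):
    """Connect what the user said to background knowledge via shared concepts."""
    graph = conversation_graph | background_graph
    inferences = []
    # Hypothetical rule: if the user likes X and X is a kind of food,
    # suggest steering the conversation toward the related activity.
    for (s, r, o) in graph:
        if r == "likes":
            for (s2, r2, o2) in graph:
                if s2 == o and r2 == "is_a" and (o2, "related_activity", "cooking") in graph:
                    inferences.append(f"{s} likes {o}, which is a {o2} -> ask about cooking")
    return inferences

# One user turn, already parsed into a triple by an upstream parser.
conversation_graph = {("user", "likes", "pizza")}
print(infer(conversation_graph, knowledge_graph))
```

The longer the conversation runs, the more triples accumulate in the conversation graph, which is the intuition behind the claim that Emora's ability to make inferences grows over time.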

Sarah and James Finch worked on the engineering of the new Emora system, as well as designing logic structures and implementing related algorithms. Undergraduates Sophy Huang, Daniil Huryn and Mack Hutsell focused on developing dialogue content and conversational scripts for integrating within the chatbot. Graduate student Han He focused on structure parsing, including recent advances in the technology.

"A computer cannot deal with ambiguity, it can only deal with structure," Han He explains. "Our parser turns the grammar of a sentence into a graph, a structure like a tree, that describes what a chatbot user is saying to the computer."
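The Emory team built their own structure parser, which is not public, so as a stand-in here is a minimal sketch of the same general idea using spaCy's dependency parser: a user sentence becomes a tree of grammatical relations a program can walk.

```python
# A minimal sketch of the general idea Han He describes: parsing a sentence
# into a tree-like structure. spaCy is used purely for illustration and does
# not reproduce the Emory parser.
import spacy

nlp = spacy.load("en_core_web_sm")  # small English model (assumed installed)
doc = nlp("I really love pizza with extra cheese")

# Print each word with its grammatical relation to its head word,
# i.e. the edges of the dependency tree.
for token in doc:
    print(f"{token.text:>8} --{token.dep_}--> {token.head.text}")
```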

He is passionate about language. Growing up in a small city in central China, he studied Japanese with the goal of becoming a linguist. His family was low income so he taught himself computer programming and picked up odd programmer jobs to help support himself. In college, he found a new passion in the field of natural language processing, or using computers to process human language.

His linguistic background enhances his technological expertise. "When you learn a foreign language, you get new insights into the role of grammar and word order," He says. "And those insights can help you to develop better algorithms and programs to teach computers how to understand language. Unfortunately, many people working in natural language processing focus primarily on mathematics without realizing the importance of grammar."

After getting his master's at the University of Houston, He chose to come to Emory for a PhD to work with Choi, who also emphasizes linguistics in his approach to natural language processing. He hopes to make a career in using artificial intelligence as an educational tool that can help give low-income children an equal opportunity to learn.

A love of language also brought senior Mack Hutsell into the fold. A native of Houston, he came to Emory's Oxford College to study English literature. His second love is computer programming and coding. When Hutsell discovered the digital humanities, using computational methods to study literary texts, he decided on a double major in English and computer science.

"I enjoy thinking about language, especially language in the context of computers," he says.

Choi's Natural Language Processing Lab and the Emora project were a natural fit for him.

Like the other undergraduates on the team, Hutsell did miscellaneous tasks for the project while also creating content that could be injected into Emora's real-world knowledge graph. On the topic of movies, for instance, he started with an IMDB dataset. The team had to combine concepts from possible conversations about the movie data in ways that would fit into the knowledge graph template and generate unique responses from the chatbot. "Thinking about how to turn metadata and numbers into something that sounds human is a lot of fun," Hutsell says.
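As a rough illustration of that step, turning structured movie metadata into something "that sounds human", here is a hedged sketch. The record fields and the sentence template are invented and are not the project's actual schema or generation method.

```python
# Hedged sketch of turning structured movie metadata into a human-sounding
# utterance. The fields and template below are hypothetical, chosen only to
# illustrate the general idea described in the article.

movie = {"title": "Inception", "year": 2010,
         "director": "Christopher Nolan", "genre": "sci-fi"}

def movie_fact_to_utterance(record):
    return (f"Oh, {record['title']}! I remember that {record['genre']} film "
            f"from {record['year']} -- {record['director']} directed it. "
            f"What did you think of it?")

print(movie_fact_to_utterance(movie))
```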

Language was also a key draw for senior Daniil Huryn. He was born in Belarus, moved to California with his family when he was four, and then returned to Belarus when he was 10, staying until he completed high school. He speaks English, Belarusian and Russian fluently and is studying German.

"In Belarus, I helped translate at my church," he says. "That got me thinking about how different languages work differently and that some are better at saying different things."

Huryn excelled in computer programming and astronomy in his studies in Belarus. His interests also include reading science fiction and playing video games. He began his Emory career on the Oxford campus, and eventually decided to major in computer science and minor in physics.

For the Emora project, he developed conversations about technology, including an AI component, and another on how people were adapting to life during the pandemic.

"The experience was great," Huryn says. "I helped develop features for the bot while I was taking a course in natural language processing. I could see how some of the things I was learning about were coming together into one package to actually work."

Team member Sophy Huang, also a senior, grew up in Shanghai and came to Emory planning to go down a pre-med track. She soon realized, however, that she did not have a strong enough interest in biology and decided on a dual major of applied mathematics and statistics and psychology. Working on the Emora project also taps into her passions for computer programming and developing applications that help people.

"Psychology plays a big role in natural language processing," Huang says. "It's really about investigating how people think, talk and interact and how those processes can be integrated into a computer."

Food was one of the topics Huang developed for Emora to discuss. "The strategy was first to connect with users by showing understanding," she says.

For instance, if someone says pizza is their favorite food, Emora would acknowledge their interest and ask what it is about pizza that they like so much.

"By continuously acknowledging and connecting with the user, asking for their opinions and perspectives and sharing her own, Emora shows that she understands and cares," Huang explains. "That encourages them to become more engaged and involved in the conversation."
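For readers who want to see the shape of that "acknowledge, then ask" pattern, here is a tiny, hypothetical sketch. Emora's real dialogue manager reasons over symbolic graphs rather than matching strings; the trigger phrase and responses below are invented purely to show the conversational move Huang describes.

```python
# A tiny, hypothetical illustration of the "acknowledge, then ask a follow-up"
# pattern used for the food topic. String matching stands in here for the far
# richer graph-based reasoning the real system uses.

def food_turn(user_utterance: str) -> str:
    if "favorite food" in user_utterance and "pizza" in user_utterance:
        # Acknowledge the user's interest, then invite them to elaborate.
        return "Pizza is a great choice! What is it about pizza that you like so much?"
    return "That sounds tasty. What do you usually eat when you want to treat yourself?"

print(food_turn("My favorite food is pizza"))
```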

The Emora team members are still at work putting the finishing touches on their chatbot.

"We created most of the system that has the capability to do logical thinking, essentially the brain for Emora," Choi says. "The brain just doesn't know that much about the world right now and needs more information to make deeper inferences. You can think of it like a toddler. Now we're going to focus on teaching the brain so it will be on the level of an adult."

The team is confident that their system works and that they can complete full development and integration to launch beta testing sometime next spring.

Choi is most excited about the potential to use Emora to support first-year college students, answering questions about their day-to-day needs and directing them to the proper human staff or professor as appropriate. For larger issues, such as common conflicts that arise in group projects, Emora could also serve as a starting point by sharing how other students have overcome similar issues.

Choi also has a longer-term vision that the technology underlying Emora may one day be capable of assisting people dealing with loneliness, anxiety or depression. "I don't believe that socialbots can ever replace humans as social companions," he says. "But I do think there is potential for a socialbot to sympathize with someone who is feeling down, and to encourage them to get help from other people, so that they can get back to the cheerful life that they deserve."

Read more:

Emory students advance artificial intelligence with a bot that aims to serve humanity - SaportaReport

AI bias detection (aka: the fate of our data-driven world) – ZDNet

Here's an astounding statistic: Between 2015 and 2019, global use of artificial intelligence grew by 270%. It's estimated that 85% of Americans are already using AI products daily, whether they know it or not.

It's easy to conflate artificial intelligence with superior intelligence, as though machine learning based on massive data sets leads to inherently better decision-making. The problem, of course, is that human choices undergird every aspect of AI, from the curation of data sets to the weighting of variables. Usually there's little or no transparency for the end user, meaning resulting biases are next to impossible to account for. Given that AI is now involved in everything from jurisprudence to lending, it's massively important for the future of our increasingly data-driven society that the issue of bias in AI be taken seriously.

This cuts both ways -- development in the technology class itself, which represents massive new possibilities for our species, will only suffer from diminished trust if bias persists without transparency and accountability. In one recent conversation, Booz Allen's Kathleen Featheringham, Director of AI Strategy & Training, told me that adoption of the technology is being slowed by what she identifies as historical fears:

Because AI is still evolving from its nascency, different end users may have wildly different understandings about its current abilities, best uses and even how it works. This contributes to a black box around AI decision-making. To gain transparency into how an AI model reaches end results, it is necessary to build measures that document the AI's decision-making process. In AI's early stage, transparency is crucial to establishing trust and adoption.

While AI's promise is exciting, its adoption is slowed by historical fear of new technologies. As a result, organizations become overwhelmed and don't know where to start. When pressured by senior leadership, and driven by guesswork rather than priorities, organizations rush to enterprise AI implementation that creates more problems.

One solution that's becoming more visible in the market is validation software. Samasource, a prominent supplier of solutions to a quarter of the Fortune 50, is launching AI Bias Detection, a solution that helps to detect and combat systemic bias in artificial intelligence across a number of industries. The system, which keeps a human in the loop, offers advanced analytics and reporting capabilities that help AI teams spot and correct bias before it's implemented across a variety of use-cases, from identification technology to self-driving vehicles.

"Our AI Bias Detection solution proves the need for a symbiotic relationship between technology and a human-in-the-loop team when it comes to AI projects," says Wendy Gonzalez, President and Interim CEO of Samasource. "Companies have a responsibility to actively and continuously improve their products to avoid the dangers of bias and humans are at the center of the solution."

That responsibility is reinforced by alarmingly high error rates in current AI deployments. One MIT study found that "gender classification systems sold by IBM, Microsoft, and Face++" had "an error rate as much as 34.4 percentage points higher for darker-skinned females than lighter-skinned males." Samasource also references a Broward County, Florida, law enforcement program used to predict the likelihood of crime, which was found to "falsely flag black defendants as future criminals (...) at almost twice the rate as white defendants."

The company's AI Bias Detection looks specifically at labeled data by class and discriminates between ethically sourced, properly diverse data and sets that may lack diversity. It pairs that detection capability with a reporting architecture that provides details on dataset distribution and diversity so AI teams can pinpoint problem areas in datasets, training, or algorithms in order to root out biases.
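The core of such a distribution report is simple to illustrate. The sketch below is not Samasource's product; it merely counts labels per class and flags any class that falls below an arbitrary share of the dataset, which is the kind of signal a reviewer would follow up on.

```python
# Hedged sketch of a per-class distribution report. The threshold and the
# example labels are invented; a real tool would add many more diversity
# dimensions and reporting views.
from collections import Counter

def class_distribution_report(labels, min_share=0.10):
    counts = Counter(labels)
    total = sum(counts.values())
    report = {}
    for cls, n in counts.items():
        share = n / total
        report[cls] = {"count": n, "share": round(share, 3), "flagged": share < min_share}
    return report

labels = (["lighter_male"] * 80 + ["lighter_female"] * 70 +
          ["darker_male"] * 10 + ["darker_female"] * 5)
for cls, row in class_distribution_report(labels).items():
    print(cls, row)
```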

Pairing powerful detection tools with a broader understanding of how insidious AI bias can be will be an important step in the early days of AI/ML adoption. Part of the onus, certainly, will have to be on consumers of AI applications, particularly in spheres like governance and law enforcement, where the stakes couldn't possibly be higher.

View post:

AI bias detection (aka: the fate of our data-driven world) - ZDNet

Learn about Artificial Intelligence (AI) | Code.org

NEW AI and Machine Learning Module

Our new curriculum module focuses on AI ethics, examines issues of bias, and explores and explains fundamental concepts through a number of online and unplugged activities and full-group discussions.

AI and Machine Learning impact our entire world, changing how we live and how we work. That's why it's critical for all of us to understand this increasingly important technology, including not just how it's designed and applied, but also its societal and ethical implications.

Join us to explore AI in a new video series, train AI for Oceans in 25+ languages, discuss ethics, and more!

Learn about how AI works and why it matters with this series of short videos. Featuring Microsoft CEO Satya Nadella and a diverse cast of experts.

Students reflect on the ethical implications of AI, then work together to create an AI Code of Ethics resource for AI creators and legislators everywhere.

We thank Microsoft for supporting our vision and mission to ensure every child has the opportunity to learn computer science and the skills to succeed in the 21st century.

The AI and Machine Learning Module is roughly a five-week curriculum module that can be taught as a standalone module or as an optional unit in CS Discoveries. It focuses on AI ethics, examines issues of bias, and explores and explains fundamental concepts.

Because machine learning depends on large sets of data, the new unit includes real-life datasets on healthcare, demographics, and more to engage students while exploring questions like: "What is a problem Machine Learning can help solve? How can AI help society? Who is benefiting from AI? Who is being harmed? Who is involved? Who is missing?"

Ethical considerations will be at the forefront of these discussions, with frequent discussion points and lessons around the impacts of these technologies. This will help students develop a holistic, thoughtful understanding of these technologies while they learn the technical underpinnings of how the technologies work.

With an introduction by Microsoft CEO Satya Nadella, this series of short videos will introduce you to how artificial intelligence works and why it matters. Learn about neural networks, or how AI learns, and delve into issues like algorithmic bias and the ethics of AI decision-making.

Go deeper with some of our favorite AI experts! This panel discussion touches on important issues like algorithmic bias and the future of work. Pair it with our AI & Ethics lesson plan for a great introduction to the ethics of artificial intelligence!

Resources to inspire students to think deeply about the role computer science can play in creating a more equitable and sustainable world.

This global AI for Good challenge introduces students to Microsoft's AI for Good initiatives, empowering them to solve a problem in the world with the power of AI.

Levels 2-4 use a pretrained model provided by the TensorFlow MobileNet project. A MobileNet model is a convolutional neural network that has been trained on ImageNet, a dataset of over 14 million images hand-annotated with words such as "balloon" or "strawberry". In order to customize this model with the labeled training data the student generates in this activity, we use a technique called Transfer Learning. Each image in the training dataset is fed to MobileNet, as pixels, to obtain a list of annotations that are most likely to apply to it. Then, for a new image, we feed it to MobileNet and compare its resulting list of annotations to those from the training dataset. We classify the new image with the same label (such as "fish" or "not fish") as the images from the training set with the most similar results.
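The description above maps onto a short script. The sketch below is only an illustration of that technique: it runs images through a pretrained MobileNet, keeps the resulting vector of ImageNet annotation scores, and labels a new image by its single most similar labeled training image. Code.org's actual activity runs in the browser with TensorFlow.js and compares against multiple training images; the file names here are hypothetical.

```python
# Hedged sketch of the Levels 2-4 approach: MobileNet annotation vectors plus
# a nearest-neighbour comparison against the student's labeled images.
import numpy as np
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights="imagenet")

def annotation_vector(path):
    img = tf.keras.preprocessing.image.load_img(path, target_size=(224, 224))
    x = tf.keras.preprocessing.image.img_to_array(img)[np.newaxis, ...]
    x = tf.keras.applications.mobilenet_v2.preprocess_input(x)
    return model.predict(x)[0]          # 1,000 ImageNet class scores

def classify(new_path, training_examples):
    """training_examples: list of (image_path, label) pairs (hypothetical files)."""
    new_vec = annotation_vector(new_path)
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    best = max(training_examples, key=lambda ex: cosine(new_vec, annotation_vector(ex[0])))
    return best[1]                      # label of the most similar training image

# Example (paths are placeholders):
# print(classify("mystery.jpg", [("fish1.jpg", "fish"), ("boot1.jpg", "not fish")]))
```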

Levels 6-8 use a Support-Vector Machine (SVM). We look at each component of the fish (such as eyes, mouth, body) and assemble all of the metadata for the components (such as number of teeth, body shape) into a vector of numbers for each fish. We use these vectors to train the SVM. Based on the training data, the SVM separates the "space" of all possible fish into two parts, which correspond to the classes we are trying to learn (such as "blue" or "not blue").
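For the SVM levels, the equivalent sketch is even shorter. The feature names and values below are invented; in the real activity the vectors are assembled from the fish components the student labels.

```python
# Hedged sketch of the Levels 6-8 approach: a Support-Vector Machine trained
# on vectors of fish-component metadata, separating two classes.
from sklearn import svm

# Feature vector per fish (hypothetical):
# [number_of_teeth, body_roundness, fin_count, blue_intensity]
X = [
    [10, 0.8, 3, 0.9],
    [12, 0.7, 2, 0.8],
    [ 4, 0.3, 4, 0.1],
    [ 6, 0.4, 5, 0.2],
]
y = ["blue", "blue", "not blue", "not blue"]

clf = svm.SVC(kernel="linear")
clf.fit(X, y)
print(clf.predict([[11, 0.75, 3, 0.85]]))   # expected: ['blue']
```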

Read more:

Learn about Artificial Intelligence (AI) | Code.org

Ai Weiwei Is Creeping on New York with an Army of Drones, and Instagram Is Loving It – W Magazine

Last year, the artist Ai Weiwei celebrated the Chinese government returning his passport by putting on no fewer than four exhibitions in New York, including a thrift shop in Soho that was in fact stocked with the abandoned belongings of thousands of refugees forced to relocate to a camp on the border of Greece and Macedonia.

Ai's work continues to spotlight sociopolitical crises, of which there is no shortage these days. This week, he unveiled an expansive installation in collaboration with the architects Jacques Herzog and Pierre de Meuron inside the Park Avenue Armory on Manhattan's Upper East Side (this exhibition comes on the heels of the 13 Cate Blanchetts that were projected throughout the cavernous Drill Hall). (It's not the first time Ai has collaborated with Herzog and de Meuron: They have worked together for the past 15 years on projects like the 2008 Beijing Olympic Stadium, which Ai later said he regretted taking part in because the games were "merely a stage for a political party to advertise its glory to the world.")

"Hansel & Gretel," as the installation is eerily called, also happens to be interactive, whether visitors like it or not. From the moment they step into Drill Hall, each of their movements is tracked and monitored via drones. Unlike the artist Jordan Wolfson's equally chilling yet slightly more menacing robot, which employed similar technology to lunge at viewers, though, each visitor is then simply projected back onto the installation, as a white light follows them to make sure they won't get lost in the darkness, and so they can't avoid the cameras' glare. Still, many of them have taken to throwing up peace signs, or, in the case of the artist himself, a middle finger, at the drones. And of course, they're posting about the chilling experience on Instagram. Witness their encounters, here.

Read more:

Ai Weiwei Is Creeping on New York with an Army of Drones, and Instagram Is Loving It - W Magazine

How AI is being used to socially distance audiences at ‘Tenet’ and why Netflix is no threat, according to this movie theater chain boss – MarketWatch

Elizabeth Debicki, left, and John David Washington in a scene from director Christopher Nolan's "Tenet." Melinda Sue Gordon/Associated Press

Sophisticated algorithms are being used by one of Europe's biggest movie theater chains to help with social distancing.

Vue International, which has around 230 cinemas in the U.K., Germany, Taiwan, Italy, Poland and other European countries, has been using artificial intelligence to optimize screening times and is making adjustments to control the flow of audiences into auditoriums.

Tim Richards, who founded privately owned Vue cinemas around 20 years ago, said 10 years' worth of data had been fed into computers pre-COVID to decide on the timing and frequency for screening movies.

This has now been adapted to control the flow of customers into the cinemas by staggering screening times. It is being linked with seating software that cocoons customers within their family bubbles, or on their own, a safe distance away from other customers.
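Vue's actual seating software and its rules are not public, but the "cocooning" idea can be illustrated with a very small sketch: seat each booking group together, then leave a buffer of empty seats before the next group.

```python
# A hedged, toy illustration of socially distanced "cocooning" along one row
# of seats. Real cinema seating systems handle 2D layouts, aisles, sight
# lines and regulations; none of that is modelled here.

def allocate_row(row_length, group_sizes, buffer_seats=2):
    """Greedily place groups along one row, returning seat indices per group."""
    allocations, cursor = [], 0
    for size in group_sizes:
        if cursor + size > row_length:
            break                               # no room left in this row
        allocations.append(list(range(cursor, cursor + size)))
        cursor += size + buffer_seats           # skip the distancing buffer
    return allocations

print(allocate_row(row_length=20, group_sizes=[4, 2, 3, 5]))
# -> [[0, 1, 2, 3], [6, 7], [10, 11, 12], [15, 16, 17, 18, 19]]
```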

Read: Here's an overlooked way to play the stuck-at-home trend in the stock market

Richards, speaking at a press briefing on Monday evening, said: "It took me 17 years to build the group up to 230 cinemas. What happened just a few months ago was apocalyptic.

"We have planned for crises such as a cinema being shut and blockbusters tanking, but not all the cinemas being down. Our big [cost] exposures are studios, people, and rent -- we were quickly focused on our burn rate and liquidity."

Last month it was reported that Vue was lining up £100 million ($133 million) in additional debt financing. The firm is owned by Alberta Investment Management Corporation and pension fund Omers. Richards and other managers hold a 27% stake.

Vue has been slowly reopening its cinemas around Europe over the past few weeks.

"We have been using AI to help determine what is played, at what screen, and at which cinemas [to optimize revenues]," he said. "Our operating systems have been tweaked to social distance customers. It recognizes if you are with family and it will cocoon you. At the moment we are probably able to use 50% of cinemas' capacities.

"We can control the number of people in the foyer at any one time. Crowds would not be conducive to helping customers feel comfortable coming back. Every member of staff went through two days of safety training."

Richards said when he did reopen his movie theaters there was pent-up demand from customers but no new movies to screen.

"We still managed at 50% run rate with classic movies that were not only already available on streaming services but on terrestrial television as well. People just wanted to get out of their homes and have some kind of normalcy."

Christopher Nolan's complex thriller "Tenet" is the first major new film to be released, and Richards said: "We are seeing 'Tenet' performing at the same levels as 'Inception' and 'Interstellar' did, which has been amazing.

"It will be a bumpy road in some areas but we expect a return to normalcy in six months -- it will take a couple of months to get people comfortable again with their positions."

He said entertainment giant Disney (DIS) has a strong line-up of movie theater releases, despite sending "Mulan" directly to its streaming channel.

Fears that streaming service Netflix (NFLX) is a threat to the industry, as movie lovers become used to watching films at home, are unfounded, he said.

Opinion: Is "Mulan" worth $30? The answer, and other streaming picks for September 2020

"Netflix has been disruptive for everything in the home," he said. "We are out of the home, so Netflix is complementary to us because most people who like film like film on all formats.

"I've seen the demise of the industry predicted definitely five or six times. We have been counter-cyclical during downturns -- we are reasonably priced so people come out and enjoy what we have to offer."

Here is the original post:

How AI is being used to socially distance audiences at 'Tenet' and why Netflix is no threat, according to this movie theater chain boss - MarketWatch

3 Predictions For The Role Of Artificial Intelligence In Art And Design – Forbes

Christie's made the headlines in 2018 when it became the first auction house to sell a painting created by AI. The painting, named "Portrait of Edmond de Belamy," ended up selling for a cool $432,500, but more importantly, it demonstrated how intelligent machines are now perfectly capable of creating artwork.

It was only a matter of time, I suppose. Thanks to AI, machines have been able to learn more and more human functions, including the ability to see (think facial recognition technology), speak and write (chatbots being a prime example). Learning to create is a logical step on from mastering the basic human abilities. But will intelligent machines really rival humans' remarkable capacity for creativity and design? To answer that question, here are my top three predictions for the role of AI in art and design.

1. Machines will be used to enhance human creativity (enhance being the key word)

Until we can fully understand the brain's creative thought processes, it's unlikely machines will learn to replicate them. As yet, there's still much we don't understand about human creativity: those inspired ideas that pop into our brain seemingly out of nowhere; the "eureka!" moments of clarity that stop us in our tracks. Much of that thought process remains a mystery, which makes it difficult to replicate the same creative spark in machines.

Typically, then, machines have to be told what to create before they can produce the desired end result. The AI painting that sold at auction? It was created by an algorithm that had been trained on 15,000 pre-20th century portraits, and was programmed to compare its own work with those paintings.

The takeaway from this is that AI will largely be used to enhance human creativity, not replicate or replace it, a process known as "co-creativity." As an example of AI improving the creative process, IBM's Watson AI platform was used to create the first-ever AI-generated movie trailer, for the horror film "Morgan." Watson analyzed visuals, sound, and composition from hundreds of other horror movie trailers before selecting appropriate scenes from "Morgan" for human editors to compile into a trailer. This reduced a process that usually takes weeks down to one day.

2. AI could help to overcome the limits of human creativity

Humans may excel at making sophisticated decisions and pulling ideas seemingly out of thin air, but human creativity does have its limitations. Most notably, we're not great at producing a vast number of possible options and ideas to choose from. In fact, as a species, we tend to get overwhelmed and less decisive the more options we're faced with! This is a problem for creativity because, as American chemist Linus Pauling, the only person to have won two unshared Nobel Prizes, put it, "You can't have good ideas unless you have lots of ideas." This is where AI can be of huge benefit.

Intelligent machines have no problem coming up with infinite possible solutions and permutations, and then narrowing the field down to the most suitable options, the ones that best fit the human creative's vision. In this way, machines could help us come up with new creative solutions that we couldn't possibly have come up with on our own.

For example, award-winning choreographer Wayne McGregor has collaborated with Google Arts & Culture Lab to come up with new, AI-driven choreography. An AI algorithm was trained on thousands of hours of McGregor's videos, spanning 25 years of his career, and as a result, the program came up with 400,000 McGregor-like sequences. In McGregor's words, the tool "gives you all of these new possibilities you couldn't have imagined."

3. Generative design is one area to watch

Much like in the creative arts, the world of design will likely shift towards greater collaboration between humans and AI. This brings us to generative design: a cutting-edge field that uses intelligent software to enhance the work of human designers and engineers.

Very simply, the human designer inputs their design goals, specifications, and other requirements, and the software takes over to explore all possible designs that meet those criteria. Generative design could be utterly transformative for many industries, including architecture, construction, engineering, manufacturing, and consumer product design.
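That loop, designer states goals and constraints, software searches the space of candidate designs, is easy to illustrate in miniature. The sketch below is not any vendor's generative-design engine; the "physics" is a deliberately toy-like stand-in, and random search stands in for the far more sophisticated solvers real tools use.

```python
# A miniature, hedged illustration of the generative-design loop: enumerate
# many candidate designs, keep those that satisfy the stated constraint
# (supports a load), and pick the one using the least material. All formulas
# below are invented proxies, not real engineering analysis.
import random

def candidate():
    return {"leg_thickness_mm": random.uniform(5, 40),
            "seat_thickness_mm": random.uniform(5, 40)}

def material_used(d):
    return d["leg_thickness_mm"] * 4 + d["seat_thickness_mm"] * 10   # arbitrary proxy

def supports_load(d, load_kg=120):
    # Toy constraint: thicker parts support more weight.
    return d["leg_thickness_mm"] * 6 + d["seat_thickness_mm"] * 2 >= load_kg

random.seed(0)
designs = [candidate() for _ in range(10_000)]
feasible = [d for d in designs if supports_load(d)]
best = min(feasible, key=material_used)
print(best, round(material_used(best), 1))
```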

In one exciting example of generative design, renowned designer Philippe Starck collaborated with software company Autodesk to create a new chair design. Starck and his team set out the overarching vision for the chair and fed the AI system questions like, "Do you know how we can rest our bodies using the least amount of material?" From there, the software came up with multiple suitable designs to choose from. The final design, an award-winning chair named "AI," debuted at Milan Design Week in 2019.

Machine co-creativity is just one of 25 technology trends that I believe will transform our society. Read more about these key trends, including plenty of real-world examples, in my new books, Tech Trends in Practice: The 25 Technologies That Are Driving The 4th Industrial Revolution and The Intelligence Revolution: Transforming Your Business With AI.

Here is the original post:

3 Predictions For The Role Of Artificial Intelligence In Art And Design - Forbes

The man behind Android says AI is the next major operating system – CNBC

The heart of Home is Essential's operating system, Ambient OS. Rubin didn't share much about the new software, but he did share his thoughts about how AI will become the next big operating system.

"I think it's AI. It's a slightly different AI than we see today. Today we see pattern matching and vision tricks and automation for self-driving cars and assistants like Siri or Google Assistant, but I think there's a thing after that that will coalesce into something that's more of an operating platform."

Rubin knows his own hardware company can't create the master AI platform alone, which is why his incubator Playground is so important.

"We're investing in hardware companies because we think they're essential in training AI," Rubin said. "One of our invested companies is called Light House. They make a camera for your home like a Dropcam except it uses AI to analyze everything that's happening in your house. You can ask if the kids went to school on time and it can answer."

Essential Home will allow you to play music through popular services, check the weather and more, all through a circular touchscreen.

But unlike other systems, like the Amazon Echo or Google Home, his plan is to create an OS that works with everything else. It's an ambitious goal with serious technical challenges, but Rubin knows enough about operating systems that he shouldn't be ignored.

See the rest here:

The man behind Android says AI is the next major operating system - CNBC

Worlds first AI-generated arts festival program opens this Friday – The Next Web

The Edinburgh Fringe is the world's largest performing arts festival, but this year's event has sadly been canceled due to COVID-19. Fortunately, art junkies can still get their fix of the Fringe at a virtual alternative curated by an AI called the ImprovBot.

The system analyzed the 100-word text descriptions of every show staged at the festival from 2011 to 2019, a total of more than two million words. ImprovBot uses this data to generate ideas for new comedies, plays, musicals, and cabaret.
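The article does not say which model ImprovBot actually uses, so purely as an illustrative stand-in, here is one of the simplest ways new blurbs can be generated from a corpus of old ones: a word-level Markov chain. The toy corpus below is invented; the real system trained on two million words of Fringe show descriptions.

```python
# Hypothetical stand-in for ImprovBot: a tiny word-level Markov chain that
# "writes" new blurbs by following word-to-word transitions observed in a
# corpus of old show descriptions.
import random
from collections import defaultdict

corpus = ("a hilarious one woman show about loss . "
          "a terrifying tale of isolation and hope . "
          "a musical journey about love and robots .").split()

transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate_blurb(start="a", max_words=12):
    words = [start]
    for _ in range(max_words - 1):
        options = transitions.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate_blurb())
```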

The blurbs will then be handed to the Improverts, the Fringe's longest-running improvised comedy troupe, who will stage their own takes on the shows over Twitter.

[Read: How an AI graphic designer convinced clients it was human]

"The aim of ImprovBot is to explore the junction of human creativity and comedy, and to see how this is affected when an Artificial Intelligence enters into the mix," said Melissa Terras, Professor of Digital Cultural Heritage at the University of Edinburgh. "It is [a] reminder of the playfulness of the Fringe and we invite online audiences to rise to the provocation, and interact, remix, mashup, and play with the content."

In total, ImprovBot aims to create 350 show descriptions, which will be posted every hour on Twitter from August 7 to 31. It's already provided a sample of its oeuvre, which ranges from a terrifying tale of isolation titled "Collection to Politics" to a hilarious comedy called "The Man Behind the Real Song Lovers."

Truth be told, most of the blurbs are pretty nonsensical, so the Improverts will have a tough job adapting the AI's words for the stage. You can judge their efforts for yourself from Friday on Twitter.

Read the original here:

Worlds first AI-generated arts festival program opens this Friday - The Next Web

In a world where machines and AI rule, re-skilling is the only way out – YourStory.com

Gartner says more than 3 million workers across the world will have a robo boss by 2018. High time businesses reorient skill development programs to help mid-level managers stay relevant.

In July, the Vodafone-Idea merger was approved by the Competition Commission of India (CCI). The mega deal will make the shareholders of both companies part of the largest telecom company in India, and reward them in the future. It will also create a situation that can quickly escalate into a nightmare.

As many as 6,000 senior-level leaders will have overlapping roles in the new entity. Industry sources say at least 50 percent of these will have to be let go and will not be employable. These individuals, who have put in at least 20 years of work in various roles within the organisation, have not been trained to keep pace with the digital era. But turning unemployable at the age of 45 is scary.

If the scene in Mumbai is bleak, in Chennai, the offices of ZohoCorp seem to have a Zen vibe.

Co-founder Sridhar Vembu is analysing technologies that can impact his organisation and his employees. He leaves no stone unturned when it comes to upskilling employees and is betting on technical languages that work for Zoho's applications. Sridhar spends a lot of time with his engineers, and almost 400-odd engineers have moved from the coding server to building applications on Android. At Zoho, senior engineers are constantly relearning static languages (Scala and Java) and are even playing with dynamic languages (JavaScript, ActionScript, Ruby on Rails). Sridhar says:

We can build a global organisation the Indian way. Unfortunately, all organisations use the western concept of hiring and firing, and focus on boosting shareholder returns. It is the problem of leaders who don't understand the impact they have on employees fired; after all, they just followed what they were told.

He adds that it is the responsibility of leaders to ensure employees are up to date with new technologies.

"Even if people are talking about AI, you need human capital to train these machines in understanding data. I believe in contextual learning and people in Zoho are learning from different teams at any given point of time," says Sridhar.

He continues to believe that human capital is a greater advantage in this era than ever before.

"Indians perform all rituals; unfortunately god has left the temple. Today, we have moved from being spiritual to being ritual," Sridhar says, implying that today everyone follows a leader or pursues a task, but neither the leader nor employees think about a holistic approach to learning and building systems.

Narayana Murthy, Chairman Emeritus of Infosys, at the founder's farewell dinner in 2015 urged his organisation to follow compassionate capitalism. He had said, "It was our belief that it is only through excellence in people that we could achieve such growth."

With machine learning and AI skills requiring a deeper understanding of the industry, a manager has to retrain himself and ensure that the decline of legacy work, such as code maintenance and quality testing, does not affect new hires or people five years into the job.

Manoj Thelakkat, Founder of Reflex Training Partners, says, "Training modules have to change from time-based and certification programmes to contextual learning." He says his organisation is teaching senior managers through theatre and music to understand collaborating in the world of AI, preparing organisations to reskill staff rather than sack them by looking at an Excel sheet.

Corporates are reskilling and realising that AI does not mean losing jobs, but a realignment of jobs.

In a recent survey by PwC titled "Bot.Me: A revolutionary partnership," more than 50 percent of respondents believed AI could help better healthcare, financial management, security and education. Less than 40 percent believed that it could create income and gender equality. In the next five years, jobs such as tutors, travel agents, tax preparers, office and home assistants, health coaches, chauffeurs and general physicians will get replaced.

The survey was limited to developing markets where growth has stagnated and the population is aging. This falls right into the table for India as these AI programs will be built by engineers.

Vijay Ratnaparkhe, Managing Director of Robert Bosch Engineering and Business Solutions Limited (RBEI), believes that one must not forget that today India is building software for the world and we are the brains powering future solutions.

Robert Bosch and its 15,000-plus engineers are building software for cars, which are learning about living objects by using mono-chrome cameras, ultrasound, radar and LIDAR technology. These are the kind of roles that engineers must prepare for in the coming days.

As new technologies emerge, Infosys is investing in upskilling and has designed programs to help mid-management levels keep pace with change.

Richard Lobo, Executive VP and HR Head at Infosys, says: "Automation and related technology are the way forward and must be embraced by employees, irrespective of their role or job level."

He adds that new avenues have evolved rapidly, which need the company to reskill people on newer technologies and hire from outside to meet gaps in the skill mix. These include areas like user experience, cloud-native development, AI and industrial IoT, Big Data, Analytics and Automation.

"In this environment, it is important that employees showcase high learnability and the ability to re-skill themselves rapidly," Richard says.

Infosys has created game-based learning methodologies where the program focuses on taking disagreements and turning them into positive solutions. This program enables managers to consciously embrace differences of opinion and create an environment that cuts through the competitive nature of conflicts, promoting collaboration among teams and partners.

The company has invested in agile and feels that it is the only process they have devoted purely to the middle layer. Ever since Vishal Sikka took over as CEO, he has been preparing employees on Design Thinking, where Infosys understands the entire technology requirement from the business perspective. The company has trained 1,42,218 employees so far and wants everyone in the company to go through that change.

Infosys also works with Stanford to train senior leaders. The Stanford Global Leadership Program had 36 graduates in the first quarter of this financial year. Seventy senior people have completed the program so far and another 40-plus are in the current batch.

"This is a one-of-a-kind program to build our next generation leaders," Richard says.

Last quarter, Infosys finished training 3,000 people on AI technology, 2,100 of them on the Nia platform. It has currently created a bank of 3,500 videos available and has also partnered with Udacity and Coursera for different skills.

There is a reason for this rush to train employees in new skills. Clients are now asking IT services to be more in line with client success in winning business.

Daimler AG, for example, is working with Bosch to launch a fleet of autonomous cars in a five-year time frame. For this form of business, a new framework of data analytics services, network and infrastructure needs to be created. It is here that IT Services want to take a bet. They will use the current set of resources to build these new IT requirements. The days of doling out CVs could reduce and the engineering community has begun living in fear. But it is an era of constant learning.

Rajesh Kumar R, Delivery Head, Retail, CPG and Manufacturing at Mindtree, the $900-million IT services company, says: "With the advent of any new technology there will be some impact to certain jobs. However, the concern or fear is due to this short-term impact. Focusing on reskilling and technology education can help employees stay relevant."

He adds that irrespective of automation, collaboration is imperative to be successful in the current environment. For example, a startup ecosystem produces amazing innovation that corporates and governments can adopt. Similarly businesses of various sizes will address different segments of the industry and all these will need to work together to address demands. The future of the industry is moving towards a highly collaborative environment.

"Automation is impacting mid-level managers because automation is now touching the so-called knowledge worker-related areas, once thought not automatable," Rajesh says. He adds that far more cognitive tasks will be automated in the coming years and automation will happen more rapidly as we progress.

At Mindtree, learning is driven by Yorbit, the company's online enterprise learning platform that has yielded great results in just a year of its launch. Multiple knowledge sources are brought into this platform to enable directed self-learning, complemented with project-driven assisted learning to put knowledge into practice.

Automation is likely to impact cognitive routine tasks and will shift the focus of human intervention to cognitive non-routine areas. For example, with ATMs, banks are less worried about the mechanics of collecting and distributing cash; instead the workforce is focused on investment advice and customer relations. If we take the automobile industry, the human focus is more on innovation and design and less on core manufacturing, where quality manufacturing is taken for granted with heavy robot-driven automation.

Digital is prompting organisations across industries to reinvent and reimagine their employee enablement and engagement strategies for better business success. The correlation between employee engagement and business performance is becoming increasingly relevant.

According to Gartner Research, by 2018 50 percent of team collaboration and communication will occur through mobile group collaboration apps. Organisations will have unified observational, social and people analytics to discover, design and share better work practices. While the workplace is transforming at a rapid pace, it requires your workforce to reimagine their future by adopting newer technologies. The role of leadership now also includes making the change easier for employees.

David Raj, EVP and HR Head at CSS Corp, an IT Services company, says: "In this context, the mid-level management, the future leader/CXOs, in organisations also need to evolve and reinvent as traditional roles and structures come under increasing strain."

India's IT workforce comprises roughly 1.4 million mid-level managers, and they are finding themselves at the centre of reskilling and restructuring conversations across organisations.

NASSCOM believes the IT industry's current reskilling focus is on emerging technologies like Big Data, Analytics, Cloud, IoT, Mobility, and Design Thinking, while also investing in emerging skills like Machine Learning, Natural Language Processing, Artificial Intelligence, DevOps, Robotic Process Automation, and Cybersecurity.

However, as Mohandas Pai, Chairman of Manipal Global Education and former member of the board of directors of Infosys, says: "If growth beats job losses, employment will continue to grow. But we need to be prepared for automation."

There needs to be a constant evolution of skills by embracing concepts like job rotation and fluid teams.

David believes that adopting a mix of traditional and new-age learning methodologies, digital skilling platforms, along with a thrust on building full-stack professionals and institutionalising continuous learning, will play a pivotal role in creating the right differentiation and staying ahead of the pack.

Technology is changing fast and it is imperative for mid-level managers to seek out continuous learning opportunities.

Anand Venkateswaran, Vice President, Finance and Member, Board of Directors, Target India, says: "We expose our managers to the latest technology trends, and provide opportunities where they can leverage these learnings to support personal development and drive business outcomes."

He adds that senior managers are given the opportunity to mentor and interact with startups to be in touch with the latest and most relevant industry developments.

Some of the technologies that managers have to reskill for are Machine Learning, Natural Language Processing (NLP), Python, Java, open source technologies and Computer Generated Imagery (CGI).

However, all this boils down to three things: leadership, an employees ability to learn and the corporates ability to train people quickly.

"Employees should keep in mind that if they are working for a CEO or a corporate that does not believe in reskilling them, they must quit before they are sacked," Sridhar says.

According to Accenture, companies will have to adapt their training, performance and talent acquisition strategies to account for a newfound emphasis on work that hinges on human judgment and skills, including experimentation and collaboration. Their survey on the impact of AI on management revealed the following:

AI will put an end to administrative management work. Managers spend most of their time on tasks at which they know AI will excel in the future. Specifically, surveyed managers expect that AI's greatest impact will be on administrative coordination and control tasks, such as scheduling, resource allocation and reporting.

There is both readiness and resistance in the ranks. Unlike their counterparts in the C-suite, lower-level managers are much more skeptical about AIs promise and express greater concern over issues related to privacy. Younger managers are more receptive than older ones. And managers in emerging economies seem ready to leapfrog the competition by embracing AI.

The next-generation manager will thrive on judgment work. AI-driven upheaval will place a higher premium on what we call judgment work: the application of human experience and expertise to critical business decisions and practices when the information available is insufficient to suggest a successful course of action. This kind of work will require new skills and mindsets.

A people-first strategy is essential. Replacing people with machines is not a goal in itself. While artificial intelligence enables cost-cutting automation of routine work, it also empowers value-adding augmentation of human capabilities. The findings suggest that augmentation, putting people first and using AI to amplify what they can achieve, holds the biggest potential for value creation in management settings.

Executives must start experimenting with AI. It's high time executives and organisations start experimenting with AI and learning from these experiences. If the labour market's shortage of analytical talent is any guide, executives can ill afford to wait and see if they and their managers are equipped to work with AI and capable of acquiring the essential skills and work approaches.

With smart automation, quick robots and intelligent software bots becoming an integral part of the workforce, it's critical that organisations and employees collaborate to forge the path ahead. It's the only way to deal with the charge of the light brigade.

Link:

In a world where machines and AI rule, re-skilling is the only way out - YourStory.com

NASA are figuring out how to use AI to build autonomous space … – ScienceAlert

Adding artificial intelligence to the machines we send out to explore space makes a lot of sense, as it means they can make decisions without waiting for instructions from Earth, and now NASA scientists are trying to figure out how it could be done.

As we send out more and more probes into space, some of them may have to operate completely autonomously, reacting to unknown and unexplained scenarios when they get to their destination and that's where AI comes in.

Steve Chien and Kiri Wagstaff from NASA's Jet Propulsion Laboratory think that these machines will also have to learn as they go, adapting to what they find beyond the reaches of our most powerful telescopes.

"By making their own exploration decisions, robotic spacecraft can conduct traditional science investigations more efficiently and even achieve otherwise impossible observations, such as responding to a short-lived plume at a comet millions of miles from Earth," write the researchers.

One example they give is AI that can tell the difference between a storm and normal weather conditions on a distant planet, making the readings that are being taken much more useful to scientists back home.

Just like Google uses AI to recognise dogs and cats in photos, an explorer buggy could learn to tell the difference between snow and ice, or between running water and still water, adding extra value and meaning to the data it gathers.

The researchers suggest AI-enabled probes could reach as far as Alpha Centauri, some 4.24 light-years away from Earth. Communications across that distance would be received by the generation after the scientists who launched the mission in the first place, so giving the probe a mind of its own would certainly speed up the decision-making process.

The next generation of AI robots will have to be able to detect "features of interest", detect unforeseen features, process and analyse data, and adapt their original plans where necessary, say the researchers.

And when smart probes get the chance to work together, the effects of AI will be even more powerful, as these artificial minds will be able to put their heads together to overcome challenges.

We are already seeing some of this artificial intelligence and autonomy out in space today. The Mars Curiosity rover has software on board that helps it to pick promising targets for its ChemCam, a device that studies rocks and other geological features on the Red Planet.

By making its own decisions rather than always waiting for instructions from Earth, Curiosity is now much better at finding significant targets and is able to gather a larger haul of data, according to researchers.

Meanwhile the next rover to be sent to Mars in 2020 will be able to adjust its data collection processes based on the resources available, report Chien and Wagstaff.

In time, AI is going to become more and more important to space travel, the researchers say, and as artificial intelligence makes big strides forward here on Earth it's also set to have a big role in how we explore the rest of the Universe.

The research has been published in Science Robotics.

Here is the original post:

NASA are figuring out how to use AI to build autonomous space ... - ScienceAlert