Why testing must address the trust-based issues surrounding artificial intelligence – Aerospace Testing International

Words by Jonathan Dyble

Aviation celebrates its 118th birthday this year. Over the years there have been many milestone advances, and today engineers are still using the latest technology to enhance performance and transform capabilities in both the defence and commercial sectors.

Artificial intelligence (AI) is arguably one of the most exciting areas of innovation, and, as in many other sectors, it is garnering a great deal of attention in aviation.

Powered by significant advances in computer processing power, AI is prompting aviation experts to probe opportunities that once seemed impossible. It is worth noting, however, that AI-related transformation in aviation remains in its infant stages.

Given the huge risks and costs involved, full confidence and trust are required before autonomous systems can be deployed at scale. As a result, AI remains something of a novelty in the aviation industry at present, but attention is growing, progress continues to be made and the tide is beginning to turn.

One individual championing AI developments in aviation is Luuk Van Dijk, CEO and founder of Daedalean, a Zurich-based startup specializing in the autonomous operation of aircraft.

While Daedalean is focused on developing software for pilotless and affordable aircraft, Van Dijk is a staunch advocate of erring on the side of caution when it comes to deploying AI in an aviation environment. "We have to be careful of what we mean by artificial intelligence," says Van Dijk. "Any sufficiently advanced technology is indistinguishable from magic, and AI has always been referred to as the kind of thing we can almost but not quite do with computers. By that definition, AI has unlimited possible uses, but unfortunately none are ready today."

"When we look at things that have only fairly recently become possible, understanding an image for example, that is obviously massively useful to people. But these are applications of modern machine learning, and it is these that currently dominate the meaning of the term AI."

While such technologies remain somewhat in their infancy, the potential is clear to see.

Van Dijk says, "When we consider a pilot, especially in VFR, they use their eyes to see where they are, where they can fly and where they can land. Systems that assist with these functions, such as GPS and radio navigation, TCAS, ADS-B, PAPI [precision approach path indicator] and ILS, are limited. Strictly speaking they are all optional, and none can replace the use of your eyes."

"With AI, imagine that you can now use computer vision and machine learning to build systems that can help the pilot to see. That creates significant opportunities and possibilities: it can reduce the workload in regular flight and in contingencies, and therefore has the potential to make flying much safer and easier."

A significant reason why such technologies have not yet made their way into the cockpit is a lack of trust, something that must be earned through rigorous, extensive testing. Yet mechanical systems and software are tested in significantly different ways, because of an added layer of complexity in the latter.

"For any structural or mechanical part of an aircraft there are detailed protocols on how to conduct tests that are statistically sound and give you enough confidence to certify the system," says Van Dijk. "Software is different. It is very hard to test because the failures typically depend on rare events in a discrete input space."

This was a problem that Daedalean encountered in its first project with the European Union Aviation Safety Agency (EASA), working to explore the use of neural networks in developing systems that measurably outperform humans on visual tasks such as navigation, landing guidance and traffic detection. While the software design assurance approach that stems from the Software Considerations in Airborne Systems and Equipment Certification (DO-178C) works for more traditional software, its guidance was deemed to be only partially applicable to machine-learned systems.

"Instead of having human programmers translate high-level functional and safety requirements into low-level design requirements and computer code, in machine learning a computer explores the design space of possible solutions given a very precisely defined target function that encodes the requirements," says Van Dijk.

"If you can formulate your problem in this form, then it can be a very powerful technique, but you have to somehow come up with the evidence that the resulting system is fit for purpose and safe for use in the real world."

"To achieve this, you have to show that the emergent behavior of a system meets the requirements. That's not trivial and actually requires more care than building the system in the first place."
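The contrast Van Dijk draws can be sketched in a few lines: rather than hand-writing a rule, you define a target function that encodes the requirement and let the computer search the design space for a solution. Everything below (the toy data, the random-search method, the tolerance) is invented for illustration and is not Daedalean's process.

```python
import random

# Toy requirement: estimate a quantity y from a feature x, where truly y = 2x.
data = [(x, 2.0 * x) for x in range(10)]

def loss(w):
    """Target function encoding the requirement: mean squared error of w*x."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

# Crude random search over the design space of candidate solutions
random.seed(0)
best = 0.0
for _ in range(2000):
    candidate = best + random.uniform(-0.1, 0.1)
    if loss(candidate) < loss(best):
        best = candidate

# Showing that the *emergent* behaviour meets the requirement is a
# separate verification step, done here against held-back knowledge.
print(f"learned weight: {best:.2f}")
```

The search never sees the rule "multiply by two"; it only sees the target function, which is why the burden of evidence shifts to verifying the behaviour that emerges.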

Building on these discoveries, Daedalean recently developed and released a joint report with EASA, with the aim of maturing the concept of learning assurance and pinpointing trustworthy building blocks upon which AI applications could be tested thoroughly enough to be safely and confidently incorporated into an aircraft. "The underlying statistical nature of machine learning systems actually makes them very conducive to evidence and arguments based on sufficient testing," Van Dijk confirms, summarizing the report's findings.

"The requirements on the system then become traceable to requirements on the test data: you have to show that your test data is sufficiently representative of the data you will encounter during an actual flight. For that you must show that you have sampled any data with independence, a term familiar to those versed in the art of design assurance, but one that has a much stricter mathematical meaning in this context."
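At heart, the representativeness argument Van Dijk describes is a statistical comparison between two samples. As a purely illustrative sketch (the feature, the numbers and any threshold are invented, and real certification evidence would be far more involved), one could compare the distribution of a feature in the test set against data gathered in flight using a two-sample Kolmogorov-Smirnov statistic:

```python
def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap between
    the two empirical cumulative distribution functions (0 = identical
    distributions, 1 = completely disjoint)."""
    xs = sorted(set(sample_a) | set(sample_b))
    def ecdf(sample, x):
        return sum(1 for v in sample if v <= x) / len(sample)
    return max(abs(ecdf(sample_a, x) - ecdf(sample_b, x)) for x in xs)

# e.g. image brightness values: training/test corpus vs. actual flight data
training_brightness = [0.42, 0.51, 0.47, 0.55, 0.49, 0.44, 0.53]
flight_brightness   = [0.45, 0.50, 0.48, 0.52, 0.46]

gap = ks_statistic(training_brightness, flight_brightness)
print(f"KS gap: {gap:.2f}")  # a small gap suggests similar distributions
```

In practice a certification argument would cover many features, far larger samples, and a formal notion of independent sampling, but the traceability idea is the same: the claim about the system becomes a claim about the data.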

Another person helping to make the use of AI in the cockpit a reality is Dan Javorsek, Commander of Detachment 6, Air Force Operational Test and Evaluation Center (AFOTEC) at Nellis Air Force Base in Nevada. Javorsek is also director of the F-35 US Operational Test Team and previously worked as a program manager for the Defense Advanced Research Projects Agency (DARPA) within its Strategic Technology Office.

Much like Van Dijk, Javorsek points to trust as the key element in ensuring potentially transformational AI and automated systems become accepted and incorporated into future aircraft. Furthermore, he believes this will be hard to achieve using current test methods. "Traditional trust-based research relies heavily on surveys taken after test events. These proved to be largely inadequate for a variety of reasons, but most notably their lack of diagnostics during different phases of a dynamic engagement," says Javorsek.

As part of his research, Javorsek attempted to address this challenge directly by building a trust measurement mechanism based on a pilot's physiology. Pilots' attention was divided between two primary tasks concurrently, forcing them to decide which task to accomplish and which to offload to an accompanying autonomous system.

"Through these tests we were able to measure a host of physiological indicators shown by the pilots, from their heart rate and galvanic skin response to their gaze and pupil dwell times on different aspects of the cockpit environment," Javorsek says.

"As a result, we end up with a metric for which contextual situations and which autonomous system behaviors give rise to manoeuvres that the pilots appropriately trust."

However, a key challenge that Javorsek encountered during this research was the difficulty machines would have in assessing hard-to-anticipate events in what he describes as very messy military situations.

Real-world scenarios often throw up unusual tactics and situations, such as stale tracks and the presence of significant denial and deception on both sides of an engagement. In addition, electronic jammers and repeaters are often used to attempt to mimic and confuse an adversary.

"This can lead to an environment prone to accidental fratricide that can be challenging for even the most seasoned and experienced pilots," Javorsek says. "As a result, aircrews need to be very aware of the limitations of any autonomous system they are working with and employing on the battlefield."

It is perhaps for these reasons that Nick Gkikas, a systems engineer in human factors engineering and flight deck at Airbus Defence and Space, argues that the most effective use of AI and machine learning is currently outside the cockpit. "In aviation, AI and machine learning is most effective when it is used offline and on the ground in managing and exploiting big data from aircraft health and human-in/on-the-loop mission performance during training and operations," he says.

"In the cockpit, most people imagine the implementation of machine learning as an R2-D2 type of robot assistant. While such a capability may be possible today, it is currently still limited by the amount of processing power available on-board and the development of effective human-machine interfaces with machine agents in the system."

Gkikas agrees with Javorsek and Van Dijk that AI has not yet been sufficiently developed to be part of the cockpit in an effective and safe manner. Until such technologies are more advanced, effectively tested and backed by even greater computing power, it seems AI may be better placed in other aviation applications, such as weapons systems.

Javorsek also believes it will be several years before AI and machine learning software can successfully and dynamically control the manoeuvres of fleet aircraft traditionally assigned to contemporary manned fighters. However, there is consensus amongst experts that such technologies have undoubted potential to be developed further and eventually incorporated within the cockpit of future aircraft.

"For AI in the cockpit, and in aircraft in general, I am confident we will see unmanned drones, eVTOL aircraft and similarly transformative technologies being rolled out beyond test environments in the not-so-distant future," concludes Van Dijk.


Drive-thrus may soon use artificial intelligence and facial recognition – Deseret News

I live in a small town and am thrilled when the clerk at the local convenience store knows me so well that he can predict my drink order before I speak. But if every fast-food restaurant in the country also knew my drink order when I hit the drive-thru, would I get the same satisfaction or be creeped out?

We could soon find out, because McDonald's has been testing drive-thru ordering using artificial intelligence. In some locations, the company is scanning license plates (with the customer's permission) to help personalize the experience and predict orders, according to CNN Business. McDonald's CEO Chris Kempczinski told CNBC that 10 locations in Chicago are using voice assistants to take orders in the drive-thru lanes and they are seeing about 85% accuracy.

The concept could spread globally sooner than you might think. Last week, McDonald's announced in a statement that it would be partnering with IBM to further accelerate the development and deployment of its Automated Order Taking technology.

Jamie Richardson, a vice president with burger chain White Castle, told CNN Business it's using artificial intelligence at an Indiana location, hoping it helps guarantee the visit is positive.

"The thought is to make sure that it's friendly. They remember me, they know who I am," he said.

And last year, amid COVID-19 and tight restrictions in California, several restaurants started offering face-pay for a completely contactless experience. PopPay is a service that allows users to connect a credit card and selfie to their account and then have their face scanned at participating locations to pay. There are dozens of restaurants and retailers in the Pasadena area offering the service at a kiosk, drive-thru or at the counter. The customer can also link loyalty accounts to their profile and receive a text when the transaction is complete.

A Russian bank currently offers a similar feature at some supermarkets in the country. Reuters reports Sberbank plans to offer facial recognition payments in 100 grocery locations and that a transaction would only take three seconds compared to 34 seconds when paying with cash and 15 seconds when using a payment card.

While this all may seem extremely convenient, many are concerned about privacy.

One Illinois man is suing McDonald's for using voice recognition technology in the drive-thru. Restaurant Business reported that Shannon Carpenter's lawsuit claims McDonald's violated the state's Biometric Information Privacy Act, which requires companies to get consent before collecting biometric information.

Some schools in Scotland had hoped to use facial recognition in their lunchrooms for contactless payments and to get kids through the line faster. But the BBC reported that the schools in North Ayrshire have decided to push pause for now after the U.K.'s Information Commissioner's Office suggested the technology could be intrusive.

Facial recognition opportunities aren't all about food, either. The travel industry is getting in on it as well.

Delta Air Lines travelers in Atlanta will have the choice of showing their face to a camera instead of presenting an ID or handing over a boarding pass. To participate, passengers will need to opt in and store their TSA PreCheck membership and a SkyMiles number in the Fly Delta app.

At the airport, they can look into a camera at bag drop, security checkpoints and the gate instead of showing a boarding pass or ID. The image is encrypted and sent to U.S. Customs and Border Protection's facial biometric matching service. This is completely voluntary and will roll out for security in the coming weeks and for checking bags and boarding before the end of the year.

In Russia's capital, the Moscow Metro is using Face Pay at all its stations, according to The Moscow Times. After passengers use the app to connect their photo, credit card and transit card, they can simply look into a camera to enter.

Not everyone is excited about the rapid rise of facial recognition technology. A recent article from the National Law Review outlined several concerns, including hacking possibilities, accuracy issues and racial bias in some facial recognition algorithms.

If you or someone you know is concerned about too much facial recognition technology in the world, here's the perfect stocking stuffer. Reflectacles, anti-facial recognition glasses, have infrared-blocking lenses and reflective frames designed to fool any facial recognition system that comes your way. They may bring you some peace of mind, but it'll likely take you longer to get through airport security than the woman next to you letting TSA scan her face.


Artificial intelligence: The new face of education – Economic Times

Artificial intelligence is a field of study designed to infuse the power of thinking into machines. It can allow machines to understand our day-to-day activities and try to replicate them with the utmost simplicity. From understanding our natural language to optimising our present solutions, AI can be used in almost all industries. In general, AI entails taking aspects of human intellect and applying them as algorithms in a computer-friendly manner. Its output is already visible in text-to-speech services and voice assistants, which can simplify the easiest of tasks, such as calling or texting someone. Such a versatile tool is set to dominate the world's leading industries. To get there, AI must investigate how the human mind thinks, develops and makes decisions when attempting to resolve issues or carry out a project. The goal of AI is to advance technology by including functions related to human behaviours such as reasoning, learning and problem-solving.

AI in schools

AI is already a part of a host of different ecosystems, such as hospitals, factories and scientific laboratories, but its most striking use is in the business arena; from our super-fast food delivery apps to our responsive cab services, most of these latest offerings are integrated with AI. The notion of AI in schools, however, is often overlooked. It is usually imagined as replacing teachers with boring robots. What actually needs to be built is a system where teachers and online bots coexist to create learning personalised for each student. Personalised learning means giving each student a unique learning experience, one that begins with their doubts and queries being resolved and then focuses on improving their academic performance.

Back in the day, when the Google search engine was introduced, it identified the keywords used in a query and then displayed the results that matched those keywords. The latest software (voice assistants, mainly) instead identifies and processes intents, which is a much more effective and smarter approach. With this principle and methodology in mind, we can look at the use of AI in this budding sector. As with voice assistants, if we can find the intention behind a particular student's query, that is half the battle won. Identifying the lesson or sub-topic from which the question is being asked is therefore of paramount importance. The assistant (either voice or chat) can then be loaded with solutions for the major sub-topics and concepts, which should cover the bulk of queries. If a query is a bit out of the box, we can tackle it with a closure statement that includes useful links and documents to help students resolve it themselves. Since we are talking about a digital extension to classrooms, we can include external links to videos or even notes for the chapter, so students can clarify as much as they can by themselves.
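The intent-matching idea described above can be sketched very simply. The sub-topics, keywords and fallback message below are invented for illustration; a real assistant would use a trained intent classifier rather than keyword overlap:

```python
# Map a student's question to a known sub-topic by scoring keyword overlap.
INTENTS = {
    "photosynthesis": {"photosynthesis", "chlorophyll", "sunlight", "glucose"},
    "fractions":      {"fraction", "numerator", "denominator", "divide"},
}

# Closure statement for out-of-the-box queries, with pointers to materials
FALLBACK = ("I couldn't match that to a lesson. "
            "Here are the chapter notes and videos: <link>")

def answer(query):
    words = set(query.lower().split())
    # pick the sub-topic sharing the most keywords with the query
    best, score = max(((name, len(words & kws)) for name, kws in INTENTS.items()),
                      key=lambda pair: pair[1])
    if score == 0:
        return FALLBACK  # intent unknown: fall back gracefully
    return f"Loading the prepared explanation for '{best}'..."

print(answer("why do plants need sunlight for photosynthesis"))
```

Keyword overlap stands in here for the intent-processing step; the surrounding flow (match the sub-topic, serve prepared material, fall back to links) is the part the paragraph describes.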

The pandemic actually gave us a whole new perspective on online learning. With online meets replacing actual classes, both students and teachers now understand the workings of online classes very well. Now, with the pandemic in decline, we should not neglect the online mode of learning but instead nurture it with the use of AI. Permanent online classrooms can improve grades as well as the understanding of concepts, and can act as an additional platform that students can turn to whenever needed. These are just some ideas for making the best use of our resources; we still have a long way to go as far as developing AI technologies in India is concerned. AI is not something that just sits in Tony Stark's lab and can simulate time travel; it is something that takes days and months to develop, and even years to implement in each and every corner of our country.

Amit Kapoor is chair, Institute for Competitiveness, India and visiting scholar, Stanford University. Praveen Senthil is researcher at large at Institute for Competitiveness.


Maureen Dowd: What will a world of artificial intelligence look like? – Salt Lake Tribune

The first time I interviewed Eric Schmidt, a dozen years ago when he was the CEO of Google, I had a simple question about the technology that has grown capable of spying on and monetizing all our movements, opinions, relationships and tastes.

"Friend or foe?" I asked.

"We claim we're friends," Schmidt replied coolly.

Now that the former Google executive has a book out Tuesday, "The Age of AI," written with Henry Kissinger and Daniel Huttenlocher, I wanted to ask him the same question about AI: friend or foe?

"AI is imprecise, which means that it can be unreliable as a partner," he said when we met at his Manhattan office. "It's dynamic in the sense that it's changing all the time. It's emergent and does things that you don't expect. And, most importantly, it's capable of learning."

"It will be everywhere. What does an AI-enabled best friend look like, especially to a child? What does AI-enabled war look like? Does AI perceive aspects of reality that we don't? Is it possible that AI will see things that humans cannot comprehend?"

I agree with Elon Musk that when we build AI without a kill switch, we are "summoning the demon" and that humans could end up, as Steve Wozniak said, as the family pets. (If we're lucky.)

Talking about the alarms raised by the likes of Musk and Stephen Hawking, Schmidt said that they think that by unleashing AI, "eventually, you'll end up with a robot overlord that's 10 or 100 or 1,000 times smarter than the humans. My answer is different. I think all the evidence is that these AI systems are going to think, not like humans, but they're going to be very smart. We're going to have to coexist."

You don't think Siri and Alexa are going to kill us one night?

"No," he said. "But they might become your child's best friend."

Opinions on AI are wildly divergent. Jaron Lanier, the father of virtual reality, rolls his eyes at the digerati in Silicon Valley obsessed with the science-fiction fantasy of AI.

"It can sometimes become a giant, false god," he told me. "You've got these nerdy guys who have an awful reputation for how they treat women, who get to be the life creators. You women with your petty little biological wombs can't stand up to us. We're making the big life here. We're the supergods of the future."

We have known for a while that Silicon Valley is taking us down the drain. Preposterous claims that once could not have gotten traction on everything from Democratic pedophilia rings to rigged elections to vaccine conspiracy theories now spread at the speed of light. Teenage girls can be sent spiraling into depression by the glossy, deceptive world of Instagram, owned by the manipulative and greedy company formerly known as Facebook.

Schmidt said an Oxford student told him, about social media poison: "The union of boredom and anonymity is dangerous. Especially at the intersection of addiction and envy."

The question of whether we will lose control to AI may be passe. Technology is already manipulating us.

Schmidt admits that the lack of foresight among the lords of the cloud about where technology was headed was foolish.

"I'll say, 10 years ago, when I worked really hard on these social networks, maybe this is just naiveté, but we never thought that governments would use them against citizens, like in 2016, with interference from the Russians."

"We didn't think it would then stitch these special interest groups together with these violently strong belief systems. No one ever discussed it. I don't want to make the same mistake again with a new foundational technology."

He said the National Security Commission on Artificial Intelligence, which he chaired this year, concluded that America is still a little bit ahead of China in the technology race but that China is overinvesting against us. The authors write that they are most worried about other countries developing AI-facilitated weapons with substantial destructive potential that may be able to adapt and learn well beyond their intended targets.

"The first thing for us to look at between the U.S. and China is to make sure that there's no Dr. Strangelove scenario, a launch on a warning, to make sure there's time for human decision making," he said. "Let's imagine you're on a ship in the future and the little computer system says to the captain, 'You have 24 seconds before you're dead because the hypersonic missile is coming at you. You need to press this button now.' You want to trust the AI, but because of its imprecise nature, what if it makes a mistake?"

I asked if he thought Facebook could leave its troubles behind by changing its name to Meta.

"The problem is, what do you now call FAANG stocks? MAANG?" he said of the biggest tech stocks: Facebook, Apple, Amazon, Netflix and Google. "Google changed its name to Alphabet, and yet, Google was still Google."

And what's with that creepy metaverse that Mark Zuckerberg is trying to lure us into?

"All of the people who talk about metaverses are talking about worlds that are more satisfying than the current world: you're richer, more handsome, more beautiful, more powerful, faster. So, in some years, people will choose to spend more time with their goggles on in the metaverse. And who gets to set the rules? The world will become more digital than physical. And that's not necessarily the best thing for human society."

Schmidt said his book poses questions that cannot yet be answered.

Unfortunately for us, we wont know the answers until it is too late.


Maureen Dowd is a Pulitzer Prize-winning columnist for The New York Times.


New research center at UMass Amherst will use Artificial Intelligence to improve at-home care for elderly patients – The Boston Globe

What's next for the ever-developing industry of heartbeat-monitoring wristwatches and voice-responsive phones? According to the leaders of a newly endowed Massachusetts research center, the devices' built-in artificial intelligence could prove useful in improving the quality of at-home care for the elderly.

Unveiled this week, the Massachusetts AI and Technology Center for Connected Care in Aging and Alzheimer's Disease, based at the University of Massachusetts Amherst, will work with a confluence of new and existing technologies, drawing on AI to modernize the at-home care industry for those with age-related ailments and Alzheimer's disease patients.

The project seeks to address what its founders see as a major healthcare disparity that can leave the elderly with a vexing choice: stay home and receive a lower level of care, or leave home for proper treatment.

"More than 90% of older Americans would prefer to stay in their homes as they age," a press release announcing the new center said. "However, the prevalence of chronic illness, including Alzheimer's disease, can make the goal of successful aging at home out of reach without substantial support."

Computer scientists and doctors from Brigham and Women's Hospital, Massachusetts General Hospital, Brandeis University, and Northeastern University will partner on the research, which will be funded by roughly $20 million in grants from the National Institute on Aging distributed over the next five years.

"Artificial intelligence has the potential to transform important areas of science and medicine, but there is a critical need to bring the power of AI to the patients, caregivers and clinicians who need it most," Paul Anderson, senior vice president of research and education at Brigham and Women's Hospital, said in the press release. "This grant will allow experts from across our state to come together to help address this key gap."

If successful, the research would use AI to deliver, manage and adapt treatment and intervention regimes for those with age-related conditions. So what does that look like?

A key component, said Deepak Ganesan, a professor in UMass Amherst's Robert and Donna Manning College of Information and Computer Sciences, will be improving on the technologies that already exist in devices like smartphones and Apple Watches.

"[We] may look at leveraging existing mobile and wearable devices such as smartphones in new ways," he said in an email. "For example, voice-based interaction using a smartphone may be used to look at changes in the voice patterns that can be used to detect subtle changes in cognitive and physical function for patients with Alzheimer's."

Devices like Apple Watches and Fitbits, which track the steps of the wearer, can be inaccurate when used by older users, he said, because they are not calibrated to track lower speeds. And new sleep trackers can lose accuracy in users with sleep disorders or who wake up to take medications.

"Some of the focus will be on adapting the algorithms such that they can be more accurate when monitoring older adults with a range of impairments," Ganesan said.

The center will also work with new technologies, he said, like devices that allow for patient monitoring without requiring them to wear anything.

And a key component of the research will be distilling the data gathered from patient cohorts and presenting it to patients, caregivers, and clinicians in a digestible way. Together, the adapted technologies and data could create a new system for monitoring elderly patients who want to remain home that sends help when its needed.

"It's a difficult problem to develop AI-enhanced sensing technologies that work for people where they are," Ganesan said in the press release. "How do you get good, useful data? How do you analyze this data and present it to the patient, caregiver and clinician? And then how can you intervene in a timely manner when a problem develops?"

Andrew Brinker can be reached at andrew.brinker@globe.com. Follow him on Twitter at @andrewnbrinker.


FEATURE: How is artificial intelligence changing these five industries? – Nantwich News

Technology is growing at an exponential rate.

Smart devices are integrated into our everyday activities, from your home's heating system to the coffee machine.

Artificial intelligence, also known as AI, has developed so quickly that it's hard to keep track.

Many of us encounter AI on a daily basis without even realising it.

Technology has transformed all kinds of industries in recent years from retail to public transport.

AI includes robotics, machine learning, automation, natural language processing and much more.

Let's take a closer look at how AI has impacted these five industries.

Education

AI is not subject to the same day-to-day inconsistencies as human judgement (though it can inherit bias from its training data). It can analyse the profiles of children and produce challenges and solutions for each child.

Of course, a good teacher could do the same thing but it would take much longer.

AI is far more efficient and less likely to make a mistake.

AI plays a big role in the development of children these days and can help us identify learning difficulties.

We can also personalise teaching methods through AI. Everyone learns and tests differently.

With AI, we can adapt the classroom to each student and provide a bespoke learning experience.

Retail

Artificial intelligence can streamline processes and improve customer service.

We have all experienced the frustration of talking to a customer service robot.

In the future, AI will only enhance the customer service experience, and you will still get to talk to real people.

Hopefully, it will help you to access information much more easily and contact customer service reps.

Healthcare

There are likely to be more robots in surgery and virtual nurses.

Sounds terrifying, right? AI will make diagnoses, perform procedures and automate medication services.

Healthcare will become much more efficient, and hopefully, there will be fewer medical negligence cases.

Construction

AI is already embedded in construction power tools.

It can tell you the battery level, temperature and whether anything is broken within the tool.

AI can reduce the number of risks on construction sites and help workers to use tools safely.

But the benefits of machine learning don't stop at safety management.

Director of Product for Milwaukee Power Tools, Steve Matson, commented: "There is an interesting runway in terms of what we can do with the machine learning model when applied to locations."

The company has been incorporating new location technology into its tools, making them easier to find. Matson added: "There is a little bit more secret sauce on the horizon as it pertains to tools."

Public transport

AI analyses data to find the best routes available for public transport systems.

You can plan out your journey with the help of artificial intelligence. It will calculate traffic delays, accidents and any roadworks on your journey.
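Under the hood, this kind of journey planning is classic shortest-path search. As an illustrative sketch (not any particular transit system's implementation), the following Python toy models reported delays such as roadworks as extra edge weights in Dijkstra's algorithm; the network, times and delay values are invented for the example:

```python
import heapq

def fastest_route(graph, delays, start, end):
    """Dijkstra's shortest path where each edge's base travel time is
    increased by any reported delay (traffic, accident, roadworks)."""
    queue = [(0, start, [start])]  # (elapsed minutes, current stop, path so far)
    seen = set()
    while queue:
        time, node, path = heapq.heappop(queue)
        if node == end:
            return time, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, base_time in graph.get(node, []):
            cost = base_time + delays.get((node, neighbor), 0)
            heapq.heappush(queue, (time + cost, neighbor, path + [neighbor]))
    return None

# Toy network: stops A-D with travel times in minutes.
graph = {
    "A": [("B", 5), ("C", 10)],
    "B": [("D", 12)],
    "C": [("D", 4)],
}
# Roadworks add 8 minutes between C and D, so the route via B wins.
delays = {("C", "D"): 8}
print(fastest_route(graph, delays, "A", "D"))  # (17, ['A', 'B', 'D'])
```

With the 8-minute roadworks penalty, the planner routes via B; drop the delay and the normally faster route via C (14 minutes) wins again.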

People are far more likely to use public transport when they know exactly where to go and what service to get.

Say goodbye to scanning bus timetables, and hello to the new world of public transport.

Artificial intelligence has greatly benefited the modern world and improved the efficiency of numerous sectors.

Do your research and find out if AI can enhance your life today.


See the rest here:
FEATURE: How is artificial intelligence changing these five industries? - Nantwich News

Renowned Intelligent Speech and Artificial Intelligence Public Listed Company, iFLYTEK Enters a Memorandum of Understanding With Enterprise Singapore's…

SINGAPORE, November 03, 2021--(BUSINESS WIRE)--With a market capitalization of US$19 billion, the publicly listed iFLYTEK Co., Ltd. signed a Memorandum of Understanding (MOU) with XNode on Tuesday, 26 October 2021 at Pan Pacific Singapore during the iFLYTEK 1024 Global Developer Festival, organised by XNode Singapore and supported by the Singapore Deep-Tech Alliance. The MOU details XNode's support to facilitate iFLYTEK's entry and expansion into South-East Asian markets.

This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20211103005582/en/

MOU Signing between XNode and iFLYTEK representatives. Left: Ms Clara Chen, GM of XNode Singapore; Right: Ms Zhen Zhen from iFLYTEK. (Photo: Business Wire)

According to Clara Chen, General Manager of XNode Singapore, "Having built global companies from this city-state for the last seventeen years has taught me a few things:

Because startups are launched here to be global from the get-go, Singaporeans instinctively cater to varying economic developments and diverse cultures in building companies,

Localisation of the same product offerings, be it in China, India or Indonesia, is not just about translation, but about context, culture, brand perception, and geopolitical and market realities, and

Trying to conquer ASEAN alone is harder than trying to expand into large homogenous economies like China or the US".

"On the other hand, China's tech giants have a huge domestic market and the drive to put themselves on an international stage. In this regard, Singapore's internationalisation know-how could hugely benefit innovative Chinese companies and open new frontiers together with them."

iFLYTEK's vision of enabling machines to listen and speak, and to understand and think, is all about creating a better world with artificial intelligence. The company creates value by easing the burden on teachers and students in schools, teaching students according to their aptitudes, and in healthcare by providing better, faster responses to medical emergencies.


The simulcast festival had earlier called for entries for its themed challenge, aptly titled "A.I. for Intelligent Lifestyle Challenge". More than 9,000 applications and over 700 delivered projects were received for this global challenge. The final Top 3 teams were then selected to pitch at the live forum, attended virtually by hundreds of iFLYTEK senior executives, venture capitalists, media, academia, and government representatives.

For the challenge, the Top 3 teams were Tictag, led by Mr Kevin Quah; Gleematics, led by Ms Ada Lim; and TopView, led by Mr George Tharian. Judging the teams were Dr Pauline Tay, Executive Director, Head of Innovation Partnerships, Tech Connect SEA, UBS AG, Singapore; Mr Luuk Eliens, Founding Partner of Singapore Deep-Tech Alliance; and Ms Clara Chen, General Manager of XNode Singapore.

iFLYTEK demonstrated their proprietary real-time translation technology to event attendees at Hefei, China, during the live streaming of the Final Pitch and the "Smart A.I. Education" panel discussion.

The panel speakers for the "Smart A.I. Education" panel were Dr Andreas Deppeler, Adjunct Associate Professor and Deputy Director of the Centre on AI Technology for Humankind at NUS Business School, National University of Singapore; Mr Koo Sengmeng, Senior Deputy Director for AI Innovation at AI Singapore; and Dr James Ong, Founder and CEO of Origami and Adjunct Professor at Singapore University of Technology and Design. The panel was moderated by Mr Luuk Eliens, Founding Partner of Singapore Deep-Tech Alliance.

The panelists explored the existing use of A.I. in the education industry and its impact on educators and learners, as well as the ethics, risk and governance surrounding A.I.

The esteemed panel speakers concluded with the notion that the future of A.I. is open source, and that this future is already here.


Annex 1 - Panel Speakers Profiles

Dr Andreas Deppeler is an Adjunct Associate Professor and Deputy Director of the Centre on AI Technology for Humankind at NUS Business School, National University of Singapore. He teaches courses on technology, innovation, data value and digital strategy. His research focuses on the economic and societal implications of artificial intelligence. He received a Ph.D. in Theoretical Physics from Rutgers University.

Mr Koo Sengmeng is the Senior Deputy Director for AI Innovation at AI Singapore where he leads the talent and certification programmes and initiatives. He contributes regularly to the technology ecosystem and holds official appointments in PDPC AI Governance Roundtable, IEEE AI Standards Committee and ISO SC42. He co-founded AI Professionals Association in 2020 and holds advisory positions in Singapore Computer Society, Serious Games Association and Chulalongkorn University Technology Center.

Dr James Ong is an entrepreneur and community builder who has incubated, invested in and operated ventures across China and ASEAN. He is the founder and CEO of Origami, which provides strategy, technology and investment advisory services for venturing towards the Autonomous Enterprise; he founded the Artificial Intelligence International Institute (AIII), a think tank advocating Sustainable AI for Humanity, and is also an adjunct professor at SUTD.

Moderator - Mr Luuk Eliens, Founding Partner of Singapore Deep-Tech Alliance. Luuk started his first business at the age of seventeen and has been an entrepreneur ever since. To date, Luuk has founded three businesses in the fields of energy monitoring, education and software quality. As a business leader and entrepreneur with a demonstrated track record in innovation and technology across multiple industries and continents, Luuk has vast experience with innovation from inception to product launch and has guided hundreds of startups and corporate clients to growth and investment.

About iFLYTEK

Founded in 1999, iFLYTEK is a well-known intelligent speech and artificial intelligence publicly listed company in the Asia-Pacific region. Since its establishment, the company has been devoted to cornerstone technological research in speech and languages, natural language understanding, machine learning, machine reasoning and adaptive learning, and has maintained a world-leading position in those domains. The company actively promotes the development of A.I. products and their sector-based applications, with a vision of enabling machines to listen and speak, understand and think, creating a better world with artificial intelligence. In 2008, iFLYTEK went public on the Shenzhen Stock Exchange (stock code 002230).

For more information, please visit https://www.iflytek.com/

About XNode

XNode is Enterprise Singapore's Global Innovation Alliance (GIA) partner for China, helping Singapore technology startups and SMEs set up, test-bed and commercialise their solutions, or co-innovate with partners in Shanghai and Shenzhen, through a series of highly customised programmes and activities that grant them access to the Chinese market, including potential investors, partners, customers and talent resources.


About Singapore Deep-Tech Alliance

Singapore Deep-Tech Alliance (SDTA) is an impact-driven deep-tech venture builder that brings together entrepreneurs and technical talents to take advanced technologies from lab to market in nine months. The Alliance's mission is to reduce the environmental impact of businesses by empowering founders to rapidly build, validate and scale Industry 4.0 startups, supporting them with world-class technologies, investment, networks and skills. A public-private partnership between XNode, A*STAR and NHIC, SDTA partners include corporations such as OMRON, Micron, TÜV SÜD, Sunningdale Tech Ltd and PlanetSpark.



View source version on businesswire.com: https://www.businesswire.com/news/home/20211103005582/en/

Contacts

For media enquiries:

Ms Clara Chen, General Manager, XNode Singapore. E: Clara.Chen@theXNode.sg, M: +65 9437 1808

Mr Jeffery Wang, Director of South Pacific Region, International Cooperation Division, iFLYTEK Co., Ltd. E: hrwang3@iflytek.com, M: +86-186-559-591-00

Go here to see the original:
Renowned Intelligent Speech and Artificial Intelligence Public Listed Company, iFLYTEK Enters a Memorandum of Understanding With Enterprise Singapores...

7 Risks Of Artificial Intelligence You Should Know | Built In

Last March, at the South by Southwest tech conference in Austin, Texas, Tesla and SpaceX founder Elon Musk issued a friendly warning. "Mark my words," he said, billionaire casual in a furry-collared bomber jacket and days-old scruff, "AI is far more dangerous than nukes."

No shrinking violet, especially when it comes to opining about technology, the outspoken Musk has repeated a version of these artificial intelligence premonitions in other settings as well.

"I am really quite close to the cutting edge in AI, and it scares the hell out of me," he told his SXSW audience. "It's capable of vastly more than almost anyone knows, and the rate of improvement is exponential."

Musk, though, is far from alone in his exceedingly skeptical (some might say bleakly alarmist) views. A year prior, the late physicist Stephen Hawking was similarly forthright when he told an audience in Portugal that AI's impact could be cataclysmic unless its rapid development is strictly and ethically controlled.

"Unless we learn how to prepare for, and avoid, the potential risks," he explained, "AI could be the worst event in the history of our civilization."

Considering the number and scope of unfathomably horrible events in world history, that's really saying something.

And in case we haven't driven home the point quite firmly enough, research fellow Stuart Armstrong from the Future of Life Institute has spoken of AI as an extinction risk were it to go rogue. Even nuclear war, he said, is on a different level destruction-wise because it would kill only a relatively small proportion of the planet. Ditto pandemics, even at their most virulent.

"If AI went bad, and 95 percent of humans were killed," he said, "then the remaining five percent would be extinguished soon after. So despite its uncertainty, it has certain features of very bad risks."

How, exactly, would AI arrive at such a perilous point? Cognitive scientist and author Gary Marcus offered some details in an illuminating 2013 New Yorker essay. The smarter machines become, he wrote, the more their goals could shift.

Once computers can effectively reprogram themselves, and successively improve themselves, leading to a so-called technological singularity or intelligence explosion, the risks of machines outwitting humans in battles for resources and self-preservation cannot simply be dismissed.

As AI grows more sophisticated and ubiquitous, the voices warning against its current and future pitfalls grow louder. Whether it's the increasing automation of certain jobs, gender and racial bias issues stemming from outdated information sources or autonomous weapons that operate without human oversight (to name just a few), unease abounds on a number of fronts. And we're still in the very early stages.

The tech community has long debated the threats posed by artificial intelligence. Automation of jobs, the spread of fake news and a dangerous arms race of AI-powered weaponry have been proposed as a few of the biggest dangers posed by AI.

Destructive superintelligence, a.k.a. artificial general intelligence that's created by humans and escapes our control to wreak havoc, is in a category of its own. It's also something that might or might not come to fruition (theories vary), so at this point it's less a risk than a hypothetical threat, and an ever-looming source of existential dread.

Job automation is generally viewed as the most immediate concern. It's no longer a matter of if AI will replace certain types of jobs, but to what degree. In many industries, particularly but not exclusively those whose workers perform predictable and repetitive tasks, disruption is well underway. According to a 2019 Brookings Institution study, 36 million people work in jobs with high exposure to automation, meaning that before long at least 70 percent of their tasks, ranging from retail sales and market analysis to hospitality and warehouse labor, will be done using AI. An even newer Brookings report concludes that white-collar jobs may actually be most at risk. And per a 2018 report from McKinsey & Company, the African American workforce will be hardest hit.

"The reason we have a low unemployment rate, which doesn't actually capture people that aren't looking for work, is largely that lower-wage service sector jobs have been pretty robustly created by this economy," renowned futurist Martin Ford told Built In. "I don't think that's going to continue."

As AI robots become smarter and more dexterous, he added, the same tasks will require fewer humans. And while it's true that AI will create jobs, an unspecified number of which remain undefined, many will be inaccessible to less educationally advanced members of the displaced workforce.

"If you're flipping burgers at McDonald's and more automation comes in, is one of these new jobs going to be a good match for you?" Ford said. "Or is it likely that the new job requires lots of education or training, or maybe even intrinsic talents, really strong interpersonal skills or creativity, that you might not have? Because those are the things that, at least so far, computers are not very good at."

John C. Havens, author of Heartificial Intelligence: Embracing Humanity and Maximizing Machines, calls bull on the theory that AI will create as many or more jobs than it replaces.

About four years ago, Havens said, he interviewed the head of a law firm about machine learning. The man wanted to hire more people, but he was also obliged to achieve a certain level of returns for his shareholders. A $200,000 piece of software, he discovered, could take the place of ten people drawing salaries of $100,000 each. That meant he'd save $800,000. The software would also increase productivity by 70 percent and eradicate roughly 95 percent of errors. "From a purely shareholder-centric, single bottom-line perspective," Havens said, "there is no legal reason that he shouldn't fire all the humans." Would he feel bad about it? Of course. But that's beside the point.

Even professions that require graduate degrees and additional post-college training aren't immune to AI displacement. In fact, technology strategist Chris Messina said, some of them may well be decimated. AI already is having a significant impact on medicine. Law and accounting are next, Messina said, the former being poised for a massive shakeup.

"Think about the complexity of contracts, and really diving in and understanding what it takes to create a perfect deal structure," he said. "It's a lot of attorneys reading through a lot of information, hundreds or thousands of pages of data and documents. It's really easy to miss things. So AI that has the ability to comb through and comprehensively deliver the best possible contract for the outcome you're trying to achieve is probably going to replace a lot of corporate attorneys."

Accountants should also prepare for a big shift, Messina warned. Once AI is able to quickly comb through reams of data to make automatic decisions based on computational interpretations, human auditors may well be unnecessary.

While job loss is currently the most pressing issue related to AI disruption, it's merely one among many potential risks. In a February 2018 paper titled "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation," 26 researchers from 14 institutions (academic, civil and industry) enumerated a host of other dangers that could cause serious harm, or, at minimum, sow minor chaos, in less than five years.

Malicious use of AI, they wrote in their 100-page report, "could threaten digital security (e.g. through criminals training machines to hack or socially engineer victims at human or superhuman levels of performance), physical security (e.g. non-state actors weaponizing consumer drones), and political security (e.g. through privacy-eliminating surveillance, profiling, and repression, or through automated and targeted disinformation campaigns)."

In addition to its more existential threat, Ford is focused on the way AI will adversely affect privacy and security. A prime example, he said, is China's Orwellian use of facial recognition technology in offices, schools and other venues. But that's just one country. A whole ecosphere of companies specialize in similar tech and sell it around the world.

What we can so far only guess at is whether that tech will ever become normalized. As with the internet, where we blithely sacrifice our digital data at the altar of convenience, will round-the-clock, AI-analyzed monitoring someday seem like a fair trade-off for increased safety and security despite its nefarious exploitation by bad actors?

"Authoritarian regimes use or are going to use it," Ford said. "The question is, how much does it invade Western countries, democracies, and what constraints do we put on it?"

AI will also give rise to hyper-realistic social media personalities that are very difficult to distinguish from real ones, Ford said. Deployed cheaply and at scale on Twitter, Facebook or Instagram, they could conceivably influence an election.

The same goes for so-called audio and video deepfakes, created by manipulating voices and likenesses. The latter is already making waves. But the former, Ford thinks, will prove immensely troublesome. Using machine learning, a subset of AI that's involved in natural language processing, an audio clip of any given politician could be manipulated to make it seem as if that person spouted racist or sexist views when in fact they uttered nothing of the sort. If the clip's quality is high enough so as to fool the general public and avoid detection, Ford added, it could completely derail a political campaign.

And all it takes is one success.

"From that point on," he noted, "no one knows what's real and what's not. So it really leads to a situation where you literally cannot believe your own eyes and ears; you can't rely on what, historically, we've considered to be the best possible evidence... That's going to be a huge issue."

Lawmakers, though frequently less than tech-savvy, are acutely aware and pressing for solutions.

Widening socioeconomic inequality sparked by AI-driven job loss is another cause for concern. Along with education, work has long been a driver of social mobility. However, when it's a certain kind of work, the predictable, repetitive kind that's prone to AI takeover, research has shown that those who find themselves out in the cold are much less apt to get or seek retraining compared to those in higher-level positions who have more money. (Then again, not everyone believes that.)

Various forms of AI bias are detrimental, too. Speaking recently to the New York Times, Princeton computer science professor Olga Russakovsky said it goes well beyond gender and race. In addition to data and algorithmic bias (the latter of which can amplify the former), AI is developed by humans and humans are inherently biased.

"A.I. researchers are primarily people who are male, who come from certain racial demographics, who grew up in high socioeconomic areas, primarily people without disabilities," Russakovsky said. "We're a fairly homogeneous population, so it's a challenge to think broadly about world issues."

In the same article, Google researcher Timnit Gebru said the root of bias is social rather than technological, and called scientists like herself some of the most dangerous people in the world, because we have this illusion of objectivity. The scientific field, she noted, has to be situated in trying to understand the social dynamics of the world, because most of the radical change happens at the social level.

And technologists aren't alone in sounding the alarm about AI's potential socio-economic pitfalls. Along with journalists and political figures, Pope Francis is also speaking up, and he's not just whistling Sanctus. At a late-September Vatican meeting titled "The Common Good in the Digital Age," Francis warned that AI has the ability to circulate tendentious opinions and false data that could poison public debates and even manipulate the opinions of millions of people, to the point of endangering the very institutions that guarantee peaceful civil coexistence.

"If mankind's so-called technological progress were to become an enemy of the common good," he added, "this would lead to an unfortunate regression to a form of barbarism dictated by the law of the strongest."

A big part of the problem, Messina said, is the private sector's pursuit of profit above all else. "Because that's what they're supposed to do," he said. "And so they're not thinking of, 'What's the best thing here? What's going to have the best possible outcome?'"

"The mentality is, 'If we can do it, we should try it; let's see what happens,'" he added. "And if we can make money off it, we'll do a whole bunch of it. But that's not unique to technology. That's been happening forever."

Not everyone agrees with Musk that AI is more dangerous than nukes, including Ford. But what if AI decides to launch nukes, or, say, biological weapons, sans human intervention? Or what if an enemy manipulates data to return AI-guided missiles whence they came? Both are possibilities. And both would be disastrous. The more than 30,000 AI/robotics researchers and others who signed an open letter on the subject in 2015 certainly think so.

The key question for humanity today is whether to start a global AI arms race or to prevent it from starting, they wrote. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.

(The U.S. military's proposed budget for 2020 is $718 billion. Of that amount, nearly $1 billion would support AI and machine learning for things like logistics, intelligence analysis and, yes, weaponry.)

Earlier this year, a story in Vox detailed a frightening scenario involving the development of a sophisticated AI system with the goal of, say, estimating some number with high confidence. The AI realizes it can achieve more confidence in its calculation if it uses all the worlds computing hardware, and it realizes that releasing a biological superweapon to wipe out humanity would allow it free use of all the hardware. Having exterminated humanity, it then calculates the number with higher confidence.

That's jarring, sure. But rest easy. In 2012 the Obama Administration's Department of Defense issued a directive regarding "Autonomy in Weapon Systems" that included this line: "Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force."

And in early November of this year, a Pentagon group called the Defense Innovation Board published ethical guidelines regarding the design and deployment of AI-enabled weapons. According to the Washington Post, however, the board's recommendations are in no way legally binding. It now falls to the Pentagon to determine how and whether to proceed with them.

Well, that's a relief. Or not.

Have you ever considered that algorithms could bring down our entire financial system? That's right, Wall Street: you might want to take notice. Algorithmic trading could be responsible for our next major financial crisis in the markets.

What is algorithmic trading? This type of trading occurs when a computer, unencumbered by the instincts or emotions that can cloud a human's judgment, executes trades based on pre-programmed instructions. These computers can make extremely high-volume, high-frequency and high-value trades that can lead to big losses and extreme market volatility. Algorithmic high-frequency trading (HFT) is proving to be a huge risk factor in our markets. HFT is essentially when a computer places thousands of trades at blistering speeds with the goal of selling a few seconds later for small profits. Thousands of these trades every second can add up to a pretty hefty chunk of change. The issue with HFT is that it doesn't take into account how interconnected the markets are, or the fact that human emotion and logic still play a massive role in our markets.
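To make "pre-programmed instructions" concrete, here is a deliberately simplified Python sketch of one common rule family, mean reversion: the program watches a rolling average and fires a buy or sell signal the instant a price deviates too far from it. Real HFT systems react in microseconds and layer on risk controls; the window size, threshold and price ticks below are illustrative assumptions, not any real strategy:

```python
from collections import deque

def mean_reversion_signals(prices, window=5, threshold=0.02):
    """Emit (tick index, 'buy'/'sell') whenever the latest price deviates
    from the rolling mean by more than the threshold fraction. A toy
    stand-in for the pre-programmed rules an algorithmic trader runs on."""
    recent = deque(maxlen=window)
    signals = []
    for i, price in enumerate(prices):
        if len(recent) == window:
            mean = sum(recent) / window
            if price < mean * (1 - threshold):
                signals.append((i, "buy"))   # price dipped below the mean
            elif price > mean * (1 + threshold):
                signals.append((i, "sell"))  # price spiked above the mean
        recent.append(price)
    return signals

# A quiet stretch, a dip at tick 5, a spike at tick 7.
ticks = [100, 100, 101, 100, 100, 97, 100, 104]
print(mean_reversion_signals(ticks))  # [(5, 'buy'), (7, 'sell')]
```

The point of the sketch is the failure mode the article describes: the rule sees only its own price series, so a dip caused by panic in a connected market still reads as a pure "buy" signal, and thousands of machines running similar rules can feed on each other's orders.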

A sell-off of millions of shares in the airline market could potentially scare humans into selling off their shares in the hotel industry, which in turn could snowball people into selling off their shares in other travel-related companies, which could then affect logistics companies, food supply companies, etc.

Take the Flash Crash of May 2010 as an example. Toward the end of the trading day, the Dow Jones plunged nearly 1,000 points (more than $1 trillion in value) before rebounding toward normal levels just 36 minutes later. What caused this crash? A London-based trader named Navinder Singh Sarao triggered it, and HFT computers then made it worse. Sarao reportedly used a spoofing algorithm that placed an order for thousands of stock index futures contracts, betting that the market would fall. Instead of going through with the bet, Sarao planned to cancel the order at the last second and buy the lower-priced stocks being sold off because of his original bet. Other traders and HFT computers saw this $200 million bet and took it as a sign that the market was going to tank. In turn, HFT computers began one of the biggest stock sell-offs in history, causing a brief loss of more than $1 trillion globally.

Financial HFT algorithms aren't always correct, either. We view computers as the end-all-be-all when it comes to being correct, but AI is still only as smart as the humans who programmed it. In 2012, Knight Capital Group experienced a glitch that put it on the verge of bankruptcy. Knight's computers mistakenly streamed thousands of orders per second into the NYSE, causing mass chaos for the company. The HFT algorithms executed an astounding 4 million trades of 397 million shares in only 45 minutes. The volatility created by this computer error led to Knight losing $460 million overnight and having to be acquired by another firm. Errant algorithms obviously have massive implications for shareholders and the markets themselves, and nobody learned this lesson harder than Knight.

Many believe the only way to prevent or at least temper the most malicious AI from wreaking havoc is some sort of regulation.

"I am not normally an advocate of regulation and oversight (I think one should generally err on the side of minimizing those things), but this is a case where you have a very serious danger to the public," Musk said at SXSW.

"It needs to be a public body that has insight and then oversight to confirm that everyone is developing AI safely. This is extremely important."

Ford agrees, with a caveat: regulation of AI implementation is fine, he said, but not of the research itself.

"You regulate the way AI is used," he said, "but you don't hold back progress in basic technology. I think that would be wrong-headed and potentially dangerous."

Any country that lags in AI development, after all, is at a distinct disadvantage militarily, socially and economically. The solution, Ford continued, is selective application:

"We decide where we want AI and where we don't; where it's acceptable and where it's not. And different countries are going to make different choices. So China might have it everywhere, but that doesn't mean we can afford to fall behind them in the state of the art."

Speaking about autonomous weapons at Princeton University in October, American General John R. Allen emphasized the need for a robust international conversation that can embrace what this technology is. If necessary, he went on, there should also be a conversation about how best to control it, be that a treaty that fully bans AI weapons or one that permits only certain applications of the technology.

For Havens, safer AI starts and ends with humans. His chief focus, upon which he expounds in his 2016 book, is this: how will machines know what we value if we don't know ourselves? In creating AI tools, he said, it's vitally important to honor end-user values with a human-centric focus rather than fixating on short-term gains.

"Technology has been capable of helping us with tasks since humanity began," Havens wrote in Heartificial Intelligence. "But as a race we've never faced the strong possibility that machines may become smarter than we are or be imbued with consciousness. This technological pinnacle is an important distinction to recognize, both to elevate the quest to honor humanity and to best define how AI can evolve it. That's why we need to be aware of which tasks we want to train machines to do in an informed manner. This involves individual as well as societal choice."

AI researchers Fei-Fei Li and John Etchemendy, of Stanford University's Institute for Human-Centered Artificial Intelligence, feel likewise. In a recent blog post, they proposed involving numerous people in an array of fields to make sure AI fulfills its huge potential and strengthens society instead of weakening it:

"Our future depends on the ability of social and computer scientists to work side by side with people from multiple backgrounds, a significant shift from today's computer science-centric model," they wrote. "The creators of AI must seek the insights, experiences and concerns of people across ethnicities, genders, cultures and socio-economic groups, as well as those from other fields, such as economics, law, medicine, philosophy, history, sociology, communications, human-computer interaction, psychology, and Science and Technology Studies (STS). This collaboration should run throughout an application's lifecycle, from the earliest stages of inception through to market introduction, and as its usage scales."

Messina is somewhat idealistic about what should happen to help avoid AI chaos, though he's skeptical that it will actually come to pass. Government regulation, he said, isn't a given, especially in light of failures on that front in the social media sphere, whose technological complexities pale in comparison to those of AI. "It will take a very strong effort on the part of major tech companies to slow progress in the name of greater sustainability and fewer unintended consequences, especially massively damaging ones."

"At the moment," he said, "I don't think the onus is there for that to happen."

As Messina sees things, it's going to take some sort of catalyst to arrive at that point; more specifically, a catastrophic catalyst like war or economic collapse. Whether such an event would prove big enough to actually effect meaningful long-term change is open for debate.

For his part, Ford remains a long-run optimist despite being very un-bullish on AI.

"I think we can talk about all these risks, and they're very real, but AI is also going to be the most important tool in our toolbox for solving the biggest challenges we face, including climate change."

When it comes to the near term, however, his doubts are more pronounced.

"We really need to be smarter," he said. "Over the next decade or two, I do worry about these challenges and our ability to adapt to them."

See the original post:
7 Risks Of Artificial Intelligence You Should Know | Built In

Artificial Intelligence Trends and Predictions for 2021 | AI Trending Now – Datamation

Artificial intelligence (AI) has taken on many new shapes and use cases as experts learn more about what's possible with big data and smart algorithms.

Today's AI market, then, consists of a mixture of tried-and-true smart technologies with new optimizations and advanced AI that is slowly transforming the way we do work and live daily life.

Read on to learn about some artificial intelligence trends that are making experts most excited for the future of AI:

More on the AI market: Artificial Intelligence Market

With its ability to follow basic tasks and routines based on smart programming and algorithms, artificial intelligence is becoming embedded in the way organizations automate their business processes.

AIOps and MLops are common use cases for AI and automation, but the breadth and depth of what AI can automate in the enterprise is quickly growing.

Bali D.R., SVP at Infosys, a global digital services and consulting firm, believes that AI is moving toward a certain level of hyper-automation, partially in response to the unexpected changes in manual data and procedures caused by the pandemic.

"We are in the second inflection point for AI as it graduates from consumer AI towards enterprise-grade AI," D.R. said. "Being exposed to an over-reliance on manual procedures, such as mass rescheduling in the airline industry, unprecedented loan applications in banks, etc., the industries are now turning to hyper-automation that combines robotic process automation with modern machine learning to ensure they can better handle surges in the future."

Although AI automation is still mostly limited to interval and task-oriented automation that requires little imagination or guesswork on the part of the tool, some experts believe we are moving closer to more applications for intelligent automation.

David Tareen, director for artificial intelligence at SAS, a top analytics and AI software company, had this to say about the future of intelligent automation:

"Intelligent automation is an area I expect to grow," Tareen said. "Just like we automated manufacturing work, we will use AI heavily to automate knowledge work."

"The complexity comes in because knowledge work has a high degree of variability. For example, an organization will receive feedback on its products or services in different ways, and often in different languages as well. AI will need to ingest, understand, and modify processes in real time before we can automate knowledge work at large."
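The ingest-understand-route step Tareen describes can be sketched in miniature. This is a toy illustration, not SAS's implementation: a naive keyword-based classifier that triages variable free-text feedback so an automated workflow can decide what to escalate (the word lists and route names are hypothetical).

```python
import re
from collections import Counter

# Toy triage rules: real knowledge-work automation would use trained NLP
# models, but the routing structure is the same.
NEGATIVE = {"broken", "late", "refund", "bad", "slow"}
POSITIVE = {"great", "fast", "love", "good", "excellent"}

def triage_feedback(text: str) -> str:
    """Classify a free-text feedback message so it can be routed automatically."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    neg, pos = len(words & NEGATIVE), len(words & POSITIVE)
    if neg > pos:
        return "escalate"       # complaints go to a human agent
    if pos > neg:
        return "acknowledge"    # positive feedback gets an automated reply
    return "review"             # ambiguous input still needs a person

inbox = [
    "The delivery was late and the box arrived broken",
    "Great service, love the product",
    "Please update my shipping address",
]
routes = Counter(triage_feedback(msg) for msg in inbox)
```

The interesting part is the third branch: variability means some inputs cannot be handled confidently, so a production pipeline always keeps a human-review path.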

AI, automation, and the job market: Artificial Intelligence and Automation

Because of the depth of big data and AI's reliance on it, there's always the possibility that unethical or ill-prepared data will make it into an AI training data set or model.

As more companies recognize the importance of creating AI that conducts its operations in a compliant and ethical manner, a number of AI developers and service providers are starting to offer responsible AI solutions to their customers.

Read Maloney, SVP of marketing at H2O.ai, a top AI and hybrid cloud company, explained what exactly responsible AI is and some of the different initiatives that companies are undertaking to improve their AI ethics.

"AI creates incredible new opportunities to improve the lives of people around the world," Maloney said. "We take the responsibility to mitigate risks as core to our work, so building fairness, interpretability, security, and privacy into our AI solutions is key."

Maloney said the market is seeing an increased adoption of the core pillars of responsible AI, which he shared with Datamation:

Companies are exploring several ways to make their AI more responsible, and most are starting with cleaning and assessing both data sets and existing AI models.
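One concrete form such a data-set assessment can take is a representation audit: compare each group's share of the training data against an even-split baseline and flag groups that fall below a tolerance. This is a minimal sketch under assumed conventions (the field name, tolerance, and even-split baseline are illustrative choices, not a specific vendor's method).

```python
from collections import Counter

def representation_report(records, field, tolerance=0.5):
    """Return (ratio of actual/expected count per group, list of flagged groups).

    A group is flagged when its count falls below `tolerance` times the
    even-split expectation. Real audits would use a domain-appropriate
    baseline (e.g. census proportions) rather than an even split.
    """
    counts = Counter(r[field] for r in records)
    expected = len(records) / len(counts)  # even-split baseline
    ratios = {group: n / expected for group, n in counts.items()}
    flagged = [g for g, n in counts.items() if n < tolerance * expected]
    return ratios, flagged

# Hypothetical training rows with a skewed gender distribution.
training_rows = [{"gender": "F"}] * 10 + [{"gender": "M"}] * 88 + [{"gender": "X"}] * 2
ratios, flagged = representation_report(training_rows, "gender")
```

A report like this is only the first step; the follow-up (re-sampling, collecting more data, or re-weighting the model) is where the real remediation happens.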

Brian Gilmore, director of IoT product management at InfluxData, a database solutions company, believes that one of the top options for model and data set management is distributed ledger technology (DLT).

"As attention builds around the ethical and cultural impact of AI, some organizations are beginning to invest in ancillary but important technologies that utilize consensus and other trust-ensuring systems as a part of the AI framework," Gilmore said. "For example, distributed ledger technology provides a sidecar platform for auditable proof of integrity for models and training data."

"The decentralized ownership, distribution of access, and shared accountability of DLT can bring significant transparency to AI development and application across the board. The dilemma is whether for-profit corporations are willing to participate in a community model, trading transparency for consumer trust, in something as mission-critical as AI."
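The core mechanism behind the "auditable proof of integrity" Gilmore mentions can be shown with a hash chain: each ledger entry's hash covers both its payload and the previous entry's hash, so tampering with any record invalidates everything after it. This sketch shows only that chaining half; a real DLT adds the distributed consensus and shared ownership the quote is actually about (entry fields here are hypothetical).

```python
import hashlib
import json

def entry_hash(prev_hash: str, payload: dict) -> str:
    # Canonical JSON keeps the hash stable regardless of key order.
    blob = prev_hash + json.dumps(payload, sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()

def append(ledger, payload):
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    ledger.append({"payload": payload, "hash": entry_hash(prev, payload)})

def verify(ledger) -> bool:
    """Recompute the whole chain; any edited payload breaks every later link."""
    prev = "0" * 64
    for entry in ledger:
        if entry["hash"] != entry_hash(prev, entry["payload"]):
            return False
        prev = entry["hash"]
    return True

ledger = []
append(ledger, {"artifact": "training-set-v1", "note": "data snapshot"})
append(ledger, {"artifact": "model-v1", "trained_on": "training-set-v1"})
ok_before = verify(ledger)

ledger[0]["payload"]["artifact"] = "training-set-v1-tampered"  # simulate tampering
ok_after = verify(ledger)
```

The single-process version already gives an auditor tamper evidence; distributing copies of the ledger across parties is what turns tamper evidence into the shared accountability Gilmore describes.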

See more: The Ethics of Artificial Intelligence (AI)

Up to this point, AI has most frequently been used to optimize business processes and automate some home routines for consumers.

However, some experts are beginning to realize the potential that AI-powered models can have for solving global issues.

Read Maloney at H2O.ai has worked with people from a variety of industries to brainstorm how AI can be used for the greater good.

"We work with many like-minded customers, partners, and organizations tackling issues from education, conservation, health care, and more," Maloney said. "AI for good is fundamental to our work, including current work on climate change, wildfires, and hurricane predictions, and we are seeing more and more AI-for-good work across the industry to make the world a better place."

Some of the most exciting applications of altruistic AI are being implemented in early education right now.

For instance, Helen Thomas, CEO of DMAI, an AI-powered health care and education company, offers an AI-powered product to ensure that preschool-aged children are getting the education they need, despite potential pandemic setbacks:

"On top of pre-existing barriers to preschool education, including cost and access, recent research findings suggest children born during the COVID-19 pandemic display lower IQ scores than those born before January 2020, which means toddlers are less prepared for school than ever before."

"DMAI, DBA Animal Island Learning Adventure (AILA), is changing this with AI. [Our product] harnesses cognitive AI to deliver appropriate lessons in a consistent and repetitious format, supportive of natural learning patterns."

"Recognizing learning patterns that parents might miss, the AI creates an adaptive learning journey and doesn't allow the child to move forward until they've mastered the skills and concepts presented. This intentional delivery also increases attention span over time, ensuring children step into the classroom with the social-emotional intelligence to succeed."

More on this topic: How AI is Being Used in Education

Internet of Things (IoT) devices have become incredibly widespread among both enterprise and personal users, but what many tech companies still struggle with is how to gather actionable insights from the constant inflow of data from these devices.

AIoT, or the idea of combining artificial intelligence with IoT products, is one field that is starting to address these pools of unused data, giving AI the power to translate that data quickly and intelligently.

Bill Scudder, SVP and AIoT general manager at AspenTech, an industrial AI solutions company, believes that AIoT is one of the most crucial fields for enabling more intelligent, real-time business decisions.

"Forrester has noted that up to 73% of all data collected within the enterprise goes unused, which highlights a critical challenge with IoT," Scudder said. "As the volume of connected devices, for example in industrial IoT settings, continues to increase, so does the volume of data collected from these devices."

"This has resulted in a trend seen across many industries: the need to marry AI and IoT. And here's why: where IoT allows connected devices to create and transmit data from various sources, AI can take that data one step further, translating it into actionable insights to fuel faster, more intelligent business decisions. This is giving way to the rising trend of the 'artificial intelligence of things,' or AIoT."
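A minimal version of that IoT-to-insight translation is online anomaly scoring: instead of warehousing readings unused, score each new value against a rolling window of recent behavior and emit an alert on deviation. This is an illustrative sketch, not AspenTech's product; the window size and 3-sigma threshold are arbitrary example choices.

```python
from collections import deque
from statistics import mean, stdev

class SensorMonitor:
    """Flag readings that deviate sharply from the recent rolling window."""

    def __init__(self, window: int = 5, threshold: float = 3.0):
        self.readings = deque(maxlen=window)
        self.threshold = threshold  # alert at threshold * stdev from the mean

    def observe(self, value: float) -> str:
        alert = "ok"
        if len(self.readings) == self.readings.maxlen:
            mu, sigma = mean(self.readings), stdev(self.readings)
            if sigma and abs(value - mu) > self.threshold * sigma:
                alert = "anomaly"
        self.readings.append(value)
        return alert

monitor = SensorMonitor()
stream = [20.0, 20.2, 19.9, 20.1, 20.0, 35.0]  # final reading is a spike
labels = [monitor.observe(v) for v in stream]
```

The point of the sketch is the shape of AIoT rather than the statistics: the decision happens at ingestion time, per reading, instead of in a batch report weeks later.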

Decision intelligence (DI) is one of the newest artificial intelligence concepts. It takes many current business optimizations a step further by using AI models to analyze wide-ranging sets of commercial data. These analyses are used to predict future outcomes for everything from products to customers to supply chains.

Sorcha Gilroy, data science team lead at Peak, a commercial AI solutions provider, explained that although decision intelligence is a fairly new concept, it's already gaining traction with larger enterprises because of its detailed business intelligence (BI) offerings.

"Decision intelligence is a new category of software that facilitates the commercial application of artificial intelligence, providing predictive insight and recommended actions to users," Gilroy said. "It is outcome-focused, meaning a solution must deliver against a business need before it can be classed as DI."

"Recognized by Gartner and IDC, it has the potential to be the biggest software category in the world and is already being utilized by businesses across a variety of use cases, from personalizing shopper experiences to streamlining complex supply chains. Brands such as Nike, PepsiCo, and ASOS are known to be using DI already."

Read next: Top Performing Artificial Intelligence Companies


AI and Big Data Analytics: Executive Q&A with David Tareen of SAS – Datamation

The term "artificial intelligence" dates back to at least the 1950s, and yet, it seems that AI is still in its infancy, given its vast potential use cases and society-changing ceiling.

As AI experts develop a better understanding of both the big data models and applications of artificial intelligence, how can we expect to see the AI market, including machine learning (ML), change? Perhaps more crucially, how can we expect to see other industries transformed as a result of that change?

David Tareen, director for artificial intelligence at SAS, a top AI and analytics company, offered Datamation his insights into the current and future landscape of enterprise AI solutions:

At SAS, Tareen helps clients understand and apply AI and analytics. After 17 years in the IT industry and having been part of the cloud, mobile, and social revolutions in IT, he believes that AI holds the most potential for changing the world around us. In previous roles, Tareen led product and marketing teams at IBM and Lenovo. He has a master's degree in business administration from the University of North Carolina at Chapel Hill.

Datamation: How did you first get started in or develop an interest in AI?

Tareen: My first introduction to AI was in a meeting with a European government agency, which wanted to build a very large computer, a supercomputer so to speak, that could perform a quintillion (1 followed by 18 zeros) calculations per second. I was curious what work this computer would be doing to require such fast performance, and the answers were fascinating to me. That was my first real introduction to AI and the possibilities it could unlock.

Datamation: What are your primary responsibilities in your current role?

Tareen: My primary role at SAS is to improve understanding of AI and analytics and what benefits these technologies can deliver. The AI market segment is noisy, and it is often difficult for clients to separate fact from fiction when it comes to AI. I help our customers understand where AI and analytics can benefit them and exactly how the process will work.

Datamation: What makes SAS a unique place to work?

Tareen: SAS is unlike any other organization. I would say what sets us apart is a deep-seated desire to prove the power of AI and analytics. We are convinced that AI and any of the underlying AI technologies, such as deep learning, conversational AI, computer vision, natural language processing, and others, can have a positive impact on not only our customers and their organizations, but on the world as well. And we are on a mission to showcase these benefits through our capabilities. This relentless and singular focus sets us apart.

More on analytics: Data Analytics Market Review

Datamation: What sets SAS AI solutions or vision apart from the competition?

Tareen: There are two areas that make our AI capabilities unique:

First is a focus on the end-to-end process. AI is about more than building machine or deep learning models. It requires data management, modeling, and finally being able to make decisions from those models. Over the years, SAS has tightly integrated these capabilities so that an organization can go from questions to decisions using AI and analytics.

Second, our customers often need more than one analytics method to solve a problem. Composite AI is a new term coined by Gartner that aligns with what we have traditionally called multidisciplinary analytics. These methods include machine learning, deep learning, computer vision, natural language, forecasting, optimization, and even statistics. Our ability to provide all these methods to our customers helps them solve any challenge with AI and analytics.

Datamation: What do you think makes an AI product or service successful?

Tareen: The key to making an AI product or service successful is to deliver real-world results. In the past, organizations would have little to show for their AI investments because of the hyper-focus on model building and model performance. Today, there is a better understanding that for an AI product or service to be successful, it has to have all the other elements that will help make an outcome better or a process faster or cheaper.

Datamation: What is an affordable/essential AI solution that businesses of all sizes should implement?

Tareen: An absolute must for businesses of any size is a better understanding of their customers. AI is becoming an essential tool to accomplish this. The ability to communicate with a customer the way they like, at the right time and the right place, with the right message and the right offer (while making those predictions without compromising data privacy regulations), is an essential solution that all businesses, regardless of their size, should implement.

Datamation: How does AI advance data analytics and other big data strategies?

Tareen: With large volumes of data, applying AI to the data itself is a must. AI capabilities can help untangle elements within data so it can be used to make decisions. For example, we now use AI to recognize information within large data sets and then organize it in accordance with company policy or local regulations. At SAS, we use AI to spot potential privacy issues, lack of diversity, or even errors within big data. Once these issues are identified, they can be managed and then automated, so that new data coming into the database will automatically get the same treatment as it is recognized by AI.
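The privacy-scanning step Tareen describes can be sketched with simple pattern detectors: scan every field of an incoming record for common PII shapes so flagged fields can be routed for masking or review. This is a toy illustration, not SAS's tooling; real scanners combine many more detectors with ML-based entity recognition, and the field names below are hypothetical.

```python
import re

# Naive detectors for a few common PII shapes (US-style SSN and phone).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scan_record(record: dict) -> dict:
    """Return {field: [kinds of PII found]} for every offending field."""
    findings = {}
    for field, value in record.items():
        kinds = [k for k, pat in PII_PATTERNS.items() if pat.search(str(value))]
        if kinds:
            findings[field] = kinds
    return findings

row = {
    "id": 17,
    "note": "call 555-867-5309, email jane@example.com",
    "ssn": "123-45-6789",
}
findings = scan_record(row)
```

Once a scan like this labels the offending fields, the "automate the same treatment for new data" step Tareen mentions is just applying the resulting masking policy at ingestion time.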

Also read: Artificial Intelligence Market

Datamation: What do you think are some of the top trends in artificial intelligence right now?

Tareen: In terms of what's trending in AI, there is generally a lot more maturity when it comes to approaching productive deployments for AI across industries. Gone are the days of investing in building the perfect model. The focus now is on the broader ecosystem needed to deliver AI projects and realize enhanced value. This broader ecosystem includes investing in data management capabilities and deploying and governing AI and analytical assets to ensure they deliver value. Organizations that look at AI beyond just model development will be more productive with their AI initiatives.

Additionally, the notion that AI should be used for unique breakthrough projects has evolved. Now organizations find value in applying AI techniques to established projects to achieve best-in-class results. For example, manufacturers with good quality discipline can save significant costs by applying computer vision to existing processes. Another example is retailers that use machine learning techniques to improve forecasts and save on inventory and product waste costs.

Datamation: What subcategories of artificial intelligence are most widely used, and how are they currently used?

Tareen: AI is really a set of different technologies, such as machine learning, deep learning, computer vision, natural language, and others. All these technologies are finding success in different industries and across different parts of organizations.

Machine learning and deep learning are the two areas seeing the broadest use and the most promising results. ML can detect patterns in the data and make predictions without being told what to look for. Deep learning does the same but gets better results with bigger and more complex data (e.g., video, images). As these capabilities are applied to traditional approaches to segmenting, forecasting, customer service, and other areas, organizations find they get better results with AI technologies.

Datamation: What industry (or industries) do you think does a good job of maximizing AI in their operations/products? What do you think they do well?

Tareen: Businesses need to think of AI as more than one technology. Just like people use different senses (e.g., listening, seeing, calculating, imagining) to make decisions, AI can make better decisions when used in a composite way. The most productive organizations combine AI capabilities of computer vision, natural language, optimization, and machine learning into solutions and workflows, which leads to better decisions than their competitors.

Manufacturers are using computer vision to identify quality issues and reduce waste. Banks are having success using conversational AI and natural language processing to improve marketing and sales. Retailers are having success using machine learning in forecasting techniques. As AI gets broader adoption, we should expect to see organizations use a mix of AI capabilities for improved outcomes across different business units and areas.

Datamation: How has the COVID-19 pandemic affected you/your colleagues/your clients approach to artificial intelligence?

Tareen: The pandemic upended expected business trajectories and exposed the weaknesses in machine learning systems dependent on large amounts of representative historical data, including well-bounded and reasonably predictable patterns. As a result, there is a business need to reinforce the analytics core and bolster investments in traditional analytics teams and techniques better suited to rapid data discovery and hypothesizing.

As companies adapt to the new normal, one of the primary questions we're asked is how to retrain AI models with a more diverse data set. When COVID hit, the analytical models making good predictions started underperforming. For example, airports use SAS predictive modeling to understand and improve aircraft traffic flow. However, these models had to be retrained and additional data sources added before the models could start accurately predicting the new normal traffic pattern.

More on this topic: How COVID-19 Is Driving Digital Transformation

Datamation: What do you think well see more of in the AI space in the next 5-10 years? What areas will grow the most over the next decade?

Tareen: A complex area where I hope to see growth over the next 5-10 years has large implications for the world: AI algorithms becoming more imaginative. Imagination is something that comes very easily to us humans. For example, a child can see a table as both a table and a hiding place to use when playing a game of hide-and-go-seek. For an AI algorithm, the analogous process, learning from one data domain and applying that learning to a different data domain, is very complex. Transfer learning is a start, however, and as AI gets better at imagination, it will have the potential to better diagnose disease or spot root causes of climate change. I hope this is an area that will grow in the next decade.

Datamation: What does AI equity mean to you? How can more businesses get started in AI development or product use?

Tareen: From inception to now, AI has been used exclusively by subject matter experts like data scientists. Today's trend is to lessen that need for subject matter experts and instead cascade the benefits of AI to the masses, recognizing the global value of wide-reaching benefits rather than isolated benefits realized by a select few. The targets for democratized AI include customers, business partners, the sales force, factory workers, application developers, and IT operations professionals, among others.

There are a couple of ways enterprises can push AI to a broader audience. First, conversational AI helps because it makes interacting with AI simpler: you don't have to build complex models, but you can gain insights from your data by talking with your analytics. Second, make AI easier for everyone to consume; this means taking your data and algorithms to the cloud to improve accessibility and reduce costs.

Some leaders are surprised to learn that democratizing AI involves more than the process itself. Often culture tweaks, or an entire cultural change, must accompany the process. Leaders can practice transparency and good communication in their democratization initiatives to address concerns, adjust the pace of change, and successfully embed AI and analytics for everybody's use.

More on AI equity: AI Equity in Business Technology: An Interview With Marshall Choy of SambaNova Systems

Datamation: What are some ethical considerations for the market that should be part of AI development?

Tareen: There are numerous ethical considerations that should be part of AI development. These considerations range from data to algorithms to decisions.

For data, it is important to ensure that the data accurately represents the populations for which you are making decisions. For example, a data set should not under-represent genders or exclude low-income populations. Other ethical considerations include preserving privacy and protecting Personally Identifiable Information (PII).

For algorithms, it is important to be able to explain decisions using plain language. A complex neural network may make accurate predictions, but the outcomes must be easily explainable to data scientists and non-technologists alike. Another consideration is ensuring models are not biased when making predictions.
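One concrete way to test the bias concern above is a demographic-parity check: compare a model's positive-prediction rates across groups and flag large gaps. This is a minimal sketch, not a SAS method; the "four-fifths" screening threshold is a common rule of thumb borrowed from employment-discrimination practice, and the data below is hypothetical.

```python
def positive_rate(predictions, groups, target):
    """Fraction of positive (1) predictions among members of `target` group."""
    relevant = [p for g, p in zip(groups, predictions) if g == target]
    return sum(relevant) / len(relevant)

def parity_ratio(predictions, groups, a, b):
    """Ratio of the lower positive rate to the higher; far below 1.0 suggests bias."""
    ra = positive_rate(predictions, groups, a)
    rb = positive_rate(predictions, groups, b)
    return min(ra, rb) / max(ra, rb)

preds  = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]  # hypothetical model outputs
groups = ["A"] * 5 + ["B"] * 5           # protected attribute per prediction
ratio = parity_ratio(preds, groups, "A", "B")
biased = ratio < 0.8                      # four-fifths screening rule
```

Demographic parity is only one of several competing fairness definitions (equalized odds and calibration are others), which is why the monitoring-over-the-life-cycle point in the next paragraph matters: the appropriate metric depends on the decision being made.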

For decisions, it is important to ensure that controls are in place not only when models are implemented, but that decisions are monitored for transparency and fairness throughout their life cycle.

More on AI and ethics: The Ethics of Artificial Intelligence (AI)

Datamation: How have you seen AI innovations change since you first started? How have the technologies, services, conversations, and people changed over time?

Tareen: There have been many changes, but one shift has been fundamental. AI used to be overly focused on model building and model performance. Now, there is a realization that to deliver results, the focus must be on other areas as well, such as managing data, making decisions, and governing those decisions. Topics such as bias in data or models are starting to become common in conversations. These are signs of a market that is starting to understand the potential, and challenges, of this technology.

More on data and bias: Addressing Bias in Artificial Intelligence (AI)

Datamation: How do you stay knowledgeable about trends in the market? What resources do you like?

Tareen: My top two places to better understand trends are:

Datamation: How do you like to help or otherwise engage less experienced AI professionals?

Tareen: The key is to describe advanced AI capabilities in ways that are easily relatable and to find examples of customers we have helped in their specific industry.

Datamation: What do you like to do in your free time outside of work?

Tareen: One of the benefits of #saslife is work-life balance. I am a private pilot and fly a small aircraft out of Raleigh-Durham International Airport. North Carolina is a pretty state to fly over, so I take as many opportunities as possible to see this beautiful state from the air.

Datamation: If you had to work in any other industry or role, what would it be and why?

Tareen: My ideal role would be one where I can tell real stories about how technologies such as AI and analytics can improve the world around us. Currently, a lot of the work that SAS does, particularly around our Data4Good initiative, fulfills that goal well.

Datamation: What do you consider the best part of your workday or workweek?

Tareen: The interaction with SAS customers is almost always the best part of the workday or workweek. At SAS, we start off every customer meeting with a listening session where we get to hear about their world, their challenges, and what they hope to accomplish. It is an exciting learning process and often the best part of my week.

Datamation: What are you most proud of in your professional/personal life?

Tareen: I am most proud of the work that SAS does around social innovation. Our Data4Good initiative projects are a great way to apply data science, AI, and analytics to big challenges, both at the personal level as well as the global level, to improve the human experience.

Read next: Top Performing Artificial Intelligence Companies
