This Robotic Combat Vehicle Will Use Artificial Intelligence to Save Soldiers – The National Interest

The Army is planning to arm the new ten-ton Robotic Combat Vehicle-Medium (RCV-M) with thirty-millimeter chain gun cannons, anti-tank missiles and remotely operated guns. This will enable it to conduct direct-attack missions so that soldiers won't have to be in the line of enemy fire.

Army and defense industry officials have stated that humans will continue to make decisions about the use of lethal force. However, this does not restrict them from pushing the envelope of autonomy and potentially enabling unmanned systems to process greater volumes of information and perform a wider range of functions without needing human intervention.

"We are looking at using unmanned vehicles to expand the network and expand the line-of-sight so we can push these robots out as far as possible, so soldiers do not have to do that," Maj. Gen. Ross Coffman, the director of the Next-Generation Combat Vehicle Cross-Functional Team for Army Futures Command, told reporters at the 2021 Association of the United States Army Annual Symposium.

The RCV-M will be capable of using a wide range of weapons, sensors and technologies. A July 2021 Congressional Research Service report on the vehicle states that it will carry Javelin anti-tank missiles and the XM813 Bushmaster chain gun, as well as smoke obscuration measures. Other payloads may include amphibious kits, electronic warfare (EW) modules, counter-Unmanned Aerial System (UAS) systems, and nuclear, radiological, biological, and chemical sensors, according to the report.

Lethal direct-fire missions, such as those using a Javelin or the Bushmaster chain gun, will be closely monitored by Army and defense industry officials. The RCV-M could also host nonlethal defensive interceptors to deter incoming munitions, or autonomously launch and recover surveillance drones. Additionally, it will be able to accommodate a wide range of payloads and potential hardware configurations for missions such as clearing minefields. For several years, Army Futures Command has been experimenting with using robotic vehicles to clear minefields or breach obstacles that might prevent armored columns from conducting a mission. This will allow soldiers to operate at a safe standoff distance while the robotic vehicles take the major risks.

"What we learned is based on their mobility, their excellent mobility and their autonomous behaviors, we can actually have them move on a separate axis of advance and link up with the humans on the objective," Coffman told the National Interest. "So, they can autonomously move without humans, link up with the humans, transfer back control, and then execute the mission. This gives the enemy multiple dilemmas."

Kris Osborn is the defense editor for the National Interest. Osborn previously served at the Pentagon as a Highly Qualified Expert with the Office of the Assistant Secretary of the Army for Acquisition, Logistics & Technology. Osborn has also worked as an anchor and on-air military specialist at national TV networks. He has appeared as a guest military expert on Fox News, MSNBC, The Military Channel, and The History Channel. He also has a Master's Degree in Comparative Literature from Columbia University.

Image: Reuters


Artificial Intelligence, Automation and The Future of Corporate Finance – PRNewswire

NASHVILLE, Tenn., Nov. 1, 2021 /PRNewswire/ -- Algorithms rule the world or, at least, the world is headed that way. How can you prepare your company and its financial underpinnings not only to survive but also thrive under this new big data paradigm? In his new book, Deep Finance: Corporate Finance in the Information Age, author Glenn Hopper provides a clear guide for finance professionals and non-technologists who aspire to digitally transform their companies into modern, data-driven organizations streamlined for success and profitability.

Hopper, who comes to this subject armed with a unique background in finance and technology, contends that the finance department is perfectly placed to lead the digital revolution bringing companies of all sizes into a new era of efficiency while future-proofing the role of chief financial officer.

Deep Finance is written for a wide audience, ranging from those who don't know AI from A/R to those who are already working with data to drive business decisions. The book illuminates the path toward digital transformation with instructions on how finance professionals can elevate their leadership and become champions for data science.

In Deep Finance, readers will:

"In this Age of AI, every function in every company has to go through its own digital transformation to enable their organizations to succeed. Glenn Hopper provides an essential roadmap to accounting and finance executives on how to embrace analytics and AI as core tools for modern finance. This book should be required reading for every general manager."

Karim R. Lakhani | Co-Author of Competing in the Age of AI | Co-Director of Laboratory for Innovation Science at Harvard and Co-Chair of Harvard Business Analytics Program

A former Navy journalist, filmmaker, and business founder, Hopper has spent the past two decades helping startups transition into going concerns, operate at scale, and prepare for funding and/or acquisition. He is passionate about transforming the role of CFO from a historical reporter and bookkeeper to a forward-looking strategist who is integral to a company's future. He has served as a finance leader in a variety of industries, including telecommunications, retail, Internet, and legal technology. He has a master's degree in finance with a graduate certificate in business analytics from Harvard University, and an MBA from Regis University.

Deep Finance is distributed by Simon & Schuster and will be available November 16, 2021, in eBook and print versions at Amazon, Barnes and Noble, and other online booksellers.

Contact: Glenn Hopper, 615.756.7354, [emailprotected]

SOURCE Glenn Hopper


Why testing must address the trust-based issues surrounding artificial intelligence – Aerospace Testing International

Words by Jonathan Dyble

Aviation celebrates its 118th birthday this year. Over the years there have been many milestone advances, yet today engineers are still using the latest technology to enhance performance and transform capabilities in both the defence and commercial sectors.

Artificial Intelligence (AI) is arguably one of the most exciting areas of innovation and like many sectors, AI is garnering a great amount of attention in aviation.

Powered by significant advances in computer processing power, AI is today prompting aviation experts to probe possibilities that once seemed out of reach. It is worth noting, however, that AI-related aviation transformation remains in its infant stages.

Given the huge risks and costs involved, full confidence and trust is required for autonomous systems to be deployed at scale. As a result, AI remains somewhat of a novelty in the aviation industry at present but attention is growing, progress continues to be made and the tide is beginning to turn.

One individual championing AI developments in aviation is Luuk Van Dijk, CEO and founder of Daedalean, a Zurich-based startup specializing in the autonomous operation of aircraft.

While Daedalean is focused on developing software for pilotless and affordable aircraft, Van Dijk is a staunch advocate of erring on the side of caution when it comes to deploying AI in an aviation environment. "We have to be careful of what we mean by artificial intelligence," says Van Dijk. "Any sufficiently advanced technology is indistinguishable from magic, and AI has always been referred to as the kind of thing we can almost but not quite do with computers. By that definition, AI has unlimited possible uses, but unfortunately none are ready today."

"When we look at things that have only fairly recently become possible, understanding an image for example, that is obviously massively useful to people. But these are applications of modern machine learning, and it is these that currently dominate the meaning of the term AI."

While such technologies remain somewhat in their infancy, the potential is clear to see.

Van Dijk says, "When we consider a pilot, especially in VFR, they use their eyes to see where they are, where they can fly and where they can land. Systems that assist with these functions, such as GPS and radio navigation, TCAS and ADS-B, PAPI [precision approach path indicator], and ILS, are limited. Strictly speaking they are all optional, and none can replace the use of your eyes."

"With AI, imagine that you can now use computer vision and machine learning to build systems that can help the pilot to see. That creates significant opportunities and possibilities. It can reduce the workload in regular flight and in contingencies, and therefore has the potential to make flying much safer and easier."

A significant reason why such technologies have not yet made their way into the cockpit is a lack of trust, something that must be earned through rigorous, extensive testing. Yet the way mechanical systems and software are tested differs significantly, because of an added layer of complexity in the latter.

"For any structural or mechanical part of an aircraft there are detailed protocols on how to conduct tests that are statistically sound and give you enough confidence to certify the system," says Van Dijk. "Software is different. It is very hard to test because the failures typically depend on rare events in a discrete input space."

This was a problem that Daedalean encountered in its first project with the European Union Aviation Safety Agency (EASA), working to explore the use of neural networks in developing systems to measurably outperform humans on visual tasks such as navigation, landing guidance, and traffic detection. While the software design assurance approach that stems from the Software Considerations in Airborne Systems and Equipment Certification (DO-178C) works for more traditional software, its guidance was deemed to be only partially applicable to machine-learned systems.

"Instead of having human programmers translating high level functional and safety requirements into low-level design requirements and computer code, in machine learning a computer explores the design space of possible solutions given a very precisely defined target function that encodes the requirements," says Van Dijk.

"If you can formulate your problem into this form, then it can be a very powerful technique, but you have to somehow come up with the evidence that the resulting system is fit for purpose and safe for use in the real world."

"To achieve this, you have to show that the emergent behavior of a system meets the requirements. That's not trivial and actually requires more care than building the system in the first place."

From these discoveries, Daedalean recently developed and released a joint report with EASA with the aim of maturing the concept of learning assurance and pinpointing trustworthy building blocks upon which AI applications could be tested thoroughly enough to be safely and confidently incorporated into an aircraft. "The underlying statistical nature of machine learning systems actually makes them very conducive to evidence and arguments based on sufficient testing," Van Dijk confirms, summarizing the findings showcased in the report.

"The requirements on the system then become traceable to requirements on the test data: you have to show that your test data is sufficiently representative of the data you will encounter during an actual flight. For that you must show that you have sampled the data with independence," a term familiar to those versed in the art of design assurance, but one that has a much stricter mathematical meaning in this context.
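Van Dijk's point about representativeness can be made concrete with a simple statistical check. The sketch below is illustrative only, not Daedalean's actual method: it uses a two-sample Kolmogorov-Smirnov test to ask whether a feature measured on the certification test set follows the same distribution as data gathered in flight. The feature, sample sizes, and distributions are all invented for the example.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical scalar feature (e.g. scene brightness) measured per image.
test_set = rng.normal(loc=0.5, scale=0.1, size=2000)      # data used to certify
flight_data = rng.normal(loc=0.5, scale=0.1, size=2000)   # data seen in flight
shifted_data = rng.normal(loc=0.8, scale=0.1, size=2000)  # distribution shift

# Two-sample Kolmogorov-Smirnov test: a small p-value means the two samples
# likely come from different distributions, i.e. the test data is NOT
# representative of what the system encounters operationally.
stat_ok, p_ok = ks_2samp(test_set, flight_data)
stat_bad, p_bad = ks_2samp(test_set, shifted_data)

print(f"matched distributions: KS statistic={stat_ok:.3f}")
print(f"shifted distributions: KS statistic={stat_bad:.3f}, p={p_bad:.3g}")
```

A real learning-assurance argument would of course cover many features jointly and account for operating conditions, but the basic traceability from system requirements to test-data statistics works along these lines.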

Another person helping to make the strides needed to make the use of AI in the cockpit a reality is Dan Javorsek, Commander of Detachment 6, Air Force Operational Test and Evaluation Center (AFOTEC) at Nellis Air Force Base in Nevada. Javorsek is also director of the F-35 US Operational Test Team and previously worked as a program manager for the Defense Advanced Research Projects Agency (DARPA) within its Strategic Technology Office.

Much like Van Dijk, Javorsek points to trust as the key element in ensuring that potentially transformational AI and automated systems become accepted and incorporated into future aircraft. Furthermore, he believes that this will be hard to achieve using current test methods. "Traditional trust-based research relies heavily on surveys taken after test events. These proved to be largely inadequate for a variety of reasons, but most notably their lack of diagnostics during different phases of a dynamic engagement," says Javorsek.

As part of his research, Javorsek attempted to address this challenge directly by building a trust measurement mechanism reliant upon a pilot's physiology. Pilots' attention was divided between two concurrent primary tasks, forcing them to decide which task to accomplish and which to offload to an accompanying autonomous system.

"Through these tests we were able to measure a host of physiological indicators shown by the pilots, from their heart rate and galvanic skin response to their gaze and pupil dwell times on different aspects of the cockpit environment," Javorsek says.

"As a result, we end up with a metric for which contextual situations and which autonomous system behaviors give rise to manoeuvres that the pilots appropriately trust."

However, a key challenge that Javorsek encountered during this research related to the difficulty machines would have in assessing hard-to-anticipate events in what he describes as "very messy military situations."

Real-world scenarios will often throw up unusual tactics and situations, such as stale tracks and the presence of significant denial and deception on both sides of an engagement. In addition, electronic jammers and repeaters are often used in an attempt to mimic and confuse an adversary.

"This can lead to an environment prone to accidental fratricide that can be challenging for even the most seasoned and experienced pilots," Javorsek says. "As a result, aircrews need to be very aware of the limitations of any autonomous system they are working with and employing on the battlefield."

It is perhaps for these reasons that Nick Gkikas, systems engineer for Airbus Defence and Space, human factors engineering and flight deck, argues that the most effective use of AI and machine learning is currently outside the cockpit. "In aviation, AI and machine learning is most effective when it is used offline and on the ground in managing and exploiting big data from aircraft health and human-in/on-the-loop mission performance during training and operations," he says.

"In the cockpit, most people imagine the implementation of machine learning as an R2-D2 type of robot assistant. While such a capability may be possible today, it is currently still limited by the amount of processing power available on board and the development of effective human-machine interfaces with machine agents in the system."

Gkikas agrees with Javorsek and Van Dijk that AI has not yet been sufficiently developed to be part of the cockpit in an effective and safe manner. Until such technologies are more advanced, effectively tested, and backed by even greater computing power, it seems AI may be better placed in other aviation applications such as weapons systems.

Javorsek also believes it will be several years before AI and machine learning software will be successful in dynamically controlling the manoeuvres of fleet aircraft traditionally assigned to contemporary manned fighters. However, there is consensus amongst experts that there is undoubted potential for such technologies to be developed further and eventually incorporated within the cockpit of future aircraft.

For AI in the cockpit and in aircraft in general, I am confident we will see unmanned drones, eVTOL aircraft and similarly transformative technologies being rolled out beyond test environments in the not-so-distant future, concludes Van Dijk.


Drive-thrus may soon use artificial intelligence and facial recognition – Deseret News

I live in a small town and am thrilled when the clerk at the local convenience store knows me so well that he can predict my drink order before I speak. But if every fast-food restaurant in the country also knew my drink order when I hit the drive-thru, would I get the same satisfaction or be creeped out?

We could soon find out because McDonald's has been testing drive-thru ordering using artificial intelligence. In some locations, the company is scanning license plates (with the customers' permission) to help personalize the experience and predict orders, according to CNN Business. McDonald's CEO Chris Kempczinski told CNBC that 10 locations in Chicago are using voice assistants to take orders in the drive-thru lanes and they are seeing about 85% accuracy.

The concept could spread globally sooner than you might think. Last week, McDonald's announced in a statement it would be partnering with IBM to "further accelerate the development and deployment of its Automated Order Taking technology."

Jamie Richardson, a vice president with burger chain White Castle, told CNN Business it's using artificial intelligence at an Indiana location, hoping it helps guarantee the visit is positive.

"The thought is to make sure that it's friendly. They remember me, they know who I am," he said.

And last year, amid COVID-19 and tight restrictions in California, several restaurants started offering face-pay for a completely contactless experience. PopPay is a service that allows users to connect a credit card and selfie to their account and then have their face scanned at participating locations to pay. There are dozens of restaurants and retailers in the Pasadena area offering the service at a kiosk, drive-thru or at the counter. The customer can also link loyalty accounts to their profile and receive a text when the transaction is complete.

A Russian bank currently offers a similar feature at some supermarkets in the country. Reuters reports Sberbank plans to offer facial recognition payments in 100 grocery locations and that a transaction would only take three seconds compared to 34 seconds when paying with cash and 15 seconds when using a payment card.

While this all may seem extremely convenient, many are concerned about privacy.

One Illinois man is suing McDonald's for using voice recognition technology in the drive-thru. Restaurant Business reported Shannon Carpenter's lawsuit claims McDonald's violated the state's Biometric Information Privacy Act, which requires companies to get consent before collecting biometric information.

Some schools in Scotland had hoped to use facial recognition in their lunchrooms for contactless payments and to get kids through the line faster. But the BBC reported that the schools in North Ayrshire have decided to push pause for now after the U.K.'s Information Commissioner's Office suggested the technology could be intrusive.

Facial recognition opportunities aren't all about food, either. The travel industry is getting in on it as well.

Delta Air Lines travelers in Atlanta will have the choice of showing their face to a camera instead of presenting an ID or handing over a boarding pass. To participate, passengers will need to opt in and store their TSA PreCheck membership and a SkyMiles number in the Fly Delta app.

At the airport, they can look into a camera at bag drop, security checkpoints and the gate instead of showing a boarding pass or ID. The image is encrypted and sent to U.S. Customs and Border Protection's facial biometric matching service. This is completely voluntary and will roll out for security in the coming weeks and for checking bags and boarding before the end of the year.

In Russia's capital, the Moscow Metro is using Face Pay at all its stations, according to The Moscow Times. After passengers use the app to connect their photo, credit card and transit card, they can simply look into a camera to enter.

Not everyone is excited about the rapid rise of facial recognition technology. A recent article from the National Law Review outlined several concerns, including hacking possibilities, accuracy issues and racial bias in some facial recognition algorithms.

If you or someone you know is concerned about too much facial recognition technology in the world, here's the perfect stocking stuffer. Reflectacles, anti-facial-recognition glasses, have infrared-blocking lenses and reflective frames designed to fool any facial recognition system that comes your way. They may bring you some peace of mind, but it'll likely take you longer to get through airport security than the woman next to you letting TSA scan her face.


Maureen Dowd: What will a world of artificial intelligence look like? – Salt Lake Tribune

The first time I interviewed Eric Schmidt, a dozen years ago when he was the CEO of Google, I had a simple question about the technology that has grown capable of spying on and monetizing all our movements, opinions, relationships and tastes.

"Friend or foe?" I asked.

"We claim we're friends," Schmidt replied coolly.

Now that the former Google executive has a book, "The Age of AI," out Tuesday, written with Henry Kissinger and Daniel Huttenlocher, I wanted to ask him the same question about AI: Friend or foe?

"AI is imprecise, which means that it can be unreliable as a partner," he said when we met at his Manhattan office. "It's dynamic in the sense that it's changing all the time. It's emergent and does things that you don't expect. And, most importantly, it's capable of learning."

"It will be everywhere. What does an AI-enabled best friend look like, especially to a child? What does AI-enabled war look like? Does AI perceive aspects of reality that we don't? Is it possible that AI will see things that humans cannot comprehend?"

I agree with Elon Musk that when we build AI without a kill switch, we are "summoning the demon" and that humans could end up, as Steve Wozniak said, as the family pets. (If we're lucky.)

Talking about the alarms raised by the likes of Musk and Stephen Hawking, Schmidt said that they think that by unleashing AI, "eventually, you'll end up with a robot overlord that's 10 or 100 or 1,000 times smarter than the humans. My answer is different. I think all the evidence is that these AI systems are going to think, not like humans, but they're going to be very smart. We're going to have to coexist."

You don't think Siri and Alexa are going to kill us one night?

"No," he said. "But they might become your child's best friend."

Opinions on AI are wildly divergent. Jaron Lanier, the father of virtual reality, rolls his eyes at the digerati in Silicon Valley obsessed with the science-fiction fantasy of AI.

"It can sometimes become a giant, false god," he told me. "You've got these nerdy guys who have an awful reputation for how they treat women, who get to be the life creators. You women with your petty little biological wombs can't stand up to us. We're making the big life here. We're the supergods of the future."

We have known for a while that Silicon Valley is taking us down the drain. Preposterous claims that once could not have gotten traction on everything from Democratic pedophilia rings to rigged elections to vaccine conspiracy theories now spread at the speed of light. Teenage girls can be sent spiraling into depression by the glossy, deceptive world of Instagram, owned by the manipulative and greedy company formerly known as Facebook.

Schmidt said an Oxford student told him, about social media poison: "The union of boredom and anonymity is dangerous. Especially at the intersection of addiction and envy."

The question of whether we will lose control to AI may be passé. Technology is already manipulating us.

Schmidt admits that the lack of foresight among the lords of the cloud about where technology was headed was foolish.

"I'll say, 10 years ago, when I worked really hard on these social networks, maybe this is just naiveté, but we never thought that governments would use them against citizens, like in 2016, with interference from the Russians."

"We didn't think it would then stitch these special interest groups together with these violently strong belief systems. No one ever discussed it. I don't want to make the same mistake again with a new foundational technology."

He said the National Security Commission on Artificial Intelligence, which he chaired this year, concluded that America is still a little bit ahead of China in the technology race but that China is overinvesting against us. The authors write that they are most worried about other countries developing AI-facilitated weapons with substantial destructive potential that may be able to adapt and learn well beyond their intended targets.

"The first thing for us to look at between the U.S. and China is to make sure that there's no Dr. Strangelove scenario, a launch on a warning, to make sure there's time for human decision-making," he said. "Let's imagine you're on a ship in the future and the little computer system says to the captain, 'You have 24 seconds before you're dead because the hypersonic missile is coming at you. You need to press this button now.' You want to trust the AI, but because of its imprecise nature, what if it makes a mistake?"

I asked if he thought Facebook could leave its troubles behind by changing its name to Meta.

"The problem is, what do you now call FAANG stocks? MAANG?" he said of the biggest tech stocks: Facebook, Apple, Amazon, Netflix and Google. "Google changed its name to Alphabet, and yet, Google was still Google."

And whats with that creepy metaverse that Mark Zuckerberg is trying to lure us into?

"All of the people who talk about metaverses are talking about worlds that are more satisfying than the current world. You're richer, more handsome, more beautiful, more powerful, faster. So, in some years, people will choose to spend more time with their goggles on in the metaverse. And who gets to set the rules? The world will become more digital than physical. And that's not necessarily the best thing for human society."

Schmidt said his book poses questions that cannot yet be answered.

Unfortunately for us, we wont know the answers until it is too late.


Maureen Dowd is a Pulitzer Prize-winning columnist for The New York Times.


Artificial intelligence: The new face of education – Economic Times

Artificial Intelligence is a field of study designed to infuse the power of thinking into machines. It can allow machines to understand our day-to-day activities and replicate them with the utmost simplicity. From understanding our natural language to optimising our present solutions, AI can be used in almost all industries. In essence, AI entails taking aspects of human intellect and applying them as algorithms in a computer-friendly manner. Its output is already visible in text-to-speech services and voice assistants, which simplify the easiest of tasks, such as calling or texting someone. Such a versatile tool is set to dominate the world's leading industries. To get there, AI must investigate how the human mind 'thinks, develops, and makes decisions' when attempting to resolve issues or carry out a project. The goal of AI is to advance technology by including functions related to human behaviours such as reasoning, learning, and problem-solving.

AI in schools

AI is already part of a host of different ecosystems such as hospitals, factories and scientific laboratories, but its most striking use is in the business arena; from our super-fast food delivery apps to our responsive cab-hailing services, most of these latest services are integrated with AI. The notion of AI in schools, however, is often overlooked. It is usually imagined as replacing teachers with boring robots. What actually needs to be built is a system where teachers and online bots can coexist and create learning personalized for each student. Personalised learning refers to giving each student a unique learning experience, which begins with their doubts and queries being resolved and then focuses on improving their academic performance.

Back in the day, when Google's search engine was introduced, it identified the keywords used in a query and then displayed the results that matched those keywords. The latest software (voice assistants, mainly) instead identifies and processes intents, which is a much more effective and smarter approach. With this methodology in mind, we can look at its use in this budding sector. As with voice assistants, if we can find the intention behind a particular student's query, that is half the battle won. Identifying the lesson or sub-topic from which the question is being asked is therefore of paramount importance. The assistant (either voice or chat) can then be loaded with solutions for the major subtopics and concepts, which should cover most queries. For queries that are a bit out of the box, a closure statement can offer useful links and documents to help students resolve them. Since we are talking about a digital extension to classrooms, we can also include external links to videos or notes for the chapter, so students can clarify as much as they can by themselves.
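As a rough illustration of the intent-matching idea described above (a toy sketch, not any particular product's implementation), a minimal assistant could map a student's question to the closest known sub-topic by text similarity. The sub-topics and example questions here are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical mini catalogue of sub-topics a classroom assistant might cover,
# each paired with an example question phrased the way a student might ask it.
examples = [
    ("quadratic_equations", "how do I factor a quadratic equation"),
    ("quadratic_equations", "what is the quadratic formula"),
    ("photosynthesis",      "how do plants make food from sunlight"),
    ("photosynthesis",      "what is chlorophyll used for"),
]
intents, texts = zip(*examples)

vectorizer = TfidfVectorizer().fit(texts)
matrix = vectorizer.transform(texts)

def detect_intent(query: str) -> str:
    """Return the sub-topic whose example question is most similar to the query."""
    sims = cosine_similarity(vectorizer.transform([query]), matrix)[0]
    return intents[sims.argmax()]

print(detect_intent("can you explain the quadratic formula"))  # prints quadratic_equations
```

Once the sub-topic is identified, the assistant can serve the prepared explanation, notes, or video links for that lesson; a real system would need far richer training data and a fallback (the "closure statement" above) for out-of-scope questions.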

The pandemic gave us a whole new perspective on online learning. With online meets replacing actual classes, both students and teachers now understand the workings of online classes very well. Now, with the pandemic on the decline, we should not neglect the online mode of learning but instead nurture it with the use of AI. Online classrooms, if made permanent, can easily improve grades as well as the understanding of concepts, and can also act as an additional platform that students can turn to whenever needed. Though these are just some ideas for utilising our resources to the best effect, we still have a long way to go as far as developing AI technologies in India is concerned. AI is not something that just sits in Tony Stark's lab and can simulate time travel; it is something that requires days and months to develop, and even years to implement in each and every corner of our country.

Amit Kapoor is chair, Institute for Competitiveness, India and visiting scholar, Stanford University. Praveen Senthil is researcher at large at Institute for Competitiveness.


New research center at UMass Amherst will use Artificial Intelligence to improve at-home care for elderly patients – The Boston Globe

What's next for the ever-developing industry of heartbeat-monitoring wristwatches and voice-responsive phones? According to the leaders of a newly endowed Massachusetts research center, the devices' built-in artificial intelligence could prove useful in improving the quality of at-home care for the elderly.

Unveiled this week, the Massachusetts AI and Technology Center for Connected Care in Aging and Alzheimer's Disease, based at the University of Massachusetts Amherst, will work with a confluence of new and existing technologies, drawing on AI to modernize the at-home care industry for those with age-related ailments and for Alzheimer's disease patients.

The project seeks to address what its founders see as a major healthcare disparity that can leave the elderly with a vexing choice: stay home and receive a lower level of care, or leave home for proper treatment.

"More than 90% of older Americans would prefer to stay in their homes as they age," a press release announcing the new center said. "However, the prevalence of chronic illness, including Alzheimer's disease, can make the goal of successful aging at home out of reach without substantial support."

Computer scientists and doctors from Brigham and Womens Hospital, Massachusetts General Hospital, Brandeis University, and Northeastern University will partner on the research, which will be funded by roughly $20 million in grants from the National Institute on Aging distributed over the next five years.

"Artificial intelligence has the potential to transform important areas of science and medicine, but there is a critical need to bring the power of AI to the patients, caregivers and clinicians who need it most," Paul Anderson, senior vice president of research and education at Brigham and Women's Hospital, said in the press release. "This grant will allow experts from across our state to come together to help address this key gap."

If successful, the research would utilize AI to deliver, manage, and adapt treatment and intervention regimes for those with age-related afflictions. So what does that look like?

A key component, said Deepak Ganesan, a professor in UMass Amherst's Robert and Donna Manning College of Information and Computer Sciences, will be improving on the technologies that already exist in devices like smartphones and Apple Watches.

"[We] may look at leveraging existing mobile and wearable devices such as smartphones in new ways," he said in an email. "For example, voice-based interaction using a smartphone may be used to look at changes in the voice patterns that can be used to detect subtle changes in cognitive and physical function for patients with Alzheimer's."

Devices like Apple Watches and Fitbits, which track the steps of the wearer, can be inaccurate when used by older users, he said, because they are not calibrated to track lower speeds. And new sleep trackers can lose accuracy in users with sleep disorders or who wake up to take medications.

"Some of the focus will be on adapting the algorithms such that they can be more accurate when monitoring older adults with a range of impairments," Ganesan said.

The center will also work with new technologies, he said, like devices that allow for patient monitoring without requiring them to wear anything.

And a key component of the research will be distilling the data gathered from patient cohorts and presenting it to patients, caregivers, and clinicians in a digestible way. Together, the adapted technologies and data could create a new system for monitoring elderly patients who want to remain home that sends help when it's needed.

"It's a difficult problem to develop AI-enhanced sensing technologies that work for people where they are," Ganesan said in the press release. "How do you get good, useful data? How do you analyze this data and present it to the patient, caregiver and clinician? And then how can you intervene in a timely manner when a problem develops?"

Andrew Brinker can be reached at andrew.brinker@globe.com. Follow him on Twitter at @andrewnbrinker.

Here is the original post:
New research center at UMass Amherst will use Artificial Intelligence to improve at-home care for elderly patients - The Boston Globe

FEATURE: How is artificial intelligence changing these five industries? – Nantwich News

Technology is growing at an exponential rate.

Smart devices are integrated into our everyday activities, from your home's heating system to the coffee machine.

Artificial intelligence, also known as AI, has developed so quickly that it's hard to keep track.

Many of us encounter AI on a daily basis without even realising it.

Technology has transformed all kinds of industries in recent years from retail to public transport.

AI includes robotics, machine learning, automation, natural language processing and much more.

Let's take a closer look at how AI has impacted these five industries.

Education

AI does not suffer from human bias. It can analyse the profiles of children and produce challenges and solutions for each child.

Of course, a good teacher could do the same thing but it would take much longer.

AI is far more efficient and less likely to make a mistake.

AI plays a big role in the development of children these days and can help us identify learning difficulties.

We can also personalise teaching methods through AI. Everyone learns and tests differently.

With AI, we can adapt the classroom to each student and provide a bespoke learning experience.

Retail

Artificial intelligence can streamline processes and improve customer service.

We have all experienced the frustration of talking to a customer service robot.

In the future, AI will only enhance the customer service experience, and you will still get to talk to real people.

Hopefully, it will help you to access information much more easily and contact customer service reps.

Healthcare

There are likely to be more robots in surgery and virtual nurses.

Sounds terrifying, right? AI will make diagnoses, perform procedures and automate medication services.

Healthcare will become much more efficient, and hopefully, there will be fewer medical negligence cases.

Construction

AI is already embedded in construction power tools.

It can tell you the battery level, temperature and whether anything is broken within the tool.

AI can reduce the number of risks on construction sites and help workers to use tools safely.

But the benefits of machine learning don't stop at safety management.

Director of Product for Milwaukee Power Tools, Steve Matson, commented: "There is an interesting runway in terms of what we can do with the machine learning model when applied to locations."

The company has been incorporating new location technology into their tools, making them easier to find. Matson added: "There is a little bit more secret sauce on the horizon as it pertains to tools."

Public transport

AI analyses the data and best routes available for public transport systems.

You can plan out your journey with the help of artificial intelligence. It will calculate traffic delays, accidents and any roadworks on your journey.

People are far more likely to use public transport when they know exactly where to go and what service to get.

Say goodbye to scanning bus timetables, and hello to the new world of public transport.

Artificial intelligence has greatly benefited the modern world and improved the efficiency of numerous sectors.

Do your research and find out if AI can enhance your life today.

(Pic by mikemacmarketing)

See the rest here:
FEATURE: How is artificial intelligence changing these five industries? - Nantwich News

Renowned Intelligent Speech and Artificial Intelligence Public Listed Company, iFLYTEK Enters a Memorandum of Understanding With Enterprise Singapores…

SINGAPORE, November 03, 2021--(BUSINESS WIRE)--With a market capitalization of US$19 billion, the publicly listed iFLYTEK Co., Ltd. signed a Memorandum of Understanding (MOU) with XNode on Tuesday, 26 October 2021, at Pan Pacific Singapore during the iFLYTEK 1024 Global Developer Festival, organised by XNode Singapore and supported by the Singapore Deep-Tech Alliance. The MOU details XNode's support to facilitate iFLYTEK's entry and expansion into South-East Asian markets.

This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20211103005582/en/

MOU Signing between XNode and iFLYTEK representatives. Left: Ms Clara Chen, GM of XNode Singapore; Right: Ms Zhen Zhen from iFLYTEK. (Photo: Business Wire)

According to Clara Chen, General Manager of XNode Singapore, "Having built global companies from this city-state for the last seventeen years has taught me a few things:

Because startups are launched here to be global from the get-go, Singaporeans instinctively cater to varying economic developments and diverse cultures in building companies,

Localisation of the same product offerings, be it in China, India or Indonesia, is not just about translation, but about context, culture, brand perception, and geopolitical and market realities, and

Trying to conquer ASEAN alone is harder than trying to expand into large homogenous economies like China or the US".

"On the other hand, China's tech giants have a huge domestic market and the drive to put themselves on an international stage. In this regard, Singapores Internationalisation know-how could hugely benefit innovative Chinese companies and open new frontiers together with them."

iFLYTEK's vision of enabling machines to listen and speak, understand and think, is to create a better world with artificial intelligence. The company creates value by easing the burden on teachers and students in schools through teaching students according to their aptitudes, and in healthcare by providing better, faster responses to medical emergencies.


The simulcast festival had earlier called for entries for its themed challenge, aptly titled "A.I. for Intelligent Lifestyle Challenge". More than 9,000 applications and over 700 delivered projects were received for this global challenge. The final Top 3 teams were then selected to pitch at the live forum, attended virtually by hundreds of iFLYTEK senior executives, venture capitalists, media, academia, and government representatives.

For the challenge, the Top 3 Teams are Tictag, led by Mr Kevin Quah; Gleematics, led by Ms Ada Lim; and TopView, led by Mr George Tharian. Judging the 3 Teams are Dr Pauline Tay, Executive Director, Head of Innovation Partnerships, Tech Connect SEA, UBS AG, Singapore; Mr Luuk Eliens, Founding Partner of Singapore Deep-Tech Alliance; and Ms Clara Chen, General Manager of XNode Singapore.

iFLYTEK demonstrated its proprietary real-time translation technology to event attendees in Hefei, China, during the live streaming of the Final Pitch and the "Smart A.I. Education" panel discussion.

The panel speakers for the "Smart A.I. Education" are Dr Andreas Deppeler, Adjunct Associate Professor and Deputy Director of the Centre on AI Technology for Humankind at NUS Business School, National University of Singapore; Mr Koo Sengmeng, Senior Deputy Director for AI Innovation at AI Singapore; and Dr James Ong, Founder and CEO of Origami and Adjunct Professor at Singapore University of Technology and Design. The panel was moderated by Mr Luuk Eliens, Founding Partner of Singapore Deep-Tech Alliance.

The panelists explored the existing use of A.I. in the education industry and its impact on educators and learners, as well as the ethics, risks and governance surrounding A.I.

The esteemed panel speakers concluded with the notion that the future of A.I. is open source, and that this future is already here.

High Resolution Photos are available on this Google Drive: [LINK]

Annex 1 - Panel Speakers Profiles

Dr Andreas Deppeler is an Adjunct Associate Professor and Deputy Director of the Centre on AI Technology for Humankind at NUS Business School, National University of Singapore. He teaches courses on technology, innovation, data value and digital strategy. His research focuses on the economic and societal implications of artificial intelligence. He received a Ph.D. in Theoretical Physics from Rutgers University.

Mr Koo Sengmeng is the Senior Deputy Director for AI Innovation at AI Singapore where he leads the talent and certification programmes and initiatives. He contributes regularly to the technology ecosystem and holds official appointments in PDPC AI Governance Roundtable, IEEE AI Standards Committee and ISO SC42. He co-founded AI Professionals Association in 2020 and holds advisory positions in Singapore Computer Society, Serious Games Association and Chulalongkorn University Technology Center.

Dr James Ong is an entrepreneur and community builder who has incubated and invested in ventures across China and ASEAN. He is the founder and CEO of Origami, which provides strategy, technology and investment advisory services for venturing towards the Autonomous Enterprise. He also founded the Artificial Intelligence International Institute (AIII), a think tank advocating Sustainable AI for Humanity, and is an adjunct professor at SUTD.

Moderator - Mr Luuk Eliens, Founding Partner of Singapore Deep-Tech Alliance. Luuk started his first business at the age of seventeen and has been an entrepreneur ever since. To date, Luuk founded three businesses in the fields of energy monitoring, education and software quality. As a business leader and entrepreneur with a demonstrated track-record in innovation and technology across multiple industries and continents, Luuk has vast experience with innovation from inception to product launch and has guided hundreds of startups and corporate clients to growth and investment.

About iFLYTEK

Founded in 1999, iFLYTEK is a well-known intelligent speech and artificial intelligence publicly listed company in the Asia-Pacific region. Since its establishment, the company has been devoted to cornerstone technological research in speech and languages, natural language understanding, machine learning, machine reasoning and adaptive learning, and has maintained a world-leading position in those domains. The company actively promotes the development of A.I. products and their sector-based applications, with the vision of enabling machines to listen and speak, understand and think, creating a better world with artificial intelligence. In 2008, iFLYTEK went public on the Shenzhen Stock Exchange (stock code 002230).

For more information, please visit https://www.iflytek.com/

About XNode

XNode is Enterprise Singapore's Global Innovation Alliance (GIA) partner for China. It helps Singapore technology startups and SMEs set up, test-bed and commercialise their solutions, or co-innovate with partners in Shanghai and Shenzhen, through a series of highly customised programmes and activities that grant them access to the Chinese market, including potential investors, partners, customers and talent resources.

Connect with us on Website | LinkedIn | Facebook

About Singapore Deep-Tech Alliance

Singapore Deep-Tech Alliance (SDTA) is an impact-driven deep-tech venture builder that brings together entrepreneurs and technical talents to take advanced technologies from lab to market in 9 months. The Alliance's mission is to reduce the environmental impact of businesses by empowering founders to rapidly build, validate and scale Industry 4.0 startups, supporting them with world-class technologies, investment, networks and skills. A public-private partnership between XNode, A*STAR and NHIC, SDTA's partners include corporations such as OMRON, Micron, TÜV SÜD, Sunningdale Tech Ltd, and PlanetSpark.

Connect with us on Website | LinkedIn | YouTube


View source version on businesswire.com: https://www.businesswire.com/news/home/20211103005582/en/

Contacts

For media enquiries:

Ms Clara Chen, General Manager, XNode Singapore. E: Clara.Chen@theXNode.sg, M: +65 9437 1808

Mr Jeffery Wang, Director of South Pacific Region, International Cooperation Division, iFLYTEK Co., Ltd. E: hrwang3@iflytek.com, M: +86-186-559-591-00

Go here to see the original:
Renowned Intelligent Speech and Artificial Intelligence Public Listed Company, iFLYTEK Enters a Memorandum of Understanding With Enterprise Singapores...

Dask-ML dask-ml 1.8.1 documentation

Dask-ML provides scalable machine learning in Python using Dask alongside popular machine learning libraries like Scikit-Learn, XGBoost, and others.

People may run into scaling challenges along a couple of dimensions, and Dask-ML offers tools for addressing each.

The first kind of scaling challenge comes from your models growing so large or complex that they affect your workflow (shown along the vertical axis above). Under this scaling challenge, tasks like model training, prediction, or evaluation steps will (eventually) complete; they just take too long. You've become compute-bound.

To address these challenges you'd continue to use the collections you know and love (like the NumPy ndarray, pandas DataFrame, or XGBoost DMatrix) and use a Dask Cluster to parallelize the workload on many machines. The parallelization can occur through one of our integrations (like Dask's joblib backend to parallelize Scikit-Learn directly) or one of Dask-ML's estimators (like our hyper-parameter optimizers).
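The joblib route requires no changes to the Scikit-Learn code itself, only a context manager around the fit. A minimal sketch, assuming dask[distributed] and scikit-learn are installed; the in-process client here stands in for a real multi-machine cluster:

```python
import joblib
from dask.distributed import Client
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Importing dask.distributed registers the "dask" joblib backend;
# an in-process client stands in for a real cluster in this sketch.
client = Client(processes=False)

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
search = GridSearchCV(SVC(), {"C": [0.1, 1.0, 10.0]}, cv=3)

# Inside this context, joblib dispatches the cross-validation fits
# to Dask workers instead of running them locally.
with joblib.parallel_backend("dask"):
    search.fit(X, y)

print(search.best_params_)
client.close()
```

Pointing `Client` at a scheduler address instead of running in-process is all it takes to spread the same search over many machines.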

The second type of scaling challenge people face is when their datasets grow larger than RAM (shown along the horizontal axis above). Under this scaling challenge, even loading the data into NumPy or pandas becomes impossible.

To address these challenges, you'd use one of Dask's high-level collections (Dask Array, Dask DataFrame, or Dask Bag) combined with one of Dask-ML's estimators that are designed to work with Dask collections. For example, you might use Dask Array and one of our preprocessing estimators in dask_ml.preprocessing, or one of our ensemble methods in dask_ml.ensemble.

It's worth emphasizing that not everyone needs scalable machine learning. Tools like sampling can be effective. Always plot your learning curve.

In all cases Dask-ML endeavors to provide a single unified interface around the familiar NumPy, pandas, and Scikit-Learn APIs. Users familiar with Scikit-Learn should feel at home with Dask-ML.

Other machine learning libraries like XGBoost already have distributed solutions that work quite well. Dask-ML makes no attempt to re-implement these systems. Instead, Dask-ML makes it easy to use normal Dask workflows to prepare and set up data, then it deploys XGBoost alongside Dask, and hands the data over.

See Dask-ML + XGBoost for more information.

Excerpt from:
Dask-ML dask-ml 1.8.1 documentation