The Future of AI and CX in Today’s COVID-19 World – AiThority

The global coronavirus pandemic is dramatically changing our world, including the landscape of customer experience (CX) much faster than the marketing and media industries could have anticipated.

With people at home, brick-and-mortar businesses have had to quickly adopt new digital strategies to provide their customers with what they need right now. To deliver on customer expectations, the best brands have strategies that continuously develop relationships through a series of thoughtful interactions. The result is an increasingly hyper-personalized experience across the customer journey, usually backed by artificial intelligence (AI).

Companies that are already using AI in their CX efforts need to adjust their strategies to our world's collective new normal. Customers' experiences are underscored by anxiety, concern, stress, and confusion, and today's AI must be emotionally intelligent. With this new and ever-changing landscape in mind, the following are the areas where marketing and customer experience leaders must shift accordingly.


Hyper-personalization is the CX term for it, but the root value is actually empathy. Human beings want to feel known; it's about trust and comfort (especially at a time like this). Businesses can (and should) make their customers feel known and valued with digital experiences. AI makes this possible across huge swaths of customers in a digital landscape.

Personalization tactics have grown well beyond simply using someone's name or location in an email campaign.

By continuously building up a healthy mix of profile data (name, age, preferences, etc.) and behavioral data (what the customer does at your various touchpoints), companies can send timely, personalized communications or create unique experiences that are specific and helpful to each customer.
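
To make the idea concrete, here is a minimal, hypothetical Python sketch of how profile and behavioral data might be combined to trigger a timely, personalized message. The field names, the two-day window, and the abandoned-cart rule are illustrative assumptions, not any particular platform's logic: behavior supplies the trigger, while the profile supplies the name and channel.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class CustomerProfile:
    """Profile data: who the customer is (illustrative fields)."""
    name: str
    preferred_channel: str               # e.g. "email" or "push"
    interests: list = field(default_factory=list)

@dataclass
class BehavioralEvent:
    """Behavioral data: what the customer did at a touchpoint."""
    event_type: str                      # e.g. "viewed_product", "abandoned_cart"
    item: str
    timestamp: datetime

def next_best_message(profile, events):
    """Combine profile and recent behavior into one timely, personalized message."""
    cutoff = datetime.now() - timedelta(days=2)
    recent_carts = [e for e in events
                    if e.event_type == "abandoned_cart" and e.timestamp > cutoff]
    if recent_carts:
        # Behavior supplies the trigger; profile supplies the name and channel.
        return (f"{profile.name}, the {recent_carts[-1].item} you looked at is "
                f"still available (to be sent via {profile.preferred_channel})")
    return None

profile = CustomerProfile("Ada", "email", interests=["running"])
events = [BehavioralEvent("abandoned_cart", "trail shoes", datetime.now())]
print(next_best_message(profile, events))
```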

A great example of a company collecting data to empower hyper-personalization is Spotify. The streaming music app, used by millions, regularly looks at data to automate song suggestions and create daily or weekly playlists. While other streaming services offer song suggestions based on your listening preferences, few actually predict whether you will or won't like a new album (at least not with the success rate I find on Spotify).

Spotify also suggests playlists based on world events and situations that users are likely facing, humanizing the experience. For example, the company released a COVID-19 quarantine playlist for those needing some upbeat music (or meditation, study music, etc.) in their lives. Spotify's ability to deliver on that experience, and then to continually nurture a relationship with its customers, is based entirely on its progressive use of data and AI.

Being able to collect, decode and leverage complex data sets is essential for meeting CX demands during this quarantine period.

Since personalization is core to a dynamic CX, companies need to consider new and interesting ways to connect the data they have and to continually refine CX profiles for accuracy. Customer data should be drawn from, and influence, every stage of the customer journey: from marketing, sales, and customer retention to product management and customer support. The entire digital ecosystem of data should be a collaborative touchpoint between product development, marketing, and support.

Trust is an essential component of CX, particularly right now during this time of uncertainty. While customers are no doubt becoming more and more comfortable with the benefits of personalization, they get turned off if they think a company isn't being responsible with their data. Building AI solutions that allow users to progressively provide information in exchange for real value is paramount.

While the promise of AI around automation and personalization is exciting, the narrative a company builds around AI and CX strategies needs to align closely with customer needs and expectations. Customers already want and expect hyper-personalization; they just don't want to think about what it took for a company to get there. Given our reliance on the digital world in our new reality, it's more important than ever that companies are transparent and good stewards of their customers' data.

AI also needs to be able to adapt to unprecedented circumstances and override some personalization settings in case of a crisis. Specifically, CX needs to include awareness of potential news events so that customers aren't being served distressing or inappropriate ads.
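
As a rough sketch of what "overriding personalization in a crisis" might look like in code, the snippet below filters out ad categories flagged as insensitive while a crisis flag is active. The category names and the flag are hypothetical, purely for illustration.

```python
# Hypothetical categories considered insensitive during a public-health crisis.
CRISIS_SUPPRESSED_CATEGORIES = {"travel_deals", "event_tickets", "cruise_offers"}

def filter_ads(candidate_ads, crisis_mode):
    """Drop ads whose category is flagged as inappropriate while a crisis is active.

    Each ad is assumed to be a dict with at least a 'category' key.
    """
    if not crisis_mode:
        return candidate_ads
    return [ad for ad in candidate_ads
            if ad.get("category") not in CRISIS_SUPPRESSED_CATEGORIES]

ads = [{"id": 1, "category": "cruise_offers"},
       {"id": 2, "category": "grocery_delivery"}]
print(filter_ads(ads, crisis_mode=True))   # only the grocery ad survives
```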

For example, takeout and delivery apps like GrubHub and Postmates have pop-up notifications about COVID-19, which also remind users about the impact this pandemic has on the entire restaurant industry (e.g., your order might take longer than usual due to staff shortages, or restaurants that are not open might not be accurately reflected in the app).

The old-fashioned face-to-face, human-to-human customer service experience can't be replicated across millions of online customers. But in times like this, if companies want to grow and set themselves apart from others, AI needs to be used primarily as a tool for automating and analyzing customer data collection so the CX can stay relevant and emotionally attuned to today's ever-changing landscape.

This marriage of AI and CX will help companies develop a strategy for leveraging hyper-personalized data to give their customers what they truly want and need.


Original post:

The Future of AI and CX in Today's COVID-19 World - AiThority

Aisera, an AI tool to help with customer service and internal operations, exits stealth with $50M – TechCrunch

Robotic process automation, the ability to automate certain repetitive software-based tasks to free up people to focus on work that computers cannot do, has become a major growth area in the world of IT. Today, a startup called Aisera that is coming out of stealth has taken this idea and supercharged it by using artificial intelligence to help not just workers with internal tasks, but in customer-facing environments, too.

Quietly operating under the radar since 2017, Aisera has picked up a significant list of customers, including Autodesk, Ciena, Unisys and McAfee, covering a range of use cases "from computer geeks with very complicated questions through to people who didn't grow up in the computer generation," says CEO Muddu Sudhakar, the serial entrepreneur (three previous startups, Kazeon, Cetas and Caspida, were respectively acquired by EMC, VMware and Splunk) who is Aisera's co-founder.

With growth of 350% year-on-year, the company is also announcing today that it has raised $50 million to date, including most recently a $20 million Series B led by Norwest Venture Partners with Menlo Ventures, True Ventures, Khosla Ventures, First Round Capital, Ram Shriram and Maynard Webb Investments also participating.

(No valuation is being disclosed, said Sudhakar.)

The crux of the problem that Aisera has set out to solve is that, while RPA has identified a degree of repetition in certain back-office tasks (work which, if automated, can reduce operational costs and make an organization more efficient), the same can be said for a wide array of IT processes that cover sales, HR, customer care and more.

There have been some efforts made to apply AI to solving different aspects of these particular use cases, but one of the issues has been that there are few solutions that sit above an organization's software stack to work across everything the organization uses, and do so in an unsupervised way; that is, using AI to learn processes without an army of engineers alongside the program training it.

Aisera aims to be that platform, integrating with the most popular software packages (for example in service desk apps, it integrates with Salesforce, ServiceNow, Atlassian and BMC), providing tools to automatically resolve queries and complete tasks. Aisera is looking to add more categories as it grows: Sudhakar mentioned legal, finance and facilities management as three other areas it's planning to target.

Matt Howard, the partner at Norwest that led its investment in Aisera, said one of the other things that stands out for him about the company is that its tools work across multiple channels, including email, voice-based calls and messaging, and can operate at scale, something that can't be said in actual fact for a lot of AI implementations.

"I think a lot of companies have overstated when they implement machine learning. A lot of times it's actually big data and predictive analytics. We have mislabeled a lot of this," he said in an interview. "AI as a rule is hard to maintain if it's unsupervised. It can work very well in a narrow use case, but it becomes a management nightmare when handling the stress that comes with 15 million or 20 million queries." Currently, Aisera said that it handles about 10 million people on its platform. With this round, Howard and Jon Callaghan of True Ventures are both joining the board.

There is always a paradox of sorts in the world of AI, and in particular as it sits around and behind processes that have previously been done by humans. It is that AI-based assistants, as they get better, run the risk of ultimately making obsolete the workers they're meant to help.

While that might be a long-term question that we will have to address as a society, for now, the reward/risk balance seems to tip more in the favour of reward for Aisera's customers. "At Ciena, we want our employees to be productive," said Craig Williams, CIO at Ciena, in a statement. "This means they shouldn't be trying to figure out how a ticketing tool works, nor should they be waiting around for a tech to fix their issues. We believe that 75 percent of all incidents can be resolved through Aisera's technology, and we believe we can apply Aisera across multiple platforms. Aisera doesn't just make great AI technology, they understand our problems and partner with us closely to achieve our mission."

And Sudhakar, similar to the founders of would-be competitor startups like UiPath when asked the same kind of question, doesn't feel that obsolescence is the end game, either.

"There are billions of people in call centres today," he said in an interview. "If I can automate [repetitive] functions they can focus on higher-level work, and that's what we wanted to do. Those trying to solve simple requests shouldn't. It's one example where AI can be put to good use. Help desk employees want to work and become programmers, they don't want to do mundane tasks. They want to move up in their careers, and this can help give them the roadmap to do it."

See the article here:

Aisera, an AI tool to help with customer service and internal operations, exits stealth with $50M - TechCrunch

AI file extension – Open, view and convert .ai files

The ai file extension is associated with Adobe Illustrator, the well-known vector graphics editor for the Macintosh and Windows platforms.

The AI file format is a widely used format for the exchange of 2D objects. Basic files in this format are simple to write, but files created by applications implementing the full AI specification can be quite large and complex and may be too slow to render.

Simple .ai files are easy to construct, and a program can create files that can be read by any AI reader or printed on any PostScript printer. Reading AI files is another matter entirely. Certain operations may be very difficult for a rendering application to implement or simulate. In light of this, developers often choose not to render the image from the PostScript-subset line data in the file. However, almost all of the image can usually be reconstructed using simple operations rather than a full implementation of the PostScript language.

The .ai files consist of a series of ASCII lines, which may be comments, data, commands, or combinations of commands and data. Newer files are based on the PDF language specification, while older versions of Adobe Illustrator used a format that is a variant of the Adobe Encapsulated PostScript (EPS) format.

If EPS is a slightly limited subset of full PostScript, then the Adobe Illustrator AI format is a strictly limited, highly simplified subset of EPS. While EPS can contain virtually any PS command that's not on the verboten list, and can include elaborate program-flow logic that determines what gets printed when, an AI file is limited to a much smaller number of drawing commands and contains no programming logic at all. For all practical purposes, each unit of "code" in an AI file represents a drawing object. The program importing the AI file reads each object in sequence, start to finish, no detours, no logical side-trips.

MIME types: application/postscript
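
Because an .ai file is essentially PostScript- or PDF-flavoured data, a program can often tell how to handle one just by inspecting its first bytes: older, EPS-style files typically begin with a "%!PS-Adobe" comment, while newer, PDF-based files begin with "%PDF". The Python sketch below assumes those common markers; it is a rough heuristic, not a guarantee for every Illustrator version.

```python
def sniff_ai_flavour(path):
    """Guess whether an .ai file is PostScript/EPS-flavoured or PDF-based.

    Older Illustrator files are an EPS variant and usually start with
    '%!PS-Adobe'; newer ones are PDF-based and start with '%PDF'.
    """
    with open(path, "rb") as f:
        header = f.read(16)
    if header.startswith(b"%!PS-Adobe"):
        return "postscript/eps variant"
    if header.startswith(b"%PDF"):
        return "pdf-based"
    return "unknown"

# Example usage (assumes a local file): print(sniff_ai_flavour("logo.ai"))
```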

Here is the original post:

AI file extension - Open, view and convert .ai files

Metro Bank and Sensibill partner on AI money management | Technology & AI – FinTech Magazine – The FinTech & InsurTech Platform

UK-based Metro Bank has announced details of its collaboration with Canadian tech firm Sensibill to provide business customers with enhanced AI tools.

Specifically, new features like receipt management capabilities will be added to Metro Bank's app, providing SMBs with a simple but powerful method of capturing and storing records of their transactions.

For users the process is simple: photographs of receipts are taken with a device's in-built camera, and then AI (artificial intelligence) and ML (machine learning) software is used to auto-populate the user's transaction history, including VAT.
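
Sensibill's actual models are proprietary, but the general shape of the pipeline described above (OCR text from a receipt photo turned into a structured transaction record with a VAT figure) can be sketched roughly as follows; the regular expressions, field names, and sample receipt are assumptions for illustration only.

```python
import re
from dataclasses import dataclass

@dataclass
class ReceiptRecord:
    merchant: str
    total: float
    vat: float

def parse_receipt_text(ocr_text):
    """Turn raw OCR text from a receipt photo into a structured record.

    A production system would use trained models; this sketch just pattern-matches
    the lines a typical UK receipt prints for the total and the VAT amount.
    """
    lines = [line.strip() for line in ocr_text.splitlines() if line.strip()]
    merchant = lines[0] if lines else "unknown"
    total = vat = 0.0
    for line in lines:
        if m := re.search(r"TOTAL\s+£?(\d+\.\d{2})", line, re.I):
            total = float(m.group(1))
        elif m := re.search(r"VAT\s+£?(\d+\.\d{2})", line, re.I):
            vat = float(m.group(1))
    return ReceiptRecord(merchant, total, vat)

sample = "CORNER CAFE\nFlat white 3.20\nTOTAL £3.20\nVAT £0.53"
print(parse_receipt_text(sample))
# ReceiptRecord(merchant='CORNER CAFE', total=3.2, vat=0.53)
```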

With UK SMBs projected to lose up to 15 cumulative days per year while trying to balance company expenditure records (two hours each week), the utility of an easy, automated solution for businesses is clear.

"We're thrilled to partner with Sensibill to provide our business customers with essential money management tools, easily accessible from our mobile app. These will empower SMBs to free up time in a way that wasn't possible before, to spend running and growing their businesses," said David Thomasson, Chief Commercial Officer at Metro Bank.

"So many small businesses are facing uncertainty because of coronavirus. We want to keep delivering new tools for our customers that can make managing their money a little easier."

Helping customers build efficiency

The rollout of Sensibill follows a successful trial period in 2019. Sensibill is dedicated to improving customer engagement and creating mutual understanding between customers and the financial services institutions that serve them.

Winner of the Best Mobile Banking Innovation award at last year's Financial Innovation Awards, the company's collaboration with Metro Bank signifies its potential for expanding even further across the UK finance market.

Regarding the announcement that Metro Bank considered the trial period successful, Corey Gross, co-founder and CEO, commented: "[It] understands that small businesses and gig workers need a better, simpler way to track their finances and manage expenses."

"By leveraging our solution, the bank's small businesses can regain hours once lost to analysing paper receipts and run their businesses more effectively, which is especially critical in light of the pandemic."

"This partnership reflects Metro Bank's deep dedication to providing advanced technology and support to help the people they serve succeed financially, both now and in the future."

The upgraded app is currently only available on iOS. However, Metro Bank assures Android users that its new features will be made available to them in the coming weeks.

The rest is here:

Metro Bank and Sensibill partner on AI money management | Technology & AI - FinTech Magazine - The FinTech & InsurTech Platform

Engadget is testing all the major AI assistants – Engadget

Hardly a day goes by that we don't cover virtual assistants. If it's not news about Siri, there's some new development with Alexa, or Cortana or Google Assistant. Perhaps a new player, like Samsung, is wading into the space. Even Android creator Andy Rubin is considering building an assistant of his own. And his company probably isn't the only one that thinks there's room for another AI helper.

With virtual assistants becoming such an integral part of our lives (or at least our tech-news diets), we felt it was time to stop and take stock of everything that's happening here. For one week, we asked five Engadget reporters to live with one of the major assistants: Apple's Siri, Amazon's Alexa, the Google Assistant, Microsoft's Cortana and Samsung's Bixby. What you'll see on Engadget throughout the week aren't reviews, per se, nor did we endeavor to crown the "best" digital assistant. Not only is that a subjective question but, as it turns out, none of the assistants are as smart or reliable as we'd like.

In the absence of a winner, then, what we have is a state of the union: a picture of where AI helpers stand and where they're headed. Follow our series here. And, at the rate each of these assistants is maturing, don't be surprised if we revisit them sooner than later.

This week Engadget is examining each of the five major virtual assistants, taking stock of how far they've come and how far they still have to go. Find all our coverage here.

See the original post here:

Engadget is testing all the major AI assistants - Engadget

Indian, German engineers working on an AI-powered brain: what does it mean for social (dis)order? – YourStory.com

Rolf Bulander, Chairman for Bosch Mobility, says societies must decide how artificial intelligence will be implemented in their cultures, especially because in 20 years' time, 41 megacities will be home to 6 billion people.

Nearly 3,000 engineers from Bosch in both Stuttgart and Bengaluru have one thing in common: they are putting their brains, figuratively speaking, into a super brain. This brain can crunch 30 trillion data points per second and will process data three times faster than a human brain can. This brain, powered by artificial intelligence, has no reason to feel guilty about anything in daily life because it is designed not to make mistakes. It is what Yuval Noah Harari predicted would be the next phase of evolution of Homo sapiens: being connected to all things around us. While the engineers are not going as far as putting little microchips in our brains yet, the AI-powered brain will start off in our cars and help protect our environment, along with offering us safety and stress-free driving.

The engineers from Bosch, together with Daimler, formed an alliance to put self-driving cars on the roads this year.

"From automotive cloud suite to e-scooters, software connects people from home to work and helps them discover experiences around you," says Rolf Bulander, Chairman of the Mobility Services at Robert Bosch GmbH. He adds that the car will be the third living experience and will have gesture and voice control: "The objective is to save lives because 90 percent of accidents are caused by human error and artificial intelligence (AI) will reduce this in automated and driverless cars."

Leaders at Bosch add that although they are likely to work with startups, there aren't many that have made significant advances in R&D in AI. "We are making our own investments in AI because of the capabilities we have built over time. I also see the startup market heating up but we have not seen many advances in AI from startups; there are very few of them out there," says Dirk Hoheisel, member of the board of management at Robert Bosch GmbH. The world, he believes, is becoming more collaborative thanks to AI.

There are fundamental questions to answer about the co-existence of human intelligence and artificial intelligence. "With India, the story of AI is one of opportunity and chaos. Can we mix the human quotient with AI and create a sustainable livelihood? AI makes living better, but many societies are not ready for that change. The fundamental question to ask is how society or different cultures will use AI. And they must decide for themselves the future of AI in their (respective) regions," explains Rolf.

The question about how different cultures will use AI is one of the most important questions for mankind, and Rolf points to Germany as an example: the German government has appointed a judge of the constitutional court to head an ethics committee on AI in cars. The aim is to find the answer to a fundamental question: how does the car decide whose life it has to save? Based on the age of the passengers, does it protect the younger person over the older one? The one who has a better chance of survival after an accident or the one who needs critical medical attention?

The future, though, cannot be about stopping technology. Companies like Bosch and others will push boundaries that make humans reinvent themselves in relation to what technology can do.

Excerpt from:

Indian, German engineers working on an AI-powered brain what does it mean for social (dis)order? - YourStory.com

More than half of Europeans want to replace lawmakers with AI, study says – CNBC


LONDON - A study has found that most Europeans would like to see some of their members of parliament replaced by algorithms.

Researchers at IE University's Center for the Governance of Change asked 2,769 people from 11 countries worldwide how they would feel about reducing the number of national parliamentarians in their country and giving those seats to an AI that would have access to their data.

The results, published Thursday, showed that despite AI's clear and obvious limitations, 51% of Europeans said they were in favor of such a move.

Oscar Jonsson, academic director at IE University's Center for the Governance of Change and one of the report's main researchers, told CNBC that there's been a "decades long decline of belief in democracy as a form of governance."

The reasons are likely linked to increased political polarization, filter bubbles and information splintering, he said. "Everyone's perception is that politics is getting worse and obviously politicians are being blamed so I think it (the report) captures the general zeitgeist," Jonsson said. He added that the results aren't that surprising "given how many people know their MP, how many people have a relationship with their MP (and) how many people know what their MP is doing."

The study found the idea was particularly popular in Spain, where 66% of people surveyed supported it. Elsewhere, 59% of the respondents in Italy were in favor and 56% of people in Estonia.

Not all countries like the idea of handing over control to machines, which can be hacked or act in ways that humans don't want them to. In the U.K., 69% of people surveyed were against the idea, while 56% were against it in the Netherlands and 54% in Germany.

Outside Europe, some 75% of people surveyed in China supported the idea of replacing parliamentarians with AI, while 60% of American respondents opposed it.

Opinions also vary dramatically by generation, with younger people found to be significantly more open to the idea. Over 60% of Europeans aged 25-34 and 56% of those aged 34-44 were in support of the idea, whereas a majority of respondents above 55 years old did not see it as a good idea.

Read the original:

More than half of Europeans want to replace lawmakers with AI, study says - CNBC

AI and the coronavirus fight: How artificial intelligence is taking on COVID-19 – ZDNet

As the COVID-19 coronavirus outbreak continues to spread across the globe, companies and researchers are looking to use artificial intelligence as a way of addressing the challenges of the virus. Here are just some of the projects using AI to address the coronavirus outbreak.

Using AI to find drugs that target the virus

A number of research projects are using AI to identify drugs that were developed to fight other diseases but which could now be repurposed to take on coronavirus. By studying the molecular setup of existing drugs with AI, companies want to identify which ones might disrupt the way COVID-19 works.

BenevolentAI, a London-based drug-discovery company, began turning its attentions towards the coronavirus problem in late January. The company's AI-powered knowledge graph can digest large volumes of scientific literature and biomedical research to find links between the genetic and biological properties of diseases and the composition and action of drugs.


The company had previously been focused on chronic disease, rather than infections, but was able to retool the system to work on COVID-19 by feeding it the latest research on the virus. "Because of the amount of data that's being produced about COVID-19 and the capabilities we have in being able to machine-read large amounts of documents at scale, we were able to adapt [the knowledge graph] so to take into account the kinds of concepts that are more important in biology, as well as the latest information about COVID-19 itself," says Olly Oechsle, lead software engineer at BenevolentAI.

While a large body of biomedical research has built up around chronic diseases over decades, COVID-19 only has a few months' worth of studies attached to it. But researchers can use the information that they have to track down other viruses with similar elements, see how they function, and then work out which drugs could be used to inhibit the virus.

"The infection process of COVID-19 was identified relatively early on. It was found that the virus binds to a particular protein on the surface of cells called ACE2. And what we could with do with our knowledge graph is to look at the processes surrounding that entry of the virus and its replication, rather than anything specific in COVID-19 itself. That allows us to look back a lot more at the literature that concerns different coronaviruses, including SARS, etc. and all of the kinds of biology that goes on in that process of viruses being taken in cells," Oechsle says.

The system suggested a number of compounds that could potentially have an effect on COVID-19 including, most promisingly, a drug called Baricitinib. The drug is already licensed to treat rheumatoid arthritis. The properties of Baricitinib mean that it could potentially slow down the process of the virus being taken up into cells and reduce its ability to infect lung cells. More research and human trials will be needed to see whether the drug has the effects AI predicts.
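
BenevolentAI's knowledge graph is far larger and richer than anything that fits here, but the basic idea (walk from the disease to the host processes it depends on, then to proteins, then to drugs already known to act on them) can be illustrated with a toy graph. The miniature below hard-codes the ACE2/Baricitinib chain described in the article; it is a conceptual sketch, not the company's data or method, and the intermediate kinase node is included only as an assumed example.

```python
from collections import deque

# Toy knowledge graph: disease -> host process -> protein -> drug.
# The ACE2 / Baricitinib chain mirrors what the article describes; the rest of
# the graph structure is a hypothetical miniature.
EDGES = {
    "COVID-19": ["ACE2-mediated cell entry"],
    "ACE2-mediated cell entry": ["AAK1"],   # kinase involved in endocytosis (assumed example)
    "AAK1": ["Baricitinib"],                # arthritis drug reported to inhibit this target
}
DRUG_NODES = {"Baricitinib"}

def candidate_drugs(disease):
    """Breadth-first walk outward from a disease node, collecting drug nodes reached."""
    seen, found = {disease}, set()
    queue = deque([disease])
    while queue:
        for neighbour in EDGES.get(queue.popleft(), []):
            if neighbour in seen:
                continue
            seen.add(neighbour)
            if neighbour in DRUG_NODES:
                found.add(neighbour)
            queue.append(neighbour)
    return found

print(candidate_drugs("COVID-19"))   # {'Baricitinib'}
```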

Shedding light on the structure of COVID-19

DeepMind, the AI arm of Google's parent company Alphabet, is using data on genomes to predict organisms' protein structure, potentially shedding light on which drugs could work against COVID-19.

DeepMind has released a deep-learning library called AlphaFold, which uses neural networks to predict how the proteins that make up an organism curve or crinkle, based on their genome. Protein structures determine the shape of receptors in an organism's cells. Once you know what shape the receptor is, it becomes possible to work out which drugs could bind to them and disrupt vital processes within the cells: in the case of COVID-19, disrupting how it binds to human cells or slowing the rate it reproduces, for example.

After training up AlphaFold on large genomic datasets, which demonstrate the links between an organism's genome and how its proteins are shaped, DeepMind set AlphaFold to work on COVID-19's genome.

"We emphasise that these structure predictions have not been experimentally verified, but hope they may contribute to the scientific community's interrogation of how the virus functions, and serve as a hypothesis generation platform for future experimental work in developing therapeutics," DeepMind said. Or, to put it another way, DeepMind hasn't tested out AlphaFold's predictions outside of a computer, but it's putting the results out there in case researchers can use them to develop treatments for COVID-19.

Detecting the outbreak and spread of new diseases

Artificial-intelligence systems were thought to be among the first to detect that the coronavirus outbreak, back when it was still localised to the Chinese city of Wuhan, could become a full-on global pandemic.

It's thought that AI-driven HealthMap, which is affiliated with the Boston Children's Hospital, picked up the growing cluster of unexplained pneumonia cases shortly before human researchers, although it only ranked the outbreak's seriousness as 'medium'.

"We identified the earliest signs of the outbreak by mining in Chinese language and local news media -- WeChat, Weibo -- to highlight the fact that you could use these tools to basically uncover what's happening in a population," John Brownstein, professor of Harvard Medical School and chief innovation officer at Boston Children's Hospital, told the Stanford Institute for Human-Centered Artificial Intelligence's COVID-19 and AI virtual conference.

Human epidemiologists at ProMed, an infectious-disease-reporting group, published their own alert just half an hour after HealthMap, and Brownstein also acknowledged the importance of human virologists in studying the spread of the outbreak.

"What we quickly realised was that as much it's easy to scrape the web to create a really detailed line list of cases around the world, you need an army of people, it can't just be done through machine learning and webscraping," he said. HealthMap also drew on the expertise of researchers from universities across the world, using "official and unofficial sources" to feed into theline list.

The data generated by HealthMap has been made public, to be combed through by scientists and researchers looking for links between the disease and certain populations, as well as containment measures. The data has already been combined with data on human movements, gleaned from Baidu, to see how population mobility and control measures affected the spread of the virus in China.

HealthMap has continued to track the spread of coronavirus throughout the outbreak, visualising its spread across the world by time and location.

Spotting signs of a COVID-19 infection in medical images

Canadian startup DarwinAI has developed a neural network that can screen X-rays for signs of COVID-19 infection. While using swabs from patients is the default for testing for coronavirus, analysing chest X-rays could offer an alternative to hospitals that don't have enough staff or testing kits to process all their patients quickly.

DarwinAI released COVID-Net as an open-source system, and "the response has just been overwhelming", says DarwinAI CEO Sheldon Fernandez. More datasets of X-rays were contributed to train the system, which has now learnt from over 17,000 images, while researchers from Indonesia, Turkey, India and other countries are all now working on COVID-19. "Once you put it out there, you have 100 eyes on it very quickly, and they'll very quickly give you some low-hanging fruit on ways to make it better," Fernandez said.

The company is now working on turning COVID-Net from a technical implementation to a system that can be used by healthcare workers. It's also now developing a neural network for risk-stratifying patients that have contracted COVID-19 as a way of separating those with the virus who might be better suited to recovering at home in self-isolation, and those who would be better coming into hospital.

Monitoring how the virus and lockdown is affecting mental health

Johannes Eichstaedt, assistant professor in Stanford University's department of psychology, has been examining Twitter posts to estimate how COVID-19, and the changes that it's brought to the way we live our lives, is affecting our mental health.

Using AI-driven text analysis, Eichstaedt queried over two million tweets hashtagged with COVID-related terms during February and March, and combined them with other datasets on relevant factors, including the number of cases, deaths, demographics and more, to illuminate the virus' effects on mental health.
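
A heavily simplified sketch of that kind of analysis: keep only posts carrying a COVID-related hashtag, then count topic keywords per region so the counts can later be joined with case, death, and demographic data. The hashtags, topics, and sample posts below are placeholders, not Eichstaedt's actual pipeline.

```python
from collections import Counter

COVID_TAGS = {"#covid19", "#coronavirus", "#quarantine"}        # illustrative
TOPIC_KEYWORDS = {
    "prevention": {"wash", "hands", "mask", "distancing"},
    "economy": {"economy", "jobs", "market"},
}

def topic_counts(posts):
    """Count topic-keyword mentions per region for posts carrying a COVID hashtag.

    Each post is assumed to look like {"region": ..., "text": ...}.
    """
    counts = {}
    for post in posts:
        words = set(post["text"].lower().split())
        if not COVID_TAGS & words:
            continue                      # skip posts without a COVID-related hashtag
        region = counts.setdefault(post["region"], Counter())
        for topic, keywords in TOPIC_KEYWORDS.items():
            region[topic] += len(keywords & words)
    return counts

posts = [{"region": "urban", "text": "Wash your hands and wear a mask #covid19"},
         {"region": "rural", "text": "Worried about jobs and the economy #covid19"}]
print(topic_counts(posts))
# {'urban': Counter({'prevention': 3, 'economy': 0}),
#  'rural': Counter({'economy': 2, 'prevention': 0})}
```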

The analysis showed that much of the COVID-19-related chat in urban areas was centred on adapting to living with, and preventing the spread of, the infection. Rural areas discussed adapting far less, which the psychologist attributed to the relative prevalence of the disease in urban areas compared to rural, meaning those in the country have had less exposure to the disease and its consequences.


There are also differences in how the young and old are discussing COVID-19. "In older counties across the US, there's talk about Trump and the economic impact, whereas in young counties, it's much more problem-focused coping; the one language cluster that stands out there is that in counties that are younger, people talk about washing their hands," Eichstaedt said.

"We really need to measure the wellbeing impact of COVID-19, and we very quickly need to think about scalable mental healthcare and now is the time to mobilise resources to make that happen," Eichstaedt told the Stanford virtual conference.

Forecasting how coronavirus cases and deaths will spread across cities and why

Google-owned machine-learning community Kaggle is setting a number of COVID-19-related challenges to its members, including forecasting the number of cases and fatalities by city as a way of identifying exactly why some places are hit worse than others.

"The goal here isn't to build another epidemiological model there are lots of good epidemiological models out there. Actually, the reason we have launched this challenge is to encourage our community to play with the data and try and pick apart the factors that are driving difference in transmission rates across cities," Kaggle's CEO Anthony Goldbloom told the Stanford conference.

Currently, the community is working on a dataset of infections in 163 countries from two months of this year to develop models and interrogate the data for factors that predict spread.

Most of the community's models have been producing feature-importance plots to show which elements may be contributing to the differences in cases and fatalities. So far, said Goldbloom, latitude and longitude are showing up as having a bearing on COVID-19 spread. The next generation of machine-learning-driven feature-importance plots will tease out the real reasons for geographical variances.

"It's not the country that is the reason that transmission rates are different in different countries; rather, it's the policies in that country, or it's the cultural norms around hugging and kissing, or it's the temperature. We expect that as people iterate on their models, they'll bring in more granular datasets and we'll start to see these variable-importance plots becoming much more interesting and starting to pick apart the most important factors driving differences in transmission rates across different cities. This is one to watch," Goldbloom added.

Read the original here:

AI and the coronavirus fight: How artificial intelligence is taking on COVID-19 - ZDNet

Pentagon AI center shifts focus to joint war-fighting operations – C4ISRNet

The Pentagon's artificial intelligence hub is shifting its focus to enabling joint war-fighting operations, developing artificial intelligence tools that will be integrated into the Department of Defense's Joint All-Domain Command and Control efforts.

"As we have matured, we are now devoting special focus on our joint war-fighting operation and its mission initiative, which is focused on the priorities of the National Defense Strategy and its goal of preserving America's military and technological advantages over our strategic competitors," Nand Mulchandani, acting director of the Joint Artificial Intelligence Center, told reporters July 8. The AI capabilities JAIC is developing as part of the joint war-fighting operations mission initiative will use mature AI technology to create a decisive advantage for the American war fighter.

That marks a significant change from where JAIC stood more than a year ago, when the organization was still being stood up with a focus on using AI for efforts like predictive maintenance. That transformation appears to be driven by the DoD's focus on developing JADC2, a system-of-systems approach that will connect sensors to shooters in near-real time.

"JADC2 is not a single product. It is a collection of platforms that get stitched together, woven together, into effectively a platform. And JAIC is spending a lot of time and resources focused on building the AI component on top of JADC2," said the acting director.

According to Mulchandani, the fiscal 2020 spending on the joint war-fighting operations initiative is greater than JAIC spending on all other mission initiatives combined. In May, the organization awarded Booz Allen Hamilton a five-year, $800 million task order to support the joint war-fighting operations initiative. As Mulchandani acknowledged to reporters, that task order exceeds JAIC's budget for the next few years and it will not be spending all of that money.

One example of the organization's joint war-fighting work is the fire support cognitive system, an effort JAIC was pursuing in partnership with the Marine Corps Warfighting Lab and the U.S. Army's Program Executive Office Command, Control and Communications-Tactical. That system, Mulchandani said, will manage and triage all incoming communications in support of JADC2.

Mulchandani added that JAIC was about to begin testing its new flagship joint war-fighting project, which he did not identify by name.


"We do have a project going on under joint war fighting which we are actually going to go into testing," he said. "They are very tactical edge AI is the way I'd describe it. That work is going to be tested. It's actually promising work, we're very excited about it."

"As I talked about the pivot from predictive maintenance and others to joint war fighting, that is probably the flagship project that we're sort of thinking about and talking about that will go out there," he added.

While left unnamed, the acting director assured reporters that the project would involve human operators and full human control.

"We believe that the current crop of AI systems today [...] are going to be cognitive assistance," he said. "Those types of information overload cleanup are the types of products that we're actually going to be investing in."

"Cognitive assistance, JADC2, command and control: these are all pieces," he added.

See more here:

Pentagon AI center shifts focus to joint war-fighting operations - C4ISRNet

The Transformational Role of AI in Finance – PaymentsJournal

The subject headline in this Finextra piece is highlighting an overview of some categorical use case scenarios where capabilities residing under the AI umbrella are having an impact on the delivery of financial services. Members of our commercial and separate emerging tech advisory services will have the benefit of deeper dives into some specific uses across retail and corporate banking:

More than 60 years have passed since artificial intelligence was a daring concept at Dartmouth College which only got half of the requested funding. Right now, AI is a $9.5 billion industry, projected to reach $118.6 billion by 2025, according to Statista. Due to its immediate applications in streamlining processes, improving customer care, and managing risks, it has been widely adopted by the frontrunners of the financial industry. From NLP to replace front desk and call center employees to robots analyzing transactions and loans, there is a way to use machine learning in the banking and payment sector.

The author points to four categories of current and future impact:

Finance is a sector that is a rather late adopter of new technologies due to regulatory and compliance requirements; yet it is also one highly interested in cutting costs. This puts AI companies in the position of having a harder time entering this market. However, this market offers potentially high payoffs once the tech goes mainstream.

Following this space is part of our extensive coverage of fintechs, as these applications apply across financial services.

Overview by Steve Murphy, Director, Commercial and Enterprise Payments Advisory Service at Mercator Advisory Group


Go here to read the rest:

The Transformational Role of AI in Finance - PaymentsJournal

The Ouroboros, From Antiquity to AI – Gizmodo

The Ouroboros, which symbolizes the cyclical nature of life and death and the divine essence that lives on forever, was first recorded in the Egyptian Book of the Netherworld. Alchemists then adopted the symbol into their mystical work of physical and spiritual transformation. After chemistry supplanted its more mystical forebear, alchemy, the Ouroboros was largely forgotten. That is, until reemerging in the 19th century, largely thanks to the psychologist Carl Jung. Today, the Ouroboros has taken on a new life in tech's Ouroboros programs, and has become integral to coding and our evolving understanding of artificial intelligence.

As a medievalist, I find the transformation of the Ouroboros from an ancient Egyptian mystical symbol into a symbol of artificial intelligence endlessly fascinating. Why has this symbol been reimagined so many times through the centuries? In tech, Ouroboros programs, as their name would suggest, have no beginning input and no ultimate output. In other words, they begin without any coder starting them. They're continuous, coding and coding forever, seemingly on their own. So how did a mysterious symbol of a snake make its way from antiquity into modern technology?

The word Ouroboros is from ancient Greek, and means "tail-devouring." The Egyptian origins of the Ouroboros are a little murkier. One of the first known precursors to the Ouroboros is found in the ancient Egyptian religious and funerary text, the Amduat. The important funerary text, dating to the early 15th century BCE, tells a story of resurrection that echoes across Gnostic and early Christian texts as well as in alchemy. In the Amduat, the deceased pharaoh travels with the sun god Ra through the realm of the dead, known to the Egyptians as Duat. Every day after the sun sets in the West, Ra must travel through Duat to the East, where the sun rises with Ra's reemergence. It's believed that when a pharaoh dies they too make this journey with Ra, eventually becoming one with the sun god and living on forever. The Amduat served as a sort of road map for the dead pharaoh, instructing them on how to make this journey with Ra. It's why the Amduat is often found carved into the walls of the pharaoh's tomb. Like any good road trip, you want to keep a map close when traveling through the afterworld. The twelve hours of the night act as markers in the Amduat's map.

It's in the sixth hour that one of the most significant moments in the journey occurs: the pharaoh is met by Mehen, a huge coiled serpent. Mehen helps guide Ra and the pharaoh through the afterworld, coiling around them on the journey to protect them from all outside evils and lurking enemies. Mehen's body acts not only as a physical barrier of protection encircling Ra, but also as a magical one, as Egyptologist Peter A. Piccione points out. Mehen is often seen as a connector between the physical and metaphysical, linking him to Egyptian magical traditions. His association with magic and the liminal space between the real and the unreal eventually brings Mehen into the fold of alchemy.

In less esoteric circles, Mehen is also an ancient Egyptian board game, where a carved coiled serpent acts as the board.

It's about two hundred years later, in the 13th century BCE, that Mehen transforms into the single, continuous circle of the Ouroboros. This early Ouroboros depiction can be found in none other than King Tut's burial chamber, gilded in gold. In fact, not one but two Ourobori encircle the relief of a mummified figure, identified by scholar Alexandre Piankoff as King Tutankhamun. One encircles his head and another encircles his feet.

Scholars believe that the encircling serpent is still a representation of Mehen, and of pharaoh Tutankhamun's journey through the afterworld with Ra. The significance, though, comes in how Mehen is drawn in King Tut's burial chamber. Rather than the squiggly line surrounding the pharaoh in earlier reliefs, this is the first time Mehen is shown as the Ouroboros is depicted in later centuries: as one continuous circle.

Sometimes we forget that the ancient world was full of folks going and coming, exchanging knowledge and culture along the way. The Egyptians didn't exist in a bubble, and already by the 2nd millennium BCE scholars know Egyptians and Greeks were rubbing shoulders. (The Egyptians, at that time, were a far more advanced civilization when compared to the Greeks.) Mehen morphed into the Greek Ouroboros, and got imported East via the Egyptian practice of alchemy.

Alchemy brought together scholars from various corners of the globe. Greeks, Egyptians, Jews, and others from the peninsula all flocked to the Egyptian city of Alexandria to study the art of alchemy. Alchemy, with its elaborate experiments and mystical underpinnings, was at the cutting edge of research in the ancient world. By the early centuries of the Common Era, Alexandria was the epicenter of not only alchemy, but of math, history, philosophy, medicine, and many other disciplines.

The earliest known alchemical depiction of the Ouroboros is found in the third century text, The Chrysopoeia of Cleopatra. Here the Ouroboros encircles the words "all is one." By the time the alchemist Cleopatra (not to be confused with that other Cleopatra, who killed herself with the snakes and had that whole thing with Mark Antony) drew this Ouroboros, it was no longer a depiction of Mehen. While related to its origin as Mehen, the Ouroboros by this point had morphed into an altogether new symbol. Both Mehen and the Ouroboros relate to the understanding of time being cyclical. Mehen encircles Ra through the god's journey through the afterworld every night. The alchemical Ouroboros, however, no longer carries the protective and magical powers associated with Mehen.

In alchemy, the Ouroboros represents not only the cyclical nature of time and energy, but also the union of opposites necessary to yield the Philosopher's Stone. The Philosopher's Stone is the ultimate goal many alchemists worked towards. The Stone had the power to transmute anything into its highest form. It could transform lead to gold. It was the universal solvent and the elixir of life. It was the answer to anything alchemists worked to achieve in their laboratories. In fact, the Ouroboros itself can be a representation of the Philosopher's Stone. No wonder then that the Ouroboros is at the heart of ancient alchemical study.

Outside of the Western world, the Ouroboros pops up almost simultaneously across the ancient world. In Hindu mythology, a never-ending snake wraps around the world to keep it upright. In a 2nd century yogic text, the divine energy known as Kundalini is described as a coiled serpent holding her tail in her mouth. In China, the Ouroboros represents the union of yin and yang. Even across the globe, the Aztecs depicted the snake god Quetzalcoatl biting its own tail on the base of the Pyramid of the Feathered Serpent.

In the West, the Ouroboros traveled from the ancient world to the Gnostic, Christian, then Islamic worlds, and then on to Medieval and Renaissance Europe. During this time, the Ouroboros symbol was remixed several times. The 3rd century CE Gnostic text Pistis Sophia describes the Ouroboros as a twelve-part dragon, perhaps a nod to the twelve hours of night associated with Mehen. Gnostics considered the Ouroboros to be a symbol of the eternal, never-ending soul.

Medieval Christians, on the other hand, sometimes associated the Ouroboros with knowledge and the serpent who tempts Eve to eat from the Tree of Knowledge. Yet the Ouroboros also finds a home carved into the medieval English Church of St. Mary and St. David, and in the 9th century Book of Kells, an Irish illuminated Gospel. So, the Christians couldn't really seem to make up their minds about the Ouroboros: is it Satan disguised as a tree serpent or a holy symbol of Christ?

Even while some medieval Christians couldn't decide how they felt about the Ouroboros, the Ouroboros still had a rich life in alchemical laboratories of the period. Continuing the tradition of alchemy from the ancient world, medieval alchemists associated the Ouroboros with the Philosopher's Stone as the union of opposites. For medieval alchemists, the Ouroboros symbolized the organizing of the world's chaotic energy, known to the alchemists as First Matter or prima materia.

The Ouroboros's symbolic life continues on through to the Enlightenment. But with the decline of alchemy in the late 18th century, the Ouroboros was relegated to Romantic and Victorian séances and spiritualist meetings. It was still around. But it was no longer a symbol at the heart of human existence, a symbol that spoke to life's cyclical nature. Now, it was just a cool magical sign. That is, until the tech world came along.

Artificial intelligence is all about creating a machine that can mimic the human brain's capacity for cognition. AI technology has already been proven to outperform humans in some very specific ways. World champion Go player Lee Sedol decided to retire after 24 years as Go champion after being defeated by an AI computer. Chatbots use Natural Language Processing (NLP) to field customer questions so well that customers can't even tell they're talking to a robot. Smart programs outperform humans in trading stocks. In later stages, if that technology ever becomes possible, the goal for artificial intelligence development may be to create a machine with its own consciousness, but we are very far from that point in history.

Enter Ouroboros programs. These emerged from a type of code sequence known as a quine. A quine doesn't have any input, and its only output is its own source code. In other words, a quine is a type of code that has no beginning, creating an output seemingly on its own. A normal computer program is basically just a set of directions that a computer then follows. So, say you're a coder and you write a program that adds numbers. You still have to provide the numbers for the computer to add even after you're all done writing the code. Quines magically don't need any numbers to start adding away. The numbers, aka the input, aren't necessary for quines to power up.
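
For the curious, here is a classic two-line quine in Python: it takes no input, and running it prints exactly its own two lines of source code.

```python
s = 's = %r\nprint(s %% s)'
print(s % s)
```

An Ouroboros program pushes the same trick further, looping the output back into its own source rather than stopping at a single printed copy.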

The name quine was actually coined in Douglas Hofstadter's 1979 Pulitzer Prize-winning book Gödel, Escher, Bach. The book is a non-fiction Alice in Wonderland-esque romp through symmetry, mathematics, and art, and in it Hofstadter uses the term quining to describe when an object/number/musical note refers back to itself indirectly. So, instead of saying, "I'm Sarah," it'd be the mathematical equivalent of saying "I'm a medievalist." This relates to tech quines because, going back to our calculator program, quines create their self-generated input using self-reference. They take something of their own code and copy it slightly differently, so they can continue to grow.

An Ouroboros program is similar to a quine, but in addition to having no input, it also has no output. In other words, Ouroboros programs have no beginning and no end. So, again going back to our calculator program: where quines arrive at some final solution (they add whatever numbers and find a result), Ouroboros programs would just keep adding and adding and adding until they miraculously got back to the same number they started at, and then would do it all again. So, just like the snake version of the Ouroboros, tech's Ouroboros program eats itself (so to speak). Ouroboros programs are completely self-contained. It's why they're sometimes called self-replicating programs or quine relays. They just go on and on and on, until eventually returning to their source code, creating one big loop. Quines and Ouroboros programs are useful to coders because coders can basically just leave them alone to do their thing. Since neither program requires an input, they can do a specified task seemingly on their own.

In addition to having no beginning or end, Ouroboros programs cycle through completely different coding languages. They might begin in language X, then transition to Y, then Z, and so on until coming back to language X. Coder Yusuke Endoh created an Ouroboros program that cycled through as many as 50 different coding languages. This has made Ouroboros programs increasingly important to the development and creation of different coding languages, like Java. It also allows the Ouroboros program to function in completely different coding languages, moving from Python to Ruby like it was child's play. It's as if an Ouroboros program is immediately fluent.

As computer science researchers Dario Floreano and Claudio Mattiussi have explored in their book, Bio-Inspired Artificial Intelligence: Theories, Methods, and Technologies, computer scientists have looked to the origins of biological life to find clues on how to create artificial life. The origin of biological life, they and other computer scientists believe, could act as a blueprint to creating artificial life.

Biologists trace the origin of life on Earth to a simple molecule that, four billion years ago, learned how to replicate itself. Once molecule-based genetic variations learned how to replicate themselves, they started competing in Darwin's fun game of natural selection. The variations that were able to survive and copy themselves best continued to replicate. The variations that weren't as prolific were voted off the prehistoric island. Eventually, the first cell was formed, followed by the first organisms, then the dinosaurs, and then humans. And that's creation in a nutshell.

As Floreano and Mattiussi discuss in the preface of their book, mainstream AI research hasn't focused on our own origin story to create artificial life. Mainstream AI is very good at creating algorithms and devices to solve problems even more quickly than humans. Take my earlier example of Go player Lee Sedol, who was moved to retire because AI could problem-solve its way to victory far better than he could.

But, starting in the 1980s, AI researchers began looking to develop more human-like AI. By the turn of the millennium, this new type of AI research solidified as "new artificial intelligence." The aim of AI was broadened from problem-solving to exploring cognition and other organic processes. In their article "Neural Network Quines," Oscar Chang and Hod Lipson of Columbia University's Data Science Institute explore how Ouroboros programs and quines, much like the first self-replicating cell, could be a first step toward developing this new, conscious AI. In addition, self-replicating programs could make AI even more human-like.

For instance, AI created using self-replicating programs, like Ouroboros programs or quines, could in theory repair or heal itself. By replicating undamaged code to replace damaged code, quine-based AI could heal itself much as you and I can. As French mathematician David Madore explains, quines, and Ouroboros programs by extension, can repair damaged code through a process known as bootstrapping. In bootstrapping, a quine can essentially hit a coder's version of a restart button on its own. In other words, the quine pulls itself up by its bootstraps and starts over.

Computer scientists have also taught machines to identify sound, text, and images through what are known as deep learning models. Deep learning models are based on programs that learn much like our brains do. Computer scientists build deep learning models using neural network architectures that are borrowed from neurology and that often employ Ouroboros programs. Neural network architectures are basically a collection of quines that work together, which creates a much stronger system. In the same way neurons fire to other neurons in our brains, these quine neural networks do the same thing: quines work with other quines to process information more quickly.

Chang and Lipson have in fact written at length about the importance of self-replication in AI. In their recent article, they looked specifically at neural network quines, which can self-replicate and build on what they already know, allowing AI to learn faster. Perhaps even faster than humans, at least eventually.
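As a rough, hedged illustration of that idea (and not Chang and Lipson's actual architecture or training procedure), a tiny network can be trained to predict its own weights: each weight is assigned a fixed random coordinate embedding, and the network learns to map that embedding to the weight's current value.

import torch
import torch.nn as nn

# Illustrative only: a small network asked to predict its own weights.
torch.manual_seed(0)
net = nn.Sequential(nn.Linear(16, 64), nn.Tanh(), nn.Linear(64, 1))
num_weights = sum(p.numel() for p in net.parameters())
coords = torch.randn(num_weights, 16)          # one fixed embedding per weight

optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(200):
    target = torch.cat([p.detach().flatten() for p in net.parameters()])
    prediction = net(coords).squeeze(1)        # the network's guess at its own weights
    loss = ((prediction - target) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"self-prediction error after training: {loss.item():.6f}")

Chang and Lipson's paper goes considerably further, pairing this kind of self-description with other learning tasks; the sketch above only shows the self-prediction step.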

The Ouroboros program is in many ways the nexus of tech and theology. As Noreen Herzfeld, professor of theology and computer science at St. John's University in Minnesota, puts it, AI begs the question: What is life? How do we define it? How do we know if we've found it? What is the nature of consciousness? These philosophical questions at the heart of AI are the same questions that religions and spiritual traditions have tried to answer for millennia, as Herzfeld points out. This is no coincidence.

In the past, religion and science intermingled more fluidly than they do today. Religion informed science, and science informed religion. Alchemy, a precursor to modern-day chemistry, was in many ways its own religion. Then, when the Enlightenment came along, science and religion were separated from each other. But today, innovations like the Ouroboros program ask us to ponder those religious and spiritual questions in a very immediate way. You can't build artificial consciousness if you don't first understand what consciousness is.

And, at the heart of AI research and its future is an ancient spiritual symbol of the universe, the Ouroboros. With its connection to ancient Egyptian religion and alchemy, the Ouroboros was and is a religious and spiritual symbol. And now it's a term applied to a coding program that could eventually lead to a new kind of consciousness. That's not an accident.

In this one symbol, religion and science again intermingle. The Ouroboros is a symbol of life and death, of time. And perhaps that's what consciousness is all about. Because what's more human than pondering the cycles of life and death, and our place within them? And how cool is it that, as we move towards creating artificial life, the Ouroboros symbol will be at the literal center of whatever new life we create?

Sarah Durn is a freelance writer, actor, and medievalist based in New Orleans, LA. She is the author of an upcoming book on alchemy to be published in Spring 2020.

View original post here:

The Ouroboros, From Antiquity to AI - Gizmodo

Robotics and AI leaders spearheading the battle with COVID-19 – ShareCafe

Alex Cook's 13 May 2020 blog highlighted the role of robotics and artificial intelligence (A.I.) technologies in fighting the spread of COVID-19.

In today's post, we look behind the ticker of the BetaShares Global Robotics and Artificial Intelligence ETF (ASX: RBTZ) at some of the leading companies in this space, and how they have contributed to fighting the pandemic, or are well-placed to benefit from economic, social and geo-political shifts borne out of the crisis.

The most visually obvious contribution of robotics and A.I. to combating COVID-19 has been the development of autonomous robots in healthcare, such as Omron's LD-UVC, shown in Figure 1 below. Omron makes up 4.5% of RBTZ's index (as at 21 August 2020). Its ground-breaking LD-UVC disinfects premises by eliminating 99.9% of bacteria and viruses, both airborne and droplet, with a precise dosage of UVC energy.

Figure 1: The LD-UVC, developed by Omron Asia Pacific in conjunction with Techmetics Robotics

Reducing the risk of human exposure to the coronavirus is one application of robotics, while scaling up our capacity for clinical testing is another critical element of the fight.

Swiss healthcare company Tecan Group, which makes up 5.3% of RBTZ's index (as at 21 August 2020), is a market leader in laboratory instruments, reagents and smart consumables used to automate diagnostic workflows in life sciences and clinical testing laboratories.

Tecan has experienced strong demand for its products to help in the global fight against the coronavirus pandemic, resulting in a substantial increase in sales and a surge in orders in the first half of 2020.

Automation is critical for countries attempting to scale up their COVID-19 testing capacity. Tecan is aiming to double production of its laboratory automation solutions and disposable pipette tip products, and has accessed emergency stockpiles to keep up with the massive demand.

Californian company Nvidia makes up 9.4% of the index which RBTZ aims to track (as at 21 August 2020), making it the Fund's largest holding. Nvidia is at the forefront of deep learning, artificial intelligence, and accelerated analytics. Nvidia was able to design and build the world's seventh-fastest supercomputer in three weeks, a task that normally takes many months, to be used by the U.S. Argonne National Laboratory to research ways to stop the coronavirus.

Supercomputers are proving to be a critical tool in many facets of responding to the disease, including predicting the spread of the virus, optimising contact tracing, allocating resources, informing physicians' decisions, designing vaccines and developing rapid testing tools.

Then there are companies and products that are helping us adapt to a post-COVID world and beyond.

Keyence Corporation, from Japan, has positioned itself at the forefront of several key trends in an era of increasing factory automation. In the wake of the COVID-19 crisis, factories have never faced such an urgent need to replace humans with machines to keep production lines running.

Keyence specialises in automation systems for manufacturing, food processing and pharma: machine vision systems, sensors, laser markers, measuring instruments and digital microscopes. Think precision tools and quality control sensors that eliminate or detect infinitesimal assembly-line mistakes, improving throughput and reducing wastage and costly shutdowns.

Its focus on product innovation and direct-sales model give it a competitive advantage, making it better able to adapt to new manufacturing processes and workflows while introducing high-value client solutions.

Keyence has maintained an operating profit margin above 50%, has no net debt, and managed to increase its dividend for the 2020 financial year, becoming Japan's third-largest company by market value.

One unfortunate consequence of the virus crisis has been the straining of international relations and a deterioration of the rules-based order. AeroVironment is a global leader in unmanned aircraft systems, or drones, and tactical missile systems. It is the number one supplier of small drones to the U.S. military. The Australian Defence Force is also an AeroVironment customer, with spending on drone and military technology expected to increase after the release of the 2020 Defence Strategic Update in July.

Beyond weapons systems, AeroVironment is also leading the evolution of stratospheric unmanned flight with the development of the Sunglider solar-powered high-altitude pseudo-satellite (HAPS), currently undergoing testing at Spaceport America in New Mexico. AeroVironment recently announced it is building a drone helicopter to be deployed to Mars along with NASA's Perseverance rover in 2021. The Mars Helicopter will be the first aircraft to attempt controlled flight on another planet, in a mission searching for signs of habitable conditions and evidence of past microbial life.

A simple and cost-effective way of accessing the dynamic and fast-growing robotics and A.I. thematic is available on the ASX through the BetaShares Global Robotics and Artificial Intelligence ETF (ASX: RBTZ). The Fund invests in companies from across the globe involved in robotics and artificial intelligence.

This includes exposure to the companies mentioned in this article, and other leaders expected to benefit from the increased adoption and utilisation of robotics and A.I. Over the 12 months to 31 July 2020, RBTZ returned 23.7%, outperforming the broad global MSCI World Index (AUD) shares benchmark by 20.6%.

There are risks associated with an investment in the Fund, including concentration risk, robotics and A.I. companies risk, smaller companies risk and currency risk. For more information on risks and other features of the Fund, please see the Product Disclosure Statement, available at www.betashares.com.au.


More:

Robotics and AI leaders spearheading the battle with COVID-19 - ShareCafe

AI SciFi Short Rise Is Being Turned Into a Movie – Gizmodo


Rise, the impressive robot uprising short film starring the late Anton Yelchin, is being adapted into a movie... with the original director on board to helm the production.

The five-minute film combines an updated version of the special effects of A.I. with the storyline of The Second Renaissance from The Animatrix. It's all about a dystopian future where artificially intelligent robots are hunted and killed after the government determined they were becoming too emotional and, therefore, too human. Unfortunately, the purge isn't working, as Yelchin's A.I. character helps trigger a war for the future of their species.

David Karlak, who directed the original short, has signed on to direct the feature-length adaptation. It's being produced by Johnny Lin (American Made) and Brian Oliver (Hacksaw Ridge, Black Swan), with original writers Patrick Melton and Marcus Dunstan returning to pen the script. There's no word on who would replace Yelchin, who sadly passed away last year, but I am hoping Rufus Sewell (The Man in the High Castle) reprises his role as the government interrogator. I'll watch him in anything.

You can watch the original short film below.

[The Hollywood Reporter]

More:

AI SciFi Short Rise Is Being Turned Into a Movie - Gizmodo

Quantum computing, AI, China, and synthetics highlighted in 2020 Tech Trends report – VentureBeat

The world's tech industry will be shaped by China, artificial intelligence, cancel culture, and other key trends, according to the Future Today Institute's 2020 Tech Trends Report.

Now in its thirteenth year, the document is put together by the Future Today Institute and director Amy Webb, who is also a professor at New York University's Stern School of Business. The report attempts to recognize connections between tech and future uncertainties, like the outcome of the 2020 U.S. presidential election, as well as the spread of diseases like COVID-19.

Among major trends in the report, the 2020s are expected to be the "synthetic decade."

"Soon we will produce designer molecules in a range of host cells on demand and at scale, which will lead to transformational improvements in vaccine production, tissue production, and medical treatments. Scientists will start to build entire human chromosomes, and they will design programmable proteins," the report reads.

Augmentation of senses like hearing and sight, social media scaremongering, new ways to measure trust, and China's role in the growth of AI are also listed among the key takeaways.

Artificial intelligence is again the first item highlighted on the list, and the tech Webb says is sparking a third wave of computing comes with positives, like the role AlphaFold can play in discovering cures for diseases, as well as negatives, like AI's current impact on the criminal justice system.

Tech giants like Amazon, Facebook, Google, and Microsoft in the United States, and Tencent and Baidu in China, continue to deliver the greatest impact. Webb predicts how these companies will shape the world in her 2019 book The Big Nine.

"Those nine companies drive the majority of research, funding, government involvement, and consumer-grade applications of AI. University researchers and labs rely on these companies for data, tools, and funding," the report reads. "Big Nine AI companies also wield huge influence over AI mergers and acquisitions, funding AI startups, and supporting the next generation of developers."

Other AI trends include synthetic data, a military-tech industrial complex, and systems made to recognize people.

Visit the Future Today Institute website to read the full report, which flags trends that require immediate action and highlights trends by industry.

Webb urges readers to digest the 366-page report in multiple sittings, rather than trying to read it all at once. She typically debuts the report with a presentation to thousands at the SXSW conference in Austin, Texas, but the conference was cancelled due to COVID-19.

Visit link:

Quantum computing, AI, China, and synthetics highlighted in 2020 Tech Trends report - VentureBeat

NEC and Kagome to Provide AI-enabled Services That Improve Tomato Yields – Business Wire

TOKYO--(BUSINESS WIRE)--NEC Corporation today announced the conclusion of a strategic partnership agreement with Kagome Co., Ltd. to launch agricultural management support services utilizing AI for leading tomato processing companies.

The new service uses NEC's AI-enabled agricultural ICT platform, CropScope, to visualize tomato growth and soil conditions based on sensor data and satellite images, and to provide farming management recommendation services. This AI enables the service to provide data on the best timing and amounts of irrigation and fertilizer for healthy crops. As a result, farms are able to achieve stable yields and lower costs, while practicing environmentally sustainable agriculture without depending on the skill of individual growers.

Tomato processing companies can obtain a comprehensive understanding of the most effective growing conditions for tomato production on their own farms, as well as those of their contract growers. They can also optimally manage crop harvest orders across all fields based on objective data, which helps to reduce yield loss and improve productivity.

NEC and Kagome began agricultural collaboration in 2015, and by 2019 they had conducted demonstrations in regions that include Portugal, Australia and the USA. An AI farming experiment in Portugal in 2019 showed that the amount of fertilizer used for the trial was approximately 20% less than the average amount used in general, yielding 127 tons of tomatoes per hectare, approximately 1.3 times that of the average Portuguese grower, and almost the same as that of skilled growers.

Kagome will establish a Smart Agri Division in April 2020, first targeting customers in Europe, then aiming to expand the business to worldwide markets.

"Kagome has been developing agricultural management support technologies using big data in collaboration with NEC since 2015, with the aim of realizing environmentally friendly and highly profitable agricultural management in the cultivation of tomatoes for processing on a global basis," said Kengo Nakata, General Manager, Smart Agri Division, Kagome. "By combining Kagome's farming know-how with NEC's AI technology, we will realize sustainable agriculture," he added.

"NEC is pleased to have signed a strategic partnership agreement with Kagome," said Masamitsu Kitase, General Manager, Corporate Business Development Division, NEC. "NEC aims to realize a sustainable agriculture that can respond flexibly to global social issues on climate change and food safety," he added.

About NEC Corporation: For more information, visit NEC at http://www.nec.com.

Continued here:

NEC and Kagome to Provide AI-enabled Services That Improve Tomato Yields - Business Wire

Is the power sector seeing the beginnings of an AI investment boom? – Power Technology

The power industry is seeing an increase in artificial intelligence (AI) investment across several key metrics, according to an analysis of GlobalData data.

AI is gaining an increasing presence across multiple sectors, with top companies completing more AI deals, hiring for more AI roles and mentioning it more frequently in company reports at the start of 2021.

GlobalData's thematic approach to sector activity seeks to group key company information on hiring, deals, patents and more by topic to see which companies are best placed to weather the disruptions coming to their industries.

These themes, of which AI is one, are best thought of as any issue that keeps a CEO awake at night, and by tracking them it becomes possible to ascertain which companies are leading the way on specific issues and which are dragging their heels.

One area in which there has been some decrease in AI investment among power companies is the number of deals. GlobalData figures show that there were 13 AI deals in power in the first quarter of 2019. By the first quarter of 2021, that number was six.

Hiring patterns within the power sector as a whole are pointing towards an increase in the level of attention being shown to AI-related roles. There was a monthly average of 1,008 actively advertised-for open AI roles within the industry in April this year, up from a monthly average of 669 in December 2020.

It is also apparent from an analysis of keyword mentions in financial filings that AI is occupying the minds of power companies to a lesser extent than before.

There have been 164 mentions of AI across the filings of the biggest power companies so far in 2021, equating to 7.3% of all tech theme mentions. This figure represents a decrease compared to 2016, when AI represented 12.9% of the tech theme mentions in company filings.

AI is increasingly fueling innovation in the power sector, particularly in the past six years. There were, on average, 34 power patents related to AI granted each year from 2000 to 2014. That figure has risen to an average of 188 patents since then, reaching 230 in 2020.

More:

Is the power sector seeing the beginnings of an AI investment boom? - Power Technology

AI could help with the next pandemic, but not with this one – MIT Technology Review

It was an AI that first saw it coming, or so the story goes. On December 30, an artificial-intelligence company called BlueDot, which uses machine learning to monitor outbreaks of infectious diseases around the world, alerted clients (including various governments, hospitals, and businesses) to an unusual bump in pneumonia cases in Wuhan, China. It would be another nine days before the World Health Organization officially flagged what we've all come to know as Covid-19.

BlueDot wasn't alone. An automated service called HealthMap at Boston Children's Hospital also caught those first signs. As did a model run by Metabiota, based in San Francisco. That AI could spot an outbreak on the other side of the world is pretty amazing, and early warnings save lives.


But how much has AI really helped in tackling the current outbreak? That's a hard question to answer. Companies like BlueDot are typically tight-lipped about exactly who they provide information to and how it is used. And human teams say they spotted the outbreak the same day as the AIs. Other projects, in which AI is being explored as a diagnostic tool or used to help find a vaccine, are still in their very early stages. Even if they are successful, it will take time (possibly months) to get those innovations into the hands of the health-care workers who need them.

The hype outstrips the reality. In fact, the narrative that has appeared in many news reports and breathless press releases (that AI is a powerful new weapon against disease) is only partly true and risks becoming counterproductive. For example, too much confidence in AI's capabilities could lead to ill-informed decisions that funnel public money to unproven AI companies at the expense of proven interventions such as drug programs. It's also bad for the field itself: overblown but disappointed expectations have led to a crash of interest in AI, and consequent loss of funding, more than once in the past.

So here's a reality check: AI will not save us from the coronavirus, certainly not this time. But there's every chance it will play a bigger role in future epidemics, if we make some big changes. Most won't be easy. Some we won't like.

There are three main areas where AI could help: prediction, diagnosis, and treatment.

Prediction

Companies like BlueDot and Metabiota use a range of natural-language processing (NLP) algorithms to monitor news outlets and official health-care reports in different languages around the world, flagging whether they mention high-priority diseases, such as coronavirus, or more endemic ones, such as HIV or tuberculosis. Their predictive tools can also draw on air-travel data to assess the risk that transit hubs might see infected people either arriving or departing.
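As a rough sketch of what that monitoring step involves (and not BlueDot's or Metabiota's actual pipeline; the terms, weights and headlines below are invented for illustration), the core idea is a scoring pass over incoming text that flags mentions of high-priority diseases:

# Toy illustration of surveillance-style keyword flagging over news headlines.
DISEASE_TERMS = {
    "pneumonia of unknown cause": 3,   # undiagnosed clusters rank highest
    "novel coronavirus": 3,
    "unusual pneumonia": 2,
    "tuberculosis": 1,                 # endemic, lower weight
    "hiv": 1,
}

def flag_headlines(headlines, threshold=2):
    """Return (score, headline) pairs whose weighted term hits cross the threshold."""
    flagged = []
    for text in headlines:
        lowered = text.lower()
        score = sum(weight for term, weight in DISEASE_TERMS.items() if term in lowered)
        if score >= threshold:
            flagged.append((score, text))
    return sorted(flagged, reverse=True)

if __name__ == "__main__":
    sample = [
        "Cluster of pneumonia of unknown cause reported near seafood market",
        "City marathon rescheduled due to weather",
        "Health ministry notes seasonal rise in tuberculosis screening",
    ]
    for score, headline in flag_headlines(sample):
        print(score, headline)

Real systems layer far more on top of this: multilingual NLP models rather than keyword lists, source-reliability weighting, and the air-travel data mentioned above.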

The results are reasonably accurate. For example, Metabiota's latest public report, on February 25, predicted that on March 3 there would be 127,000 cumulative cases worldwide. It overshot by around 30,000, but Mark Gallivan, the firm's director of data science, says this is still well within the margin of error. It also listed the countries most likely to report new cases, including China, Italy, Iran, and the US. Again: not bad.


Others keep an eye on social media too. Stratifyd, a data analytics company based in Charlotte, North Carolina, is developing an AI that scans posts on sites like Facebook and Twitter and cross-references them with descriptions of diseases taken from sources such as the National Institutes of Health, the World Organisation for Animal Health, and the global microbial identifier database, which stores genome sequencing information.

Work by these companies is certainly impressive. And it goes to show how far machine learning has advanced in recent years. A few years ago Google tried to predict outbreaks with its ill-fated Flu Trends service, which was shelved in 2013 when it failed to predict that year's flu spike. What changed? It mostly comes down to the ability of the latest software to listen in on a much wider range of sources.

Unsupervised machine learning is also key. Letting an AI identify its own patterns in the noise, rather than training it on preselected examples, highlights things you might not have thought to look for. "When you do prediction, you're looking for new behavior," says Stratifyd's CEO, Derek Wang.

But what do you do with these predictions? The initial prediction by BlueDot correctly pinpointed a handful of cities in the virus's path. This could have let authorities prepare, alerting hospitals and putting containment measures in place. But as the scale of the epidemic grows, predictions become less specific. Metabiota's warning that certain countries would be affected in the following week might have been correct, but it is hard to know what to do with that information.

What's more, all these approaches will become less accurate as the epidemic progresses, largely because reliable data of the sort that AI needs to feed on has been hard to get about Covid-19. News sources and official reports offer inconsistent accounts. There has been confusion over symptoms and how the virus passes between people. The media may play things up; authorities may play things down. And predicting where a disease may spread from hundreds of sites in dozens of countries is a far more daunting task than making a call on where a single outbreak might spread in its first few days. "Noise is always the enemy of machine-learning algorithms," says Wang. Indeed, Gallivan acknowledges that Metabiota's daily predictions were easier to make in the first two weeks or so.

One of the biggest obstacles is the lack of diagnostic testing, says Gallivan. "Ideally, we would have a test to detect the novel coronavirus immediately and be testing everyone at least once a day," he says. We also don't really know what behaviors people are adopting (who is working from home, who is self-quarantining, who is or isn't washing hands) or what effect it might be having. If you want to predict what's going to happen next, you need an accurate picture of what's happening right now.

It's not clear what's going on inside hospitals, either. Ahmer Inam at Pactera Edge, a data and AI consultancy, says prediction tools would be a lot better if public health data weren't locked away within government agencies, as it is in many countries, including the US. This means an AI must lean more heavily on readily available data like online news. "By the time the media picks up on a potentially new medical condition, it is already too late," he says.

But if AI needs much more data from reliable sources to be useful in this area, strategies for getting it can be controversial. Several people I spoke to highlighted this uncomfortable trade-off: to get better predictions from machine learning, we need to share more of our personal data with companies and governments.

Darren Schulte, an MD and CEO of Apixio, which has built an AI to extract information from patients' records, thinks that medical records from across the US should be opened up for data analysis. This could allow an AI to automatically identify individuals who are most at risk from Covid-19 because of an underlying condition. Resources could then be focused on the people who need them most. The technology to read patient records and extract life-saving information exists, says Schulte. The problem is that these records are split across multiple databases and managed by different health services, which makes them harder to analyze. "I'd like to drop my AI into this big ocean of data," he says. "But our data sits in small lakes, not a big ocean."

Health data should also be shared between countries, says Inam: "Viruses don't operate within the confines of geopolitical boundaries." He thinks countries should be forced by international agreement to release real-time data on diagnoses and hospital admissions, which could then be fed into global-scale machine-learning models of a pandemic.

Of course, this may be wishful thinking. Different parts of the world have different privacy regulations for medical data. And many of us already balk at making our data accessible to third parties. New data-processing techniques, such as differential privacy and training on synthetic data rather than real data, might offer a way through this debate. But this technology is still being finessed. Finding agreement on international standards will take even more time.

For now, we must make the most of what data we have. Wang's answer is to make sure humans are around to interpret what machine-learning models spit out, making sure to discard predictions that don't ring true. "If one is overly optimistic or reliant on a fully autonomous predictive model, it will prove problematic," he says. AIs can find hidden signals in the data, but humans must connect the dots.

Early diagnosis

As well as predicting the course of an epidemic, many hope that AI will help identify people who have been infected. AI has a proven track record here. Machine-learning models for examining medical images can catch early signs of disease that human doctors miss, from eye disease to heart conditions to cancer. But these models typically require a lot of data to learn from.

A handful of preprint papers have been posted online in the last few weeks suggesting that machine learning can diagnose Covid-19 from CT scans of lung tissue if trained to spot telltale signs of the disease in the images. Alexander Selvikvåg Lundervold at the Western Norway University of Applied Sciences in Bergen, Norway, who is an expert on machine learning and medical imaging, says we should expect AI to be able to detect signs of Covid-19 in patients eventually. But it is unclear whether imaging is the way to go. For one thing, physical signs of the disease may not show up in scans until some time after infection, making it not very useful as an early diagnostic.


What's more, since so little training data is available so far, it's hard to assess the accuracy of the approaches posted online. Most image recognition systems, including those trained on medical images, are adapted from models first trained on ImageNet, a widely used data set encompassing millions of everyday images. "To classify something simple that's close to ImageNet data, such as images of dogs and cats, can be done with very little data," says Lundervold. "Subtle findings in medical images, not so much."

That's not to say it won't happen, and AI tools could potentially be built to detect early stages of disease in future outbreaks. But we should be skeptical about many of the claims of AI doctors diagnosing Covid-19 today. Again, sharing more patient data will help, and so will machine-learning techniques that allow models to be trained even when little data is available. For example, few-shot learning, where an AI can learn patterns from only a handful of results, and transfer learning, where an AI already trained to do one thing can be quickly adapted to do something similar, are promising advances, but still works in progress.
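To give a sense of what transfer learning looks like in practice, here is a minimal sketch (in PyTorch, with invented labels and hyperparameters; it is not any of the published Covid-19 models): a network pretrained on ImageNet is frozen, and only a new final layer is fitted to a small set of labelled scans.

import torch
import torch.nn as nn
from torchvision import models

# Start from ImageNet weights and freeze the pretrained feature extractor.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a new two-class head (assumed labels: covid / non-covid).
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Stand-in batch; in practice this would come from a DataLoader over CT images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

model.train()
for _ in range(5):  # a few illustrative steps, not a real training schedule
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()

Few-shot learning pushes the same idea further, trying to get useful behaviour from a handful of labelled examples rather than a small dataset.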

Cure-all

Data is also essential if AI is to help develop treatments for the disease. One technique for identifying possible drug candidates is to use generative design algorithms, which produce a vast number of potential results and then sift through them to highlight those that are worth looking at more closely. This technique can be used to quickly search through millions of biological or molecular structures, for example.
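At its simplest, that generate-and-sift loop can be sketched like this (a toy illustration with made-up "molecules" and a stand-in scoring function, not any real drug-discovery system):

import random

ALPHABET = "CNOH"  # stand-in "atoms"

def random_candidate(length=12):
    # Generate one random candidate structure.
    return "".join(random.choice(ALPHABET) for _ in range(length))

def score(candidate):
    # Stand-in property predictor: rewards one arbitrary balance of characters.
    return candidate.count("N") * 2 + candidate.count("O") - candidate.count("H")

# Propose a large batch of candidates, then keep only the top few for closer review.
candidates = (random_candidate() for _ in range(100_000))
shortlist = sorted(candidates, key=score, reverse=True)[:10]
print(shortlist)

In real generative design the candidates are molecular structures, the scorer is a learned model of binding or toxicity, and the shortlist goes to chemists for laboratory testing.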

SRI International is collaborating on such an AI tool, which uses deep learning to generate many novel drug candidates that scientists can then assess for efficacy. This is a game-changer for drug discovery, but it can still take many months before a promising candidate becomes a viable treatment.

In theory, AIs could be used to predict the evolution of the coronavirus too. Inam imagines running unsupervised learning algorithms to simulate all possible evolution paths. You could then add potential vaccines to the mix and see if the viruses mutate to develop resistance. "This will allow virologists to be a few steps ahead of the viruses and create vaccines in case any of these doomsday mutations occur," he says.

It's an exciting possibility, but a far-off one. We don't yet have enough information about how the virus mutates to be able to simulate it this time around.

In the meantime, the ultimate barrier may be the people in charge. "What I'd most like to change is the relationship between policymakers and AI," says Wang. AI will not be able to predict disease outbreaks by itself, no matter how much data it gets. "Getting leaders in government, businesses, and health care to trust these tools will fundamentally change how quickly we can react to disease outbreaks," he says. But that trust needs to come from a realistic view of what AI can and cannot do now, and what might make it better next time.

Making the most of AI will take a lot of data, time, and smart coordination between many different people. All of which are in short supply right now.

More:

AI could help with the next pandemic, but not with this one - MIT Technology Review

Global Geospatial Solutions & Services Market Artificial Intelligence (AI), Cloud, Automation, Internet of Things (IoT), and Miniaturization of…

The global geospatial solutions & services market accounted for US$ 238.5 billion in 2019 and is estimated to be US$ 1013.7 billion by 2029 and is anticipated to register a CAGR of 15.7%

Covina, CA, Aug. 04, 2020 (GLOBE NEWSWIRE) -- The report "Global Geospatial Solutions & Services Market, By Solution Type (Hardware, Software, and Service), By Technology (Geospatial Analytics, GNSS & Positioning, Scanning, and Earth Observation), By End-user (Utility, Business, Transportation, Defence & Intelligence, Infrastructural Development, Natural Resource, and Others), By Application (Surveying & Mapping, Geovisualization, Asset Management, Planning & Analysis, and Others), and By Region (North America, Europe, Asia Pacific, Latin America, and the Middle East & Africa) - Trends, Analysis and Forecast till 2029".


Request Free Sample of this Business Intelligence Report @https://www.prophecymarketinsights.com/market_insight/Insight/request-sample/4412

Analyst View:

Geospatial technology comprises GIS (geographical information systems), GPS (global positioning systems), and RS (remote sensing), technologies that provide a radically different way of producing and using the maps required to manage communities and industries. Developed economies are expected to provide lucrative opportunities for the geospatial solutions industry. The application of geospatial techniques across the globe has witnessed steady growth over the past decades, owing to the easy accessibility of geospatial technology in advanced nations such as the U.S. and Canada, which further drives growth of the target market. Moreover, rising smart city initiatives in emerging countries have resulted in a growing need for geospatial technologies for use in 3D urban mapping and in monitoring and mapping natural resources. Increasing adoption of IoT, big data analysis, and Artificial Intelligence (AI) across the globe is projected to create profitable opportunities for the global geospatial solutions & services market throughout the forecast period.

Browse 60 market data tables* and 35 figures* through 140 slides and in-depth TOC on "Global Geospatial Solutions & Services Market, By Solution Type (Hardware, Software, and Service), By Technology (Geospatial Analytics, GNSS & Positioning, Scanning, and Earth Observation), By End-user (Utility, Business, Transportation, Defence & Intelligence, Infrastructural Development, Natural Resource, and Others), By Application (Surveying & Mapping, Geovisualization, Asset Management, Planning & Analysis, and Others), and By Region (North America, Europe, Asia Pacific, Latin America, and the Middle East & Africa) - Trends, Analysis and Forecast till 2029"

Ask for a Discount on this Report @https://www.prophecymarketinsights.com/market_insight/Insight/request-discount/4412

Key Market Insights from the report:

The global geospatial solutions & services market accounted for US$ 238.5 billion in 2019 and is estimated to be US$ 1013.7 billion by 2029 and is anticipated to register a CAGR of 15.7%. The market report has been segmented on the basis of solution type, technology, end-user, application, and region.

To know the upcoming trends and insights prevalent in this market, click the link below:

https://www.prophecymarketinsights.com/market_insight/Global-Geospatial-Solutions-&-Services-Market-4412

Competitive Landscape:

Prominent players operating in the global geospatial solutions & services market include HERE Technologies, Esri (US), Hexagon (Sweden), Atkins PLC, Pitney Bowes, Topcon Corporation, DigitalGlobe, Inc. (Maxar Group), General Electric, Harris Corporation (US), and Google.

The report provides detailed information regarding the industrial base, productivity, strengths, manufacturers, and recent trends, which will help companies enlarge their businesses and promote financial growth. Furthermore, it exhibits dynamic factors including segments, sub-segments, regional marketplaces, competition, dominant key players, and market forecasts. In addition, it covers recent collaborations, mergers, acquisitions, and partnerships, along with regulatory frameworks across different regions impacting the market trajectory. Recent technological advances and innovations influencing the global market are also included.


About Prophecy Market Insights

Prophecy Market Insights is a specialized market research, analytics, marketing/business strategy, and solutions company that offers strategic and tactical support to clients for making well-informed business decisions and for identifying and achieving high-value opportunities in the target business area. We also help our clients to address business challenges and provide the best possible solutions to overcome them and transform their business.


More here:

Global Geospatial Solutions & Services Market Artificial Intelligence (AI), Cloud, Automation, Internet of Things (IoT), and Miniaturization of...

AI is for the Birds in a New Computer Science Project | Newsroom – UC Merced University News

The Soundscapes to Landscapes (S2L): Monitoring Animal Biodiversity from Space Using Citizen Scientists program is supported by $1.1 million over three years through NASA's Citizen Science for Earth Systems program. It uses citizen scientists to deploy the AudioMoths, low-cost acoustic recorders. Other birders knowledgeable in bird calls will annotate a subset of the recordings, which serve as the training data for the AI models.

This summer, Newsam also received a $90,000, one-year AI for Earth Innovation grant from Global Wildlife Conservation in partnership with Microsoft. The nonprofit relies on research to work with local communities to address the root causes of threats to wildlife.

Newsam's is one of only five projects funded out of 135 applications. The grant supports AI projects that can scale quickly. The research will benefit many other projects because it is open source.

For Newsam, there are many questions about processing the data, and many technical challenges. The recordings have biophony, geophony and anthrophony noise, and the bird calls are often faint. Some species have different calls for different communications: warning calls, mating calls and others. Which one should the AI focus on?

"Birds often modify their calls by changing frequency, for example, if other birds are also calling," Newsam said. "I am learning a lot about bird calls."

Baligar hears the calls as something more than just bird communication.

"I like to think of birds as musical instruments," he said. "All the violins are orange-crowned warblers, but no two violins are the same. A bird song plays different notes, and every bird likes to play a song differently every time."

Each AudioMoth gathers about 2,000 minutes of data per site. So far, the team has more than 500,000 minute-long recordings, more than 8,000 hours of data, from over 600 locations, and terabytes of data to manage.

However, training the AI model requires a lot of annotated data.

"Deep learning is data hungry," Baligar said. "The more data the better. On average, we have just 650 training clips per bird species, which is not a lot."

Newsam, who co-founded the Spatial Analysis Research Center (SpARC) at UC Merced, is an expert in image analysis and understanding.

"Image and audio are sensorily very different, but in the end, it is just data, data that we are turning into information through several processes," he said.
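One common way that turning-sound-into-data step is done, shown here as a hedged sketch (the file name and parameters are illustrative, not the project's actual settings), is to convert each clip into a mel-spectrogram, an image-like array that standard image classifiers can then consume:

import librosa
import numpy as np

def clip_to_melspectrogram(path, sr=22050, n_mels=128):
    # Decode the recording and convert it to a log-scaled mel-spectrogram.
    y, sr = librosa.load(path, sr=sr, mono=True)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)

# Hypothetical usage:
# spec = clip_to_melspectrogram("audiomoth_site042_0001.wav")
# spec.shape -> (n_mels, time_frames); each clip becomes a 2-D "image" for a CNN.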

Baligar did not set out to study sound or bird calls when he was a master's student. He was more interested in time-series questions. Now, audio over time is the focus of his dissertation, and potentially the basis for a company he hopes to launch after graduation.

Computer science and environmental science are two of UC Merced's growing number of strengths, said Professor Josh Viers, director of the Center for Information Technology Research in the Interest of Society at UC Merced.

"Professor Newsam's research is indicative of the progress UC Merced has made in attracting top talent and solving important global problems," Viers said. "Shawn is a leader in developing computer science tools that interpret and integrate massive amounts of information, from Earth imagery to sound recordings, and his research is pushing the envelope on innovation in sustainability and technology. It is really exciting to see this example of artificial intelligence used to benefit wildlife conservation efforts."

Future work for the team includes trying to identify individual birds and be able to track them over their range.

"If we can overcome some of the modeling challenges," Newsam said, "we might be able to replace satellites with much more fine-scaled information about all kinds of wildlife."

Read the original here:

AI is for the Birds in a New Computer Science Project | Newsroom - UC Merced University News

Kasparov: ‘Embrace’ the AI revolution – BBC News


Humans should embrace the change smart machines offer society, says former chess world champion Garry Kasparov. In a speech at Def Con in Las Vegas he said the rise of artificially intelligent machines would not be a huge threat to humanity. However ...


Read the rest here:

Kasparov: 'Embrace' the AI revolution - BBC News