
Category Archives: Ai

Artificial intelligence is already everywhere, we need to adapt – The National

Posted: May 20, 2021 at 4:42 am

Any smartphone owner or Google user is already intimately connected with artificial intelligence, but knowing what that means is a different matter. AI's ubiquity has not yet translated into a corresponding understanding of what this revolutionary technology is and how it works, according to one of the pioneers in the industry.

"I think the challenge for us is it's both everywhere and it's kind of receding into the background, and people are not necessarily aware," Sir Nigel Shadbolt, one of the UK's pre-eminent computer scientists, tells The National from his home in Oxford.

"AI is a totally pervasive technology. It literally has become a new utility. We don't recognise it that way, but the supercomputers we carry around in our pockets - our mobile phones - are running all sorts of AI-inspired and directly AI-implemented algorithms to recognise your voice, or recognise a face in a photo you've just taken and label it, or when it's reaching back into the cloud services to decide what to recommend to you, or how to route you efficiently to your next meeting. These things are all running."

The professor in computer science at Oxford University likens our relationship with AI to that with electricity: we're highly dependent on it without a full understanding of the complex engineering feats behind a power grid.

Mainstream AI is a process of combining datasets and algorithms, or rules, to develop predictive patterns based on the data provided. To the purist, AI is a machine or algorithm which can perform tasks that would ordinarily require human intelligence.

AI is used for geographical navigation, Google searches, video-gaming and inventory management. Perhaps most universally, AI is used as recommender systems in social media platforms, on-demand video streaming services and online shopping platforms to tailor content and suggestions for users according to historical preferences.
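As a concrete illustration of how such recommender systems work, here is a minimal, hypothetical sketch in Python: it scores catalogue items by the cosine similarity between an item's feature vector and a user's historical preferences. All names and numbers are invented for illustration.

```python
from math import sqrt

# Toy user-item interaction counts (hypothetical data): how often each
# user consumed each genre. A recommender scores unseen items by their
# similarity to what the user already consumed.
HISTORY = {
    "alice": {"drama": 5, "sci-fi": 3, "comedy": 0},
    "bob":   {"drama": 0, "sci-fi": 4, "comedy": 5},
}

def cosine(a, b):
    """Cosine similarity between two sparse preference vectors."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(user, catalog):
    """Rank catalogue items by how closely their feature vectors
    match the user's historical preferences."""
    prefs = HISTORY[user]
    return sorted(catalog, key=lambda item: -cosine(prefs, catalog[item]))

CATALOG = {
    "space-opera": {"sci-fi": 5, "drama": 2},
    "sitcom":      {"comedy": 5},
    "court-drama": {"drama": 5},
}

print(recommend("alice", CATALOG))  # drama-heavy titles rank first for alice
```

Real recommenders operate over millions of users and items and learn the feature vectors themselves, but the core idea of matching new content against historical preferences is the same.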

The more information is gathered, the more machine learning accelerates.

"There is a duty for us to explain fundamentally what the basic principles are and what the issues are from the point of view of safety, of fairness, of equity, of availability of access. These have a moral dimension to them," says Sir Nigel.

For many people, artificial intelligence conjures up images of robotic humanoids or complex technology used by big tech giants to influence us. While this may be accurate in part, the fundamental misperceptions are widespread.

"I sometimes reflect on the fact we might be moving back to almost an animistic culture where we imagine there's kind of a magic in our devices we don't need to worry about," Mr Shadbolt tells The National.

He has worked alongside Sir Tim Berners-Lee, inventor of the World Wide Web, since 2009, and in 2012 the two went on to set up the Open Data Institute, which works with companies and governments to build an open, trustworthy data ecosystem.

"Data is kind of an infrastructure, just like your roads and your power grid, but you can't see it; it's invisible in a certain sense. But you know it's important, and building that kind of infrastructure is hugely important," says Mr Shadbolt, who was knighted in 2013 for his services to science and engineering.

Since the ODI was established, many national governments, regional authorities and public and private companies have gone on to publish their data online. In some countries, like France, the commitment to open public data is now enshrined in law.

The pandemic naturally pushed the importance of data to the fore: from the UK government's dashboard of hospital admission rates to its track and trace system, information gathering and sharing was paramount in overcoming the virus.

With such pervasive influence on our lives, Mr Shadbolt says, there is a renaissance of interest in the field of AI ethics.

Civil rights groups have called for the banning of facial recognition software over fears that the technology encroaches on privacy through mass surveillance and reinforces racial discrimination. There are also concerns that these complex learning models can be fooled.

Earlier this year, a new Institute for Ethics in AI was created at Oxford University with Mr Shadbolt as its chair. He says the institute's aim is to examine the fairness and transparency of AI's many uses so that its power is used to empower and not oppress us.

"The algorithms and the data at scale can be really transformational. But on the other hand, we need to reflect on the two questions we've been talking about: just how is that data used? And is it a fair representation, and has the population consented?"

Co-author of The Digital Ape: How to live (in peace) with smart machines, Mr Shadbolt says it is an ongoing conversation between scientists, technologists and engineers on the one hand and legislators and ethicists on the other. "Because these things, at the end of the day, express our values, what we think it is important to seek to preserve in the societies we build," he points out.

The Facebook-Cambridge Analytica scandal, along with numerous online data breaches at other companies, has undoubtedly contributed to increasing public awareness of the perils of handing over personal information. A recent study by Penn State University researchers in the US suggests that users can become more willing to give up information when AIs offer or ask for help.

Nevertheless, fears around the uses of AI extend beyond its access to personal data to forecasting what a truly intelligent machine might be capable of. Scientists at the Center for Humans and Machines at the Max Planck Institute for Human Development in Berlin recently said that human control of any super-intelligent AI would be impossible.

AI has been developing steadily since the days of the Second World War and Alan Turing's code-breaking machines. It took a major leap forward in 1996 when the world chess champion, Garry Kasparov, said he could "smell" a new kind of intelligence across the table from the IBM supercomputer Deep Blue.


Mr Kasparov's defeat is often held up as a symbolic turning point in AI catching up with human intelligence. Nineteen years later, the power of AI made another leap forward when AlphaGo became the first computer program to defeat a professional human player at Go, the incredibly complex 3,000-year-old Chinese game.

The pandemic has accelerated the adoption of AI across sectors, particularly in healthcare, and pushed it closer to becoming a necessity. In England, AI systems were used to screen patients' lung scans for Covid-19 and to sift through the hundreds of research papers being published on the new virus.

"AI received a battlefield promotion as the crisis forced the pace of innovation and adoption," said David Egan, a senior analyst at Columbia Threadneedle Investments, at a recent forum to discuss investor opportunities in the field.

"Companies that are more open to adopting AI are likely to do better, and the benefit to those companies will compound at an exponential rate each year."

Having surveyed the field for decades, Mr Shadbolt thinks now is the time to take hold of this "great opportunity" while also taking stock of the "bigger questions".

"Technical development has to go hand in hand with an appreciation of our values: why we're doing this, what kind of society we want to build, where we want decision making to reside, where the value of all this insight actually ends up landing."


We need to design distrust into AI systems to make them safer – MIT Technology Review

Posted: at 4:42 am

It's interesting that you're talking about how, in these kinds of scenarios, you have to actively design distrust into the system to make it safer.

Yes, that's what you have to do. We're actually trying an experiment right now around the idea of denial of service. We don't have results yet, and we're wrestling with some ethical concerns. Because once we talk about it and publish the results, we'll have to explain why sometimes you may not want to give AI the ability to deny a service either. How do you remove service if someone really needs it?

But here's an example with the Tesla distrust thing. Denial of service would be: I create a profile of your trust, which I can do based on how many times you deactivated or disengaged from holding the wheel. Given those profiles of disengagement, I can then model at what point you are fully in this trust state. We have done this, not with Tesla data, but with our own data. And at a certain point, the next time you come into the car, you'd get a denial of service. You do not have access to the system for X time period.
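The denial-of-service idea described here can be sketched in a few lines. This is a hypothetical illustration, not the lab's actual model: the trust profile is reduced to the fraction of trips with zero disengagements, and the threshold and lockout period are invented.

```python
# Assumed values for illustration only.
OVERTRUST_THRESHOLD = 0.8   # fraction of trips with no disengagement at all
LOCKOUT_MINUTES = 30        # hypothetical penalty period

def trust_score(disengagements_per_trip):
    """Estimate over-trust as the share of trips where the driver
    never disengaged (i.e. never took the wheel back)."""
    trips = len(disengagements_per_trip)
    hands_off = sum(1 for d in disengagements_per_trip if d == 0)
    return hands_off / trips if trips else 0.0

def service_allowed(disengagements_per_trip):
    """Deny service once the driver appears fully in the over-trust state."""
    return trust_score(disengagements_per_trip) < OVERTRUST_THRESHOLD

# A driver who almost never takes the wheel back gets locked out:
print(service_allowed([0, 0, 0, 0, 1]))  # False -> denied for LOCKOUT_MINUTES
```

A production system would build the profile from richer telemetry, but the mechanism (model the trust state, then withdraw access past a threshold) is the one described in the interview.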

It's almost like when you punish a teenager by taking away their phone. You know that teenagers will not do whatever it is that you didn't want them to do if you link it to their communication modality.

The other methodology we've explored is roughly called explainable AI, where the system provides an explanation with respect to some of its risks or uncertainties. Because all of these systems have uncertainty; none of them are 100%. And a system knows when it's uncertain. So it could provide that as information in a way a human can understand, so people will change their behavior.

As an example, say I'm a self-driving car, and I have all my map information, and I know certain intersections are more accident-prone than others. As we get close to one of them, I would say, "We're approaching an intersection where 10 people died last year." You explain it in a way where it makes someone go, "Oh, wait, maybe I should be more aware."
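A minimal sketch of that explainable-AI pattern, with invented intersection names and numbers: the system surfaces either the known accident history or its own uncertainty, in plain language, rather than staying silent.

```python
# Hypothetical accident history per intersection (deaths last year).
ACCIDENT_HISTORY = {"5th & Main": 10, "Oak & 2nd": 0}

def risk_message(intersection, model_confidence):
    """Return a human-readable warning when the risk is high or the model
    is uncertain; return None when there is nothing worth surfacing."""
    deaths = ACCIDENT_HISTORY.get(intersection, 0)
    if deaths > 0:
        return (f"We're approaching an intersection where {deaths} "
                f"people died last year.")
    if model_confidence < 0.9:  # assumed uncertainty threshold
        return "I'm uncertain about this stretch of road; please stay alert."
    return None

print(risk_message("5th & Main", 0.99))
```

The point of the pattern is the translation step: raw statistics and confidence scores become a sentence a passenger can act on.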

The negatives are really linked to bias. That's why I always talk about bias and trust interchangeably. Because if I'm overtrusting these systems and these systems are making decisions that have different outcomes for different groups of individuals (say, a medical diagnosis system that performs differently for women versus men), we're now creating systems that augment the inequities we currently have. That's a problem. And when you link it to things that are tied to health or transportation, both of which can lead to life-or-death situations, a bad decision can actually lead to something you can't recover from. So we really have to fix it.

The positives are that automated systems are better than people in general. I think they can be even better, but I personally would rather interact with an AI system in some situations than certain humans in other situations. Like, I know it has some issues, but give me the AI. Give me the robot. They have more data; they are more accurate. Especially if you have a novice person. It's a better outcome. It just might be that the outcome isn't equal.

It's important to me because I can identify times in my life where someone basically provided me access to engineering and computer science. I didn't even know it was a thing. And that's really why, later on, I never had a problem with knowing that I could do it. And so I always felt that it was just my responsibility to do the same thing for those who have done it for me. As I got older as well, I noticed that there were a lot of people that didn't look like me in the room. So I realized: "Wait, there's definitely a problem here, because people just don't have the role models, they don't have access, they don't even know this is a thing."

And why it's important to the field is because everyone has a difference of experience. Just like I'd been thinking about human-robot interaction before it was even a thing. It wasn't because I was brilliant. It was because I looked at the problem in a different way. And when I'm talking to someone who has a different viewpoint, it's like, "Oh, let's try to combine and figure out the best of both worlds."

Airbags kill more women and kids. Why is that? Well, I'm going to say that it's because someone wasn't in the room to say, "Hey, why don't we test this on women in the front seat?" There's a bunch of problems that have killed or been hazardous to certain groups of people. And I would claim that if you go back, it's because you didn't have enough people who could say, "Hey, have you thought about this?" because they're talking from their own experience and from their environment and their community.

If you think about coding and programming, pretty much everyone can do it. There are so many organizations now, like Code.org. The resources and tools are there. I would love to have a conversation with a student one day where I ask, "Do you know about AI and machine learning?" and they say, "Dr. H, I've been doing that since the third grade!" I want to be shocked like that, because that would be wonderful. Of course, then I'd have to think about what my next job would be, but that's a whole other story.

But I think when you have the tools with coding and AI and machine learning, you can create your own jobs, you can create your own future, you can create your own solution. That would be my dream.


AI Technique Ushers In New Era of High-Resolution Simulations of the Universe – SciTechDaily

Posted: at 4:42 am

Simulations of a region of space 100 million light-years square. The leftmost simulation ran at low resolution. Using machine learning, researchers upscaled the low-res model to create a high-resolution simulation (right). That simulation captures the same details as a conventional high-res model (middle) while requiring significantly fewer computational resources. Credit: Y. Li et al./Proceedings of the National Academy of Sciences 2021

Using neural networks, researchers can now simulate universes in a fraction of the time, advancing the future of physics research.

A universe evolves over billions upon billions of years, but researchers have developed a way to create a complex simulated universe in less than a day. The technique, recently published in the journal Proceedings of the National Academy of Sciences, brings together machine learning, high-performance computing, and astrophysics and will help to usher in a new era of high-resolution cosmology simulations.

Cosmological simulations are an essential part of teasing out the many mysteries of the universe, including those of dark matter and dark energy. But until now, researchers faced a common conundrum: they could not have it all. Simulations could focus on a small area at high resolution, or they could encompass a large volume of the universe at low resolution.

Carnegie Mellon University Physics Professors Tiziana Di Matteo and Rupert Croft, Flatiron Institute Research Fellow Yin Li, Carnegie Mellon Ph.D. candidate Yueying Ni, University of California Riverside Professor of Physics and Astronomy Simeon Bird, and University of California Berkeley's Yu Feng surmounted this problem by teaching a machine learning algorithm based on neural networks to upgrade a simulation from low resolution to super resolution.

"Cosmological simulations need to cover a large volume for cosmological studies, while also requiring high resolution to resolve the small-scale galaxy formation physics, which would incur daunting computational challenges. Our technique can be used as a powerful and promising tool to match those two requirements simultaneously by modeling the small-scale galaxy formation physics in large cosmological volumes," said Ni, who performed the training of the model, built the pipeline for testing and validation, analyzed the data and made the visualization from the data.

The trained code can take full-scale, low-resolution models and generate super-resolution simulations that contain up to 512 times as many particles. For a region in the universe roughly 500 million light-years across containing 134 million particles, existing methods would require 560 hours to churn out a high-resolution simulation using a single processing core. With the new approach, the researchers need only 36 minutes.
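The reported timings imply a speedup of roughly three orders of magnitude, which is easy to check directly from the figures in the text:

```python
# 560 hours on a single processing core with existing methods, versus
# 36 minutes with the trained super-resolution network.
conventional_minutes = 560 * 60
new_method_minutes = 36
speedup = conventional_minutes / new_method_minutes
print(round(speedup))  # roughly a 930x reduction in wall-clock time
```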

The results were even more dramatic when more particles were added to the simulation. For a universe 1,000 times as large, with 134 billion particles, the researchers' new method took 16 hours on a single graphics processing unit. Using current methods, a simulation of this size and resolution would take a dedicated supercomputer months to complete.

"Reducing the time it takes to run cosmological simulations holds the potential of providing major advances in numerical cosmology and astrophysics," said Di Matteo. "Cosmological simulations follow the history and fate of the universe, all the way to the formation of all galaxies and their black holes."

Scientists use cosmological simulations to predict how the universe would look in various scenarios, such as if the dark energy pulling the universe apart varied over time. Telescope observations then confirm whether the simulations' predictions match reality.

"With our previous simulations, we showed that we could simulate the universe to discover new and interesting physics, but only at small or low-res scales," said Croft. "By incorporating machine learning, the technology is able to catch up with our ideas."

Di Matteo, Croft and Ni are part of Carnegie Mellon's National Science Foundation (NSF) Planning Institute for Artificial Intelligence in Physics, which supported this work, and are members of Carnegie Mellon's McWilliams Center for Cosmology.

"The universe is the biggest data set there is; artificial intelligence is the key to understanding the universe and revealing new physics," said Scott Dodelson, professor and head of the department of physics at Carnegie Mellon University and director of the NSF Planning Institute. "This research illustrates how the NSF Planning Institute for Artificial Intelligence will advance physics through artificial intelligence, machine learning, statistics, and data science."

"It's clear that AI is having a big effect on many areas of science, including physics and astronomy," said James Shank, a program director in NSF's Division of Physics. "Our AI Planning Institute program is working to push AI to accelerate discovery. This new result is a good example of how AI is transforming cosmology."

To create their new method, Ni and Li harnessed these fields to create a code that uses neural networks to predict how gravity moves dark matter around over time. The networks take training data, run calculations and compare the results to the expected outcome. With further training, the networks adapt and become more accurate.
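That take-data, run-calculations, compare, adapt loop is the essence of neural network training. A deliberately tiny sketch, with a single weight standing in for millions, fitting the hypothetical rule y = 2x by repeated comparison against expected outcomes:

```python
# Hypothetical training data: expected outcome follows y = 2x.
DATA = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0    # the "network": one weight, initially wrong
lr = 0.05  # learning rate

for epoch in range(200):
    for x, y in DATA:
        pred = w * x          # run the calculation
        error = pred - y      # compare the result to the expected outcome
        w -= lr * error * x   # adapt: gradient step on the squared error

print(round(w, 3))  # converges to 2.0
```

The researchers' networks do the same thing at vastly larger scale, with the "expected outcome" being how gravity actually moves dark matter in conventional high-resolution runs.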

The specific approach used by the researchers, called a generative adversarial network, pits two neural networks against each other. One network takes low-resolution simulations of the universe and uses them to generate high-resolution models. The other network tries to tell those simulations apart from ones made by conventional methods. Over time, both neural networks get better and better until, ultimately, the simulation generator wins out and creates fast simulations that look just like the slow conventional ones.
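A real generative adversarial network is too large to reproduce here, but the adversarial dynamic just described can be caricatured in one dimension. In this hypothetical sketch, "real" data cluster around 4.0, the generator is a single number g, and the discriminator is reduced to the threshold that best separates real samples from generated ones; each round, the generator shifts toward the side the discriminator labels real.

```python
import random

random.seed(0)
real = [random.gauss(4.0, 0.1) for _ in range(200)]  # stand-in "real" data

g = 0.0  # the generator's lone parameter: where its samples are centred
for step in range(100):
    fake = [random.gauss(g, 0.1) for _ in range(200)]
    fake_mean = sum(fake) / len(fake)
    real_mean = sum(real) / len(real)
    # Discriminator: the midpoint threshold best separates the two clusters.
    threshold = (real_mean + fake_mean) / 2
    # Generator: move toward the side of the threshold labelled "real".
    g += 0.1 * ((2 * threshold - fake_mean) - g)

# At equilibrium the generated samples are centred where the real ones are,
# and the threshold discriminator can no longer tell them apart.
print(round(g, 1))
```

In the researchers' actual setup both sides are deep networks and the data are 3D particle fields, but the equilibrium logic is the same: the generator improves until the discriminator cannot distinguish fast simulations from slow conventional ones.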

"We couldn't get it to work for two years," Li said, "and suddenly it started working. We got beautiful results that matched what we expected. We even did some blind tests ourselves, and most of us couldn't tell which one was real and which one was fake."

Despite only being trained using small areas of space, the neural networks accurately replicated the large-scale structures that only appear in enormous simulations.

The simulations didn't capture everything, though. Because they focused on dark matter and gravity, smaller-scale phenomena such as star formation, supernovae and the effects of black holes were left out. The researchers plan to extend their methods to include the forces responsible for such phenomena, and to run their neural networks on the fly alongside conventional simulations to improve accuracy.

Read "AI Magic Just Removed One of the Biggest Roadblocks in Astrophysics" for more on this research.

Reference: "AI-assisted superresolution cosmological simulations" by Yin Li, Yueying Ni, Rupert A. C. Croft, Tiziana Di Matteo, Simeon Bird and Yu Feng, 4 May 2021, Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.2022038118

The research was powered by the Frontera supercomputer at the Texas Advanced Computing Center (TACC), the fastest academic supercomputer in the world. The team is one of the largest users of this massive computing resource, which is funded by the NSF Office of Advanced Cyberinfrastructure.

This research was funded by the NSF, the NSF AI Institute: Physics of the Future and NASA.


Clarius Introduces First Ultrasound System That Uses AI and Machine Learning to Recognize Anatomy for an Instant Window into the Body – PRNewswire

Posted: at 4:42 am

VANCOUVER, BC, May 19, 2021 /PRNewswire/ -- In its biggest Clarius Ultrasound App update to date, Clarius Mobile Health is introducing the ability for its wireless ultrasound systems to automatically detect the body anatomy being scanned by clinicians. This new feature is now available with the Clarius C3 HD multipurpose and the Clarius PA HD phased array ultrasound systems.

Ideally suited for emergency medicine, EMS, critical care and primary care, these high-definition scanners enable clinicians to quickly examine the abdomen, heart, lungs, bladder, and other superficial structures without additional interaction through the App. Users simply select Auto Preset AI and the Clarius App will automatically adjust settings to optimize imaging for the area being examined.

"Although machine learning and artificial intelligence have been applied to medical imaging over the past several years, this is the first commercially available application that enables an ultrasound system to recognize anatomy on a macro level, allowing the AI to recognize different structures in the human torso," says Kris Dickie, Vice President of Research and Development at Clarius. "We've labelled tens of thousands of ultrasound images within our vast database to achieve this exciting breakthrough, which will help clinicians to get the answers they need more rapidly."

In addition to Auto Preset AI, Version 8.0 of the Clarius Ultrasound App includes dozens of new features and enhancements, most of which are available across the entire Clarius product line. Clinicians across the medical spectrum can choose from ten wireless ultrasound scanners that are operated by the Clarius Ultrasound App, which can be downloaded from the App Store or Google Play store. The App is compatible with most iOS and Android smart devices for high-definition imaging. Always free, the Clarius Ultrasound App 8.0 offers many different capabilities for novice and expert users.

Enabling Ultrasound Mastery

Dr. Oron Frenkel, an emergency physician and Chairman of the Clarius Medical Advisory Board, is dedicated to expanding the use of point-of-care ultrasound (POCUS). He works closely with Clarius on ultrasound education and on developing features that help clinicians master ultrasound imaging.

"Ultrasound is an amazing tool that gives those of us who know how to use it an instant window into the patient's body," says Dr. Frenkel. "I'm excited about the many features in this Clarius Ultrasound App update that will help enhance ultrasound proficiency. Besides the Auto Preset AI, which will set up novice users for success from day one, we now have nearly 100 ultrasound tutorials that can be viewed in-app. Through this integration, users can easily toggle between watching the video and scanning their patient. Clarius Classroom provides an excellent way to learn."

Anatomical Photographs and New Ways to Share

Also new in the latest Clarius Ultrasound App is the ability for clinicians to capture and document photographs, taken with the mobile device camera, alongside the ultrasound images. This is an excellent way to provide context for education, reporting and patient information. Users can also share interesting cases more easily to their social networks for commentary; all images and clips remain anonymous to protect patient identity. The new sharing functionality allows users to take advantage of native mobile device integrations such as Apple's AirDrop.

Enhanced Workflows and Imaging

Since 2016, Clarius ultrasound scanners have gained a reputation for delivering high-resolution imaging comparable to high performance laptop systems, at a fraction of the cost. Among other enhancements, the new Clarius Ultrasound App offers advanced workflow features that include a TI-RADS reporting module, Lower Extremities Doppler packages, as well as a Labour and Delivery workflow that includes Biophysical Profile reporting. Additional advanced imaging features now include a Dynamic Range control, High Frame Rate Carotid Doppler imaging, and High-Definition Zoom capabilities.

Accurate, easy-to-use and affordable ultrasound imaging is here. Unlike alternatives, Clarius offers advanced innovation in-app, Clarius Cloud storage/management, Clarius Live telemedicine and Clarius Classroom at no additional cost, with zero subscription fees. Clinicians are invited to book a demo with a Clarius sonographer to see the difference high-definition imaging can make in delivering the best patient care.

About Clarius Mobile Health

Clarius is on a mission to make accurate, easy-to-use and affordable ultrasound tools available to all medical professionals in every specialty. With decades of experience in medical imaging, the team knows that great ultrasound imaging improves confidence and patient care. Today, Clarius handheld wireless ultrasound scanners connect to iOS and Android devices, delivering high-resolution ultrasound images traditionally only available with bulkier, high-end systems at a fraction of the cost.

More than one million high-definition scans have been performed using Clarius wireless handheld scanners. Clarius scanners are available in over 90 countries worldwide.

Learn more at http://www.clarius.com.

Media Contact: Gense Castonguay, Marketing Vice President
Phone: +1 (866) 657-9243 ext. 221 | Direct: +1 (604) 260-7077
Email: [emailprotected]

SOURCE Clarius Mobile Health


Forecast nabs $19M for its AI-based approach to project management and resource planning – TechCrunch

Posted: at 4:42 am

Project management has long been a people-led aspect of the workplace, but that has slowly been changing. Trends in automation, big data and AI have not only ushered in a new wave of project management applications, but have also led to a stronger culture of people willing to use them. Today, one of the startups building a platform for the next generation of project management is announcing some funding, a sign of the traction it's getting in the market.

Forecast, a platform and startup of the same name that uses AI to help with project management and resource planning (put simply, it uses artificial intelligence to read and integrate data from different enterprise applications in order to build a bigger picture of the project and potential outcomes), has raised $19 million to continue building out its business.

The company, headquartered in London with a development office in Copenhagen, plans to use some of the funding to expand to the U.S., and some to continue building out its platform and business.

This funding, a Series A, comes less than a year after the startup's commercial launch, and it was led by Balderton Capital, with previous investors Crane Ventures Partners, SEED Capital and Heartcore also participating.

Forecast closed a seed round in November 2019 and then launched just as the pandemic was kicking off. It was a time when some projects were indeed put on ice, but others that went ahead did so with more caution on all sorts of fronts: financial, organizational and technical. It turned out to be a right-place, right-time moment for Forecast, a tool that plays directly into providing a technical platform to manage all of that in a better way, and it tripled revenues during the year. Its customers include the likes of the NHS, the Red Cross and Etain. It says over 150,000 projects have been created and run through its platform to date.

Project management (the process of planning what you need to do, assigning resources to the task and tracking how well all of that actually goes to plan) has long been stuck between a rock and a hard place in the world of work.

It can be essential to getting things done, especially when there are multiple departments or stakeholders involved; yet it's forever an inexact science that often does not reflect all the complexities of an actual project, and therefore may not be as useful as it could or should be.

This was a predicament that founder and CEO Dennis Kayser knew all too well, having been an engineer and technical lead on a number of big projects himself. His pedigree is an interesting one: One of his early jobs was as a developer at Varien, where he built the first version of Magento. (The company was eventually rebranded as Magento and then acquired by eBay, then spun out, then acquired again, this time by Adobe for nearly $1.7 billion, and is now a huge player in the world of e-commerce tools.) He also spent years as a consultant at IBM, where among other things he helped build and formulate the first versions of ikea.com.

In those and other projects, he saw the pitfalls of project management not done right not just in terms of having the right people on a project at the right time, but the resource planning needed, better calculations of financial outcomes in the event of a decision going one way or the other, and so on.

He didn't say this outright, but I'm sure one of the points of contention was the fact that the first ikea.com site didn't actually have any e-commerce in it, just a virtual window display of sorts. That was because Ikea wanted to keep people shopping in its stores, away from the efficiency of just buying the one thing you actually need and not the 10 you do not. Yes, there are now plenty of ways of recirculating people to buy more when they select one item for a shopping cart (something the likes of Amazon has totally mastered), but this was years ago, when there were still even more opportunities for innovation than there are now. All of this is to say that, had there been better project management and resource planning tools to forecast the potential outcomes of one route or another, people advocating for a different approach could have made their case better. And maybe Ikea would have jumped on board with digital commerce far sooner than it did.

"Typically you get a lot of spreadsheets, people scattered across different tools that include accounting, CRM, Gitlab and more," Kayser said.

That became the impetus for building something that can take all of that into account and make a project management tool that, rather than just being a way of accounting to a higher-up, or reflecting only what someone can be bothered to update in the system, is something that can help a team.

"Connecting everything into our engine, we leverage data to understand what they are working on and what is the right thing to be working on, what the finances are looking like," he continued. "So if you work in product, you can plan out who is where, what resourcing you need, and what kind of people and skills you require." This is a more dynamic progression of some of the newer tools being used for project management today, targeting, in his words, people who "graduate from Monday and Asana" and need something more robust, either because they have too many people working on a project or because it's too complicated and "there is just too much stuff to handle."

Legacy tools still in use, he said, include Oracle to some degree and Mavenlink, which he describes as possibly Forecast's closest competitor, though its platform is aging.

Currently the Forecast platform has some 26 integrations of popular tools used for projects to produce its insights and intelligence, including Salesforce, GitLab, Google Calendar, and, as it happens, Asana. But given how fragmented the market is, and the signals one might gain from any number of other resources and apps, I suspect that this list will grow as and when its customers need more tools supported, or as Forecast works out what can be gleaned from different places to paint an even more accurate picture.

The result may never replace an actual human project manager, but it certainly starts to look like a digital twin (a phrase I have been hearing more and more these days) that will definitely help that person, and the rest of the team, work in a smarter way.

We are really excited to be an early investor in Forecast, said James Wise, a partner at Balderton Capital, in a statement. We share their belief that the next generation of SaaS products will be more than just collaboration tools, but use machine learning to actively solve problems for their users. The feedback we got from Forecast's customers was quite incredible, both in their praise for the platform and in how much of a difference it had already made to their operations. We look forward to supporting the company to scale this impact going forward.

Read the original post:

Forecast nabs $19M for its AI-based approach to project management and resource planning - TechCrunch


The disinformation threat from text-generating AI – Axios

Posted: at 4:42 am

A new report lays out the ways that cutting-edge text-generating AI models could be used to aid disinformation campaigns.

Why it matters: In the wrong hands, text-generating systems could be used to scale up state-sponsored disinformation efforts, and humans would struggle to know when they're being lied to.

How it works: Text-generating models like OpenAI's leading GPT-3 are trained on vast volumes of internet data, and learn to write eerily lifelike text from human prompts.
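GPT-3 itself can't be reproduced in a snippet, but the underlying mechanic, predicting a plausible next token from what came before, can be sketched with a toy bigram model (everything below is a drastic simplification for illustration; real systems use neural networks with billions of parameters, not word counts):

```python
from collections import defaultdict, Counter

def train_bigram_model(text):
    """Count, for each word, which words tend to follow it."""
    words = text.split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def generate(model, prompt, length=5):
    """Repeatedly append the most frequent follower of the last word."""
    out = prompt.split()
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

# A tiny "training corpus"; real models ingest terabytes of scraped text.
corpus = "the model writes text . the model learns patterns . the model writes text ."
model = train_bigram_model(corpus)
print(generate(model, "the model", length=3))  # -> the model writes text .
```

Scaled up by many orders of magnitude, the same prompt-and-continue loop is what makes machine-written disinformation cheap to produce.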

What they found: While "no currently existing autonomous system could replace the entirety of the IRA" (Russia's Internet Research Agency), algorithmically driven tech paired with experienced human operators produces results that are nothing less than frightening.

What to watch: While OpenAI has tightly restricted access to GPT-3, Buchanan notes that it's "likely that open source versions of GPT-3 will eventually emerge, greatly complicating any efforts to lock the technology down."

The bottom line: The report's authors write that, like much of social media more broadly, systems like GPT-3 seem "more adept as fabulists than as staid truth-tellers."


4 AI Trends that will Define the Future of Data Science – Analytics Insight

Posted: at 4:42 am

Prepare your AI ecosystem to meet the data challenges of the future

Companies across the world are increasingly adopting AI to keep their business operations running smoothly. The technology showed its constructive potential during the onset of COVID-19, performing a wide range of tasks that are complex and cumbersome for humans and bolstering employee productivity. From planning, forecasting, and predictive maintenance to customer service chatbots and data analytics, businesses are extracting the maximum from this disruptive technology.

AI is one of the most revolutionary technologies of our time. The current surge in AI research and investment has resulted in an incredible rise in AI applications. These applications promise not just better business outcomes but a better human experience as a whole. The technology is currently being applied across a wide array of industries, from healthcare, retail, and banking to logistics and transportation. While these industries are using AI to automate their processes and sort out their analytics, it is now time to think about the future possibilities of artificial intelligence.

Technology is developing at a breakneck rate, and industries are taking advantage of it just as quickly in how they manage data. AI is heading towards a vast ecosystem with several models and new dependencies. The tech world will witness new approaches to skills, governance, and machine learning engineering, where data scientists and software engineers collaborate to leverage machine learning.

So, what should organizations expect in the future? After all, the success of an organization's AI adoption will depend on how well it masters the complexity of altering its business processes to accommodate the change. Here are four AI trends organizations should bear in mind.

1. Upgrade first, create later.

Instead of being in a hurry to create an AI model, optimize and update the existing models already in place. As every industry's challenges and data requirements are different, AI models should be upgraded to suit the domain's specifications, and for that, data scientists with experience in the specific industry and its scientific techniques should be on your radar.

2. Transfer learning will scale NLP

Natural language processing will see massive growth in adoption, along with increased potential, thanks to transfer learning. Knowledge obtained from solving one problem will be stored and automatically applied to related problems, saving time on newer applications.
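The idea can be sketched in miniature: a "pretrained" feature extractor from a source task is frozen and reused, and only a tiny task-specific head is retrained on a handful of target examples. (The lexicon values and reviews below are invented for illustration; real transfer learning reuses learned neural network layers, not hand-written scores.)

```python
# "Pretraining": word -> feature score, hypothetically learned on a large
# source task. In a real system these would be learned embeddings.
PRETRAINED_FEATURES = {
    "great": 1.0, "love": 0.9, "excellent": 1.0,
    "bad": -1.0, "awful": -0.9, "poor": -0.8,
}

def featurize(text):
    """Reuse the frozen pretrained features: average the word scores."""
    scores = [PRETRAINED_FEATURES.get(w, 0.0) for w in text.lower().split()]
    return sum(scores) / len(scores)

def train_head(examples, epochs=20, lr=0.5):
    """Fine-tune only a 2-parameter perceptron 'head' on the new task."""
    w, b = 1.0, 0.0
    for _ in range(epochs):
        for text, label in examples:
            pred = 1 if w * featurize(text) + b > 0 else 0
            err = label - pred
            w += lr * err * featurize(text)
            b += lr * err
    return w, b

# Target task: classify product reviews with only four labeled examples.
train = [("great phone", 1), ("awful battery", 0),
         ("excellent screen", 1), ("poor build", 0)]
w, b = train_head(train)
predict = lambda t: 1 if w * featurize(t) + b > 0 else 0
print(predict("love this excellent camera"))  # -> 1
```

Because the pretrained features already encode the relevant signal, the head converges with almost no target-domain data, which is exactly the time saving the trend describes.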

3. Governance will become crucial

As newer predictive models flood the market, managing them all will become difficult. Only with proper governance frameworks and guidelines can organizations manage machine-generated data. Proper governance should follow ethical standards, which is why organizations should revisit the roles and responsibilities of data scientists.

4. Polish Existing Talent

As AI advances, organizations will want greater AI literacy and awareness at all levels. As the business world becomes more data-driven, organizations will only be able to make the most of the technology if all employees understand at least the basics of AI and data science. Hiring new talent for this purpose alone would be tedious, so organizations should instead train and polish the skills of existing employees, grounding them in the essential fundamentals of AI and data science.

AI has already made tremendous strides in leveraging data science and automation. Algorithms will only become more complex and exceed human abilities in the foreseeable future. To manage these advances, organizations should start preparing and strategizing now, before it's too late to catch up.



Investing in AI for Good – Stanford Social Innovation Review

Posted: May 11, 2021 at 11:05 pm

IDinsight enumerators demarcate an area of a field to estimate agricultural yield in Telangana, India. Data from surveys like this are a critical input into agricultural machine-learning models.

In the past 10 years, hundreds of projects have applied artificial intelligence (AI) to create social good. The right tool applied to an appropriate problem has the potential to drastically improve millions of lives through better service delivery and better-informed policy design. But what kind of investments do AI solutions need to be successful, and which applications have the most potential for social impact?

AI excels at helping humans harness large-scale or complex data to predict, categorize, or optimize at a scale and speed beyond human ability. We believe that more targeted, sustained investments in AI for social impact (sometimes called "AI for good"), rather than multiple short-term grants across a variety of areas, are important for two reasons. First, AI often has large upfront costs and low ongoing or marginal costs. AI systems can be hard to design and operationalize, and they require an array of potentially costly resources (such as training data, staff time, and high-quality data infrastructure) to get off the ground. Compared to the upfront investment, the cost of reaching each additional user is small. For philanthropies looking to drive positive social impact via AI, this often means that AI solutions must reach significant scale before they can offer a substantial social return on investment.
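The economics described here, large upfront costs and low marginal costs, imply a break-even scale that is easy to compute (all dollar figures below are hypothetical, purely to make the shape of the curve concrete):

```python
import math

def breakeven_users(upfront_cost, marginal_cost, value_per_user):
    """Smallest user count at which total social value exceeds total cost.

    Solves: n * value_per_user > upfront_cost + n * marginal_cost
    """
    if value_per_user <= marginal_cost:
        raise ValueError("each additional user costs more than the value delivered")
    return math.floor(upfront_cost / (value_per_user - marginal_cost)) + 1

# Hypothetical numbers: $2M to build the system, $0.50 per user to serve,
# $2.00 of social value delivered per user reached.
print(breakeven_users(2_000_000, 0.50, 2.00))  # -> 1333334
```

With those (invented) numbers, the tool creates net social value only after reaching well over a million users, which is why short-term, small-scale grants struggle to pay off in this space.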

Another reason why targeted, sustained funding is important is that any single point of failure (lack of training data, misunderstanding users' needs, biased results, technology poorly designed for unreliable Internet) can hobble a promising AI-for-good product. Teams using AI need to continually refine and maintain these systems to overcome obstacles, achieve scale, and maintain the ecosystems in which they live.

To narrow in on AI use cases that offer the most promise, our team at IDinsight synthesized existing research from the United Nations, McKinsey and Company, nonprofit practitioners, past Google.org work, and other groups in the social sector. From there, we identified about 120 use cases across 30 areas where developers are using AI to address social and environmental problems.

Using a detailed framework, our team then analyzed which of these areas will most likely lead to significant social impact. In addition to potential risks, this framework looks at:

As we looked through the use cases that scored highest against our framework, three criteria (large impact potential in depth and breadth, differential impact compared to non-AI tools, and a clear pathway to scale) stood out as useful shorthand to explain why certain areas are uniquely primed for investment. We also considered whether each area had sufficient proof-of-concept evidence illustrating its feasibility, as well as manageable risks that investment and careful modeling can safely overcome. (The full framework outlines a process for more robust and precise analysis.)

Our analysis pinpointed three specific areas that appear optimal for near-term investment: medical diagnostic tools, communication support for marginalized communities and languages, and agricultural yield prediction. It's important to note that these are not the only areas where AI could drive significant social good. Other areas we analyzed that scored well against our framework included medical research and drug discovery, natural disaster response, supply chain forecasting, and combatting misinformation. While we don't detail these areas here, we encourage others to explore them. Here's a closer look at our top three areas:

In some low- and middle-income countries (LMICs), where the ratio of health-care providers to patients is low, many patients fall through the cracks. Under-diagnosis or misdiagnosis of dangerous conditions is common due to traditional tests that are expensive or unavailable because of laboratory requirements; time-intensive testing, which overburdened workers may not conduct; and the stigmatization of certain health conditions, which dissuades many patients from getting tested in public clinics.

Moreover, health-care workers often need more training than they receive to accurately diagnose and treat health conditions. Poor diagnostics seem to greatly constrain the improvement of health-care outcomes in low-resource settings. For example, when the Center for Global Development simulated theoretical improvements to maternal and child health outcomes in sub-Saharan Africa under optimized clinical conditions (no shortage of drugs or absent health-care workers), health care quality only marginally improved.

There is a strong case for investing in AI tools that diagnose or screen for common conditions at the point of care. Many of these tools are already at the proof-of-concept stage and work with smartphone cameras or microphones to capture sounds, images, or video that could aid diagnosis. And while smartphone penetration among frontline health workers in LMICs is low (with significant variance across countries), it's expected to grow rapidly in the next few years. AI diagnostic tools have:

The most impactful diagnostic tools screen for underdiagnosed or misdiagnosed, treatable conditions that affect many people. AI tools for diagnosing many of these conditions, such as respiratory conditions like tuberculosis or asthma, malnutrition (including infant anthropometrics), anemia, and cervical cancer, already have promising proofs of concept. However, developers still need to validate and adapt these technologies so that they are practically useful for health workers.

Funders should also consider ecosystem investments that enable the creation of equitable AI tools, for example, training datasets that are accurate, representative of the populations they would serve, and collected with informed consent. Privacy platforms, where health-care organizations can securely store and share training data, are another type of valuable ecosystem investment. (Nightingale Open Science is building a platform to do this for some health conditions like cancer and cardiac arrest.) These investments can make a significant difference in how well AI tools serve the populations they seek to reach and shouldn't be overlooked.

As with any medical device, global health groups and regulatory agencies need to guarantee that AI tools meet common quality standards. They must ensure that algorithms are trained on representative data and are rigorously evaluated for fairness in the settings where they will be deployed. This is particularly important given that many health-care AI proofs of concept are built on non-representative data or data collected in laboratory settings, not in real-world contexts. If we are to realize game-changing advances and guard against potential risks, its important that philanthropies invest in correcting for these shortcomings.

Millions of people around the world are excluded from public services, education, the job market, and the Internet at large by virtue of their inability to speak majority languages. Just 10 of the 6,000 languages used in the world today make up about 87.3 percent of all online content. More than half of the content on the Internet is in English, and even some of the most commonly spoken languages in the world (including Arabic, Hindi, and Bengali) don't make the top 10.

Language barriers can cause extreme, acute harm during legal proceedings, medical visits, and humanitarian emergencies. Hospitals, social service agencies, immigration lawyers, schools, and natural disaster response systems use translators to provide services, but in most cases, too few translators are available to meet translation needs globally. And while translation and automated speech recognition models have made tremendous headway for majority languages (one of Google.org's AI Impact Challenge grantees, TalkingPoints, for example, helps non-English-speaking parents in the United States communicate with their children's teachers), support for minority languages needs more investment.

Innovation in this space can take many different forms. One is datasets and tools that make machine translation available for more language pairs, such as the translation of Bhojpuri to English. Another form is improved translation for specific subdomains in existing machine-translatable languages, such as the improved translation of French or Arabic medical terms. Innovation can also happen with tools that extend beyond translation to enhance the usability of common, natural language processing tools in multiple languages, such as sentiment analysis tools. Each of these is primed for investment because they have:

One promising opportunity for investment is improving general translation services for languages that many people speak but that are underrepresented in existing translation models. Many languages with millions of native speakers don't have access to translation on common platforms. Wired Magazine noted several of the biggest in a 2018 article: Bhojpuri (51 million speakers), Fula (24 million), Sylheti (11 million), Quechua (9 million), and Kirundi (9 million). Even within supported languages, general translation quality varies substantially.

Another opportunity lies in domain-specific translation improvements, that is, improvements to translation models for specific contexts. These models require accurate machine comprehension of jargon that may not be common in general natural-language data, and could be most helpful in settings where translation heavily affects individuals, such as helping migrants understand legal barriers when entering a new country or disenfranchised people navigate standardized government processes. It will be important to balance greater access with the potential risk of inaccurate translation.

Finally, most language models are based on convenience samples of data that happen to be available on the Internet, which can exacerbate biases. It's imperative that any large-scale investment in under-resourced language data is done in partnership with native speakers, and that members of civil society help guide which texts to use for model training. The representativeness and accuracy of AI translation and communication models depend on it.

One difficult but essential input to sustainable food systems is accurate and timely estimates of agricultural yields. These estimates are extremely important for making informed policy decisions that provide farmers with the support they need and for ensuring that millions of people have access to food.

In affluent countries, satellite-based yield-prediction algorithms, trained on administrative and farm-reported data about land use, plot boundary demarcations, and planting timelines, provide these estimates at periodic intervals throughout the growing season. This allows farmers to make better planning decisions. The algorithms help them get the right agricultural inputs (including hybrid seeds, fertilizer, and pesticides) to the right fields at the right time, and allow governments to respond more nimbly to shocks such as droughts and disease.

But in LMICs, where smallholder farms dominate agriculture, yield prediction isn't straightforward. Smallholders' plots are small, irregularly shaped, and frequently have more than one crop, making them difficult to identify or classify in remote-sensing imagery. Analog alternatives such as using crop-cut experiments are expensive, slow, and fraught with measurement challenges at scale. As such, farmers and government policy makers often make decisions without critical information on the state of agriculture.

Today, satellite imagery is increasingly available to the public, and has the high resolution and update frequency required to make predictions at the smallholder level. For example, the Sentinel-2 satellites collect imagery for nearly the entire planet every five days at about 10-meter-per-pixel resolution. Beyond the satellite imagery, acquiring the training data to build AI models requires substantial ground-level data collection upfront, a labor- and time-intensive prospect involving farm visits and crop cuts in remote, rural areas.
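A back-of-the-envelope pixel count shows why plot size matters so much at this resolution (the plot sizes below are illustrative, not from the article):

```python
def pixels_covering(plot_hectares, metres_per_pixel=10):
    """Roughly how many satellite pixels cover a plot of a given size."""
    plot_m2 = plot_hectares * 10_000          # 1 hectare = 10,000 m^2
    pixel_m2 = metres_per_pixel ** 2          # 10 m pixels -> 100 m^2 each
    return plot_m2 / pixel_m2

# A small smallholder plot (~0.5 ha) vs. a large commercial field (50 ha)
print(pixels_covering(0.5))   # -> 50.0
print(pixels_covering(50))    # -> 5000.0
```

At 10-meter resolution, a half-hectare plot spans only about 50 pixels, many of them mixed with neighboring plots or crops, while a large commercial field spans thousands, which is why smallholder classification is so much harder.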

High-quality research has nevertheless demonstrated the feasibility of using satellite imagery to estimate yields of smallholder farmers using publicly available imagery. These proofs-of-concept are generally limited to specific crops in specific regions, but with greater training data and model-building efforts, they could have:

Example investment opportunities include programs that recommend tailored agricultural inputs or inform macro-level government agriculture policy to increase or decrease food imports. As in other applications, the lack of training data is a major constraint on conducting yield prediction at scale in LMICs. With many different organizations and researchers working on related problems, there's a need for collection and labeling standards. Initiatives like ML Hub by the Radiant Earth Foundation will be important in hosting and sharing the data that all model builders need to create the next generation of AI-based yield-forecasting models.

In addition, as training data from crop cuts becomes more widely available, funding the creation of pre-trained algorithms that perform reasonably well off the shelf for common crops will be valuable. Similar to Google's BERT for natural language processing or VGG19 for image classification, pre-trained, open-source models can help data scientists focus on tweaking high-performing models for their use case, rather than starting from scratch. With proactive philanthropic investment, funders can insist on better, more representative training data and pre-trained models that are built with the needs of a diverse array of smallholder farmers in mind.

We offer this framework and analysis as a conversation starter, rather than a final verdict. AI holds tremendous promise to improve millions of lives around the world by providing the tools to directly combat health, communication, and economic challenges. Investing in solutions that address large-scale social problems, tap into the unique comparative advantages of AI, and have clear pathways to scale is a good place to start, though they may require patient capital. Developing useful, scalable AI tools is hard and requires a sustained commitment to building datasets, systems, and user-centric applications that can help solve societal challenges. By staying the course and seeing promising technological innovations through to scale, investors can unlock inordinate social value.


‘Nurses Are Essential’ to AI Integration in Healthcare – HealthTech Magazine

Posted: at 11:04 pm

For Dr. Erich Huang, Duke Health's chief data officer for quality, one issue often overlooked when discussing AI in healthcare is the importance of the user experience.

"It's not just an abstract Westworld brain sitting out there," Huang says. "It has to be well integrated with clinical workflow, and nurses are essential to that."

With the Sepsis Watch early warning program, Huang says, nurses were able to apply their professional experience to kick off the cascade of actions that would follow an AI-produced alert.

"One of the big issues with electronic health records is fatigue from alerts," he says. "If you have a human intermediary who can serve as a first line, that's an important component, because we can then triage things appropriately."

Huang adds that he'd like to see more nurse-initiated thinking about automated processes that would make nurses' jobs easier.

"I'd like to hear nursing staff identify inefficiencies they deal with, and think about the things AI and ML would be helpful in improving, allowing them to spend more time with their patients," he says. "All clinical staff really need to be well integrated into the development or selection of these AI-based apps."


Nurses must ensure that advanced technologies such as AI don't cause harm or compromise the nature of the human interactions and relationships that are central to their job, says Liz Stokes, director of the American Nurses Association's Center for Ethics and Human Rights.

"Nurses must also be sensitive to unintended consequences related to the development and use of AI technologies," Stokes says. "As with any other advanced technology in practice, nurses should ensure that the AI being used is not biased, and they must express their concerns if there is potential or actual bias occurring."

Though AI can produce efficiencies in processes, such as prediction and diagnosis, Stokes says it has the potential to lower efficiency and increase stress and burnout if the cognitive demand on clinical teams is higher.

Adequate training and education for nurses is imperative, Stokes adds, and IT leaders need to involve nurses during the development and consideration of AI-related technologies. Those leaders should also collaborate with nurse informaticists, ethicists, engineers and other stakeholders when AI implementation is considered.


IBM Think 2021 kicks off with AI innovations and some interesting quantum news – The Next Web

Posted: at 11:04 pm

IBM today kicked off its annual THINK conference with a hefty dose of AI news and some tantalizing tidbits about the company's current quantum computing endeavors.

We've got the skinny, but there's a lot to get through, so strap in and get comfy.

AutoSQL and Cloud Pak for Data: IBM's touting a breakthrough in cloud-based database management. Basically, where businesses serve up answers to customer queries using cloud-managed AI databases, this will significantly speed things up.

According to IBM, the new system answers distributed queries as much as 8x faster than before, and at nearly half the cost of comparable data warehouses.

Per an IBM press release:

With the launch of AutoSQL, IBM Cloud Pak for Data now includes the highest-performing cloud data warehouse on the market (based on our benchmarking study) that can run seamlessly across any hybrid multi-cloud environment including private clouds, on-premises and any public cloud.

Quick take: It's tempting to call this a bit hyperbolic, but IBM's brought receipts in the form of internal benchmarking. Anything that speeds up customer-facing AI is a boon for businesses and the people who use their products. Get more info here.

Watson Orchestrate: The no code AI paradigm is picking up steam and this is a great example of how that can be useful. Orchestrate is an AI system designed to augment workflows for individuals.

According to IBM, its meant to be interactive and easy to use:

Requiring no IT skills to use, Watson Orchestrate enables professionals to initiate work in a very human way, using collaboration tools such as Slack and email in natural language. It also connects to popular business applications like Salesforce, SAP and Workday.

Quick take: I hate virtual assistants because they're virtually useless. But this is integrated, not an external talking bot, so it looks like something that could legitimately accelerate workflows for people who tend to have a lot going on. There's more info available here on IBM's website.

Maximo Mobile: IBM recently launched this new mobile asset management platform for workers tied to infrastructure-scale jobs such as electric company employees or maintenance crews who work on bridges and roads.

Quick take: Ever wonder why it takes so long for the power to come back on after something goes wrong? According to that video, when it comes to the people who maintain large assets, fix our powerlines, and operate refineries, as much as 15-20% of a technician's time can be spent on paperwork.

That's ridiculous!

Maximo Mobile is IBM's solution to the data and asset management issues these large-scale operations face in the field.

Mono2Micro: A common problem for businesses is figuring out how to get legacy applications into new-fangled AI systems.

Per IBM:

Mono2Micro uses AI developed by IBM Research to analyze large enterprise applications and provide recommendations on how to best adapt them for the move to cloud.

Quick take: This is simple, but brilliant. Basically, when it comes to integrating legacy applications into hybrid-cloud environments, the only option used to be manually changing the code. Now, with Mono2Micro, that process can be automated. This could make it more cost-effective to port your old apps than to build something new from the ground up. Check out more info here.
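IBM hasn't published Mono2Micro's internals in this announcement, but one core idea behind monolith partitioning, clustering classes that interact heavily into the same candidate microservice, can be sketched crudely with union-find (the class names, call counts, and threshold below are all invented for illustration):

```python
from collections import defaultdict

def suggest_partitions(call_counts, threshold=10):
    """Group classes that call each other frequently into one candidate
    microservice. A crude stand-in for real static/dynamic analysis."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:          # path halving
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for (a, b), count in call_counts.items():
        find(a); find(b)               # register both classes
        if count >= threshold:         # strong coupling -> same service
            union(a, b)

    groups = defaultdict(set)
    for cls in parent:
        groups[find(cls)].add(cls)
    return sorted(map(sorted, groups.values()))

# Hypothetical call counts observed while exercising a monolith:
calls = {("Cart", "Pricing"): 120, ("Cart", "Inventory"): 95,
         ("Reports", "Ledger"): 40, ("Cart", "Ledger"): 2}
print(suggest_partitions(calls))
# -> [['Cart', 'Inventory', 'Pricing'], ['Ledger', 'Reports']]
```

The rare Cart-to-Ledger calls stay a cross-service boundary, while the chatty clusters become service candidates; real tools like Mono2Micro add far more signal (runtime traces, data dependencies) on top of this kind of partitioning.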

IBM also announced several new initiatives and more information on its $1 billion investment in its partner ecosystem, but most of these announcements were news we'd heard before.


Where things got real interesting is when IBM announced a new quantum computing breakthrough.

Qiskit software boosts: IBM today announced a 120x increase in quantum circuit processing speed thanks to IBM's hybrid-cloud solution.

Instead of storing data on the physical quantum computer, and thus necessitating more complex architecture and power requirements, IBM's keeping things hybrid by enabling high-speed cloud-based data transfer via its Qiskit Runtime.

Per IBM:

By introducing Qiskit Runtime, IBM is enabling quantum systems to run complex calculations such as chemical modeling and financial risk analysis in hours, instead of several weeks. To show the power of the software, IBM recently demonstrated how the lithium hydride molecule (LiH) could be modeled on a quantum device in nine hours, when previously it took 45 days.
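The headline 120x figure is consistent with IBM's own LiH example: 45 days is 1,080 hours, and 1,080 divided by 9 is 120.

```python
# Per IBM's figures: LiH modeling previously took 45 days,
# and 9 hours with Qiskit Runtime.
days_before = 45
hours_after = 9
speedup = (days_before * 24) / hours_after
print(f"{speedup:.0f}x")  # -> 120x
```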

Quick take: If we're ever going to have a useful quantum computer, we need to scale the experimental builds we're currently working with. IBM's new Qiskit Runtime service offloads a portion of the process to the cloud so the quantum part of the computer can do what it does unfettered.

It'll be a while before we see exactly what this means, but it's reason for optimism in a field that already looks pretty bright. You can learn more here.


