Category Archives: Ai

Ada Health built an AI-driven startup by moving slowly and not breaking things – TechCrunch

Posted: March 5, 2020 at 6:24 pm

When Ada Health was founded nine years ago, hardly anyone was talking about combining artificial intelligence and physician care outside of a handful of futurists.

But the chatbot boom gave way to a powerful combination of AI-augmented health care which others, like Babylon Health in 2013 and KRY in 2015, also capitalized on. The journey Ada was about to take was not an obvious one, so I spoke to Dr. Claire Novorol, Ada's co-founder and chief medical officer, at the Slush conference last year to unpack their process and strategy.

Co-founded with Daniel Nathrath and Dr. Martin Hirsch, the startup initially set out to be an assistant to doctors rather than something that would have a consumer interface. At the beginning, Novorol said, they did not talk about what they were building as AI so much as pure machine learning.

Years later, Ada is a free app, and just like the average chatbot, it asks a series of questions and employs an algorithm to make an initial health assessment. It then proposes next steps, such as making an appointment with a doctor or going to an emergency room. But Ada's business model is not to supplant doctors but to create partnerships with healthcare providers and encourage patients to use it as an early screening system.

It was Novorol who convinced the company to pivot from creating tools for doctors into a patient-facing app that could save physicians time by providing patients with an initial diagnosis. Since the app launched in 2016, Ada has gone on to raise $69.3 million. In contrast, Babylon Health has raised $635.3 million, while KRY has raised $243.6 million. Ada claims to be the top medical app in 130 countries and has completed more than 15 million assessments to date.

Excerpt from:

Ada Health built an AI-driven startup by moving slowly and not breaking things - TechCrunch

Facebook's new AI-powered moderation tool helps it catch billions of fake accounts – The Verge

Posted: at 6:24 pm

Facebook is opening up about the behind-the-scenes tools it uses to combat fake account creation on its platforms, and the company says it has a new artificial intelligence-powered method known as Deep Entity Classification (DEC) that's proved especially effective.

DEC is a machine learning model that doesn't just take into account the activity of the suspect account, but also evaluates all of the surrounding information, including the behaviors of the accounts and pages the suspect account interacts with. Facebook says it's reduced the estimated volume of spam and scam accounts by 27 percent.

So far, DEC has helped Facebook thwart more than 6.5 billion fake accounts that scammers and other malicious actors created or tried to create last year. A vast majority of those accounts are actually caught in the account creation process, and even those that do get through tend to get discovered by Facebooks automated systems before they are ever reported by a real user.

Still, Facebook estimates that around 5 percent of all 2.89 billion monthly active users currently on the platform are fake accounts belonging to what Facebook considers violators of its terms of service. That typically means scammers, spammers, and people attempting to phish vulnerable users or use other methods of securing sensitive personal information for some sort of financial or identity theft scheme.

That's where DEC comes in. It takes a sophisticated and holistic approach to analyzing user behavior that takes in around 20,000 features per profile; for instance, it'll take into account the friending activity of an account the suspicious and potentially fake account sent a friend request to, and not just the suspicious account itself. The goal is to combat the ways malicious actors replicate genuine behavior. Over time, Facebook says, savvy spammers will get better and better at pretending to be real users, at least in the way Facebook's automated systems view them. DEC is supposed to counter that by looking deeper into how the accounts that account interacts with behave on the platform, too.
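
To make the thinking concrete, here is a minimal, hypothetical Python sketch of an "entity plus neighborhood" feature approach in the spirit of what is described above. It is not Facebook's DEC implementation: the feature names, thresholds, and scoring rule are invented for illustration, and a real system would feed tens of thousands of far richer features into a trained model.

```python
# Illustrative sketch only, not Facebook's DEC: score an account using both its own
# behavior and aggregate properties of the accounts it interacts with.
from statistics import mean

def entity_features(account, graph):
    """Combine an account's own features with aggregates over its neighbors."""
    features = {
        "age_days": account["age_days"],
        "friend_requests_sent": account["friend_requests_sent"],
    }
    neighbors = [graph[n] for n in account["interacts_with"] if n in graph]
    features["neighbor_avg_age_days"] = (
        mean(n["age_days"] for n in neighbors) if neighbors else 0.0
    )
    return features

def fake_account_score(features):
    """Toy scoring rule standing in for a trained classifier."""
    score = 0.0
    if features["age_days"] < 7:                  # brand-new account
        score += 0.4
    if features["friend_requests_sent"] > 50:     # mass friending
        score += 0.3
    if features["neighbor_avg_age_days"] < 30:    # mostly interacts with new accounts
        score += 0.3
    return min(score, 1.0)

# Example: a two-day-old account that mass-friends other brand-new accounts.
graph = {
    "a1": {"age_days": 2, "friend_requests_sent": 80, "interacts_with": ["a2", "a3"]},
    "a2": {"age_days": 3, "friend_requests_sent": 60, "interacts_with": []},
    "a3": {"age_days": 5, "friend_requests_sent": 70, "interacts_with": []},
}
print(fake_account_score(entity_features(graph["a1"], graph)))  # 1.0 -> flag for review
```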

This will be vitally important to Facebook as the 2020 US presidential election approaches. Promoting spam and trying to scam users is just one facet of the fake account problem on Facebook. The company has already acknowledged foreign operations from Iran, Russia, and elsewhere dedicated to using social platforms to influence news narratives, voting behavior, and other integral election matters. And those operations are getting more sophisticated as time goes on.

Last year, Facebook and Twitter shut down a sprawling network of fake accounts pushing pro-Trump messaging that used AI tools to generate real-looking profile photos. These weren't scraped photos, but generated ones using neural networks, making them harder to flag as fake. It's these kinds of methods that will keep Facebook hard at work trying to stay one step ahead, and the company is acknowledging that its DEC approach will need to be continually reworked to ensure that it can remain effective against the spammers' ever-changing strategies.

Continued here:

Facebook's new AI-powered moderation tool helps it catch billions of fake accounts - The Verge

App, AI Work Together to Provide Rapid At-Home Assessment of Coronavirus Risk – Global Health News Wire

Posted: at 6:24 pm

A coronavirus app coupled with machine intelligence will soon enable an individual to get an at-home risk assessment based on how they feel and where theyve been in about a minute, and direct those deemed at risk to the nearest definitive testing facility, investigators say.

It will also help provide local and public health officials with real-time information on emerging demographics of those most at risk for coronavirus so they can better target prevention and treatment initiatives, the Medical College of Georgia investigators report in the journal Infection Control & Hospital Epidemiology.

"We wanted to help identify people who are at high risk for coronavirus, help expedite their access to screening and to medical care and reduce spread of this infectious disease," says Dr. Arni S.R. Srinivasa Rao, director of the Laboratory for Theory and Mathematical Modeling in the MCG Division of Infectious Diseases at Augusta University and the study's corresponding author.

Rao and co-author Dr. Jose Vazquez, chief of the MCG Division of Infectious Diseases, are working with developers to finalize the app, which should be available within a few weeks and will be free because it addresses a public health concern.

The app will ask individuals where they live; other demographics like gender, age and race; and about recent contact with an individual known to have coronavirus or who has traveled to areas, like Italy and China, with a relatively high incidence of the viral infection in the last 14 days.

It will also ask about common symptoms of infection and their duration including fever, cough, shortness of breath, fatigue, sputum production, headache, diarrhea and pneumonia. It will also enable collection of similar information for those who live with the individual but who cannot fill out their own survey.

Artificial intelligence will then use an algorithm Rao developed to rapidly assess the individual's information, send them a risk assessment (no risk, minimal risk, moderate risk or high risk) and alert the nearest facility with testing capability that a health check is likely needed. If the patient is unable to travel, the nearest facility will be notified of the need for a mobile health check and possible remote testing.
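
As an illustration of how such a triage step might look in code, here is a hedged Python sketch. The question fields, categories, and rules are hypothetical; the article only describes the inputs and the four risk levels, not Rao's actual algorithm.

```python
# Hypothetical triage sketch based on the inputs and risk levels described above;
# not the actual algorithm developed by Rao.
def assess_risk(answers):
    """Return one of: 'no risk', 'minimal risk', 'moderate risk', 'high risk'."""
    symptoms = {"fever", "cough", "shortness of breath", "fatigue",
                "sputum", "headache", "diarrhea", "pneumonia"}
    reported = symptoms & set(answers.get("symptoms", []))
    exposed = answers.get("contact_with_confirmed_case", False)
    traveled = answers.get("travel_to_high_incidence_area_14d", False)

    if exposed and reported:
        return "high risk"
    if exposed or traveled:
        return "moderate risk"
    if reported:
        return "minimal risk"
    return "no risk"

print(assess_risk({"symptoms": ["fever", "cough"],
                   "contact_with_confirmed_case": True}))  # -> high risk
```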

"The collective information of many individuals will aid rapid and accurate identification of geographic regions, including cities, counties, towns and villages, where the virus is circulating, and the relative risk in that region so health care facilities and providers can better prepare resources that may be needed," Rao says. It also will help investigators learn more about how the virus is spreading, the investigators say.

Once the app is ready, it will live on the augusta.edu domain and likely in app stores on the iOS and Android platforms.

"It is imperative that we evaluate novel models in an attempt to control the rapidly spreading virus," Rao and Vazquez write.

Technology can assist faster identification of possible cases and aid timely intervention, they say, noting the coronavirus app could be easily adapted for other infectious diseases. The accessibility and rapidity of the app coupled with machine intelligence means it also could be utilized for screening wherever large crowds gather, such as major sporting events.

While symptoms like fever and cough cast a wide net, they are needed in order not to miss patients, Vazquez notes.

"We are trying to decrease the exposure of people who are sick to people who are not sick," says Vazquez. "We also want to ensure that people who are infected get a definitive diagnosis and get the supportive care they may need," he says.

While stressing that coronavirus infection is not yet a pandemic, defined by the World Health Organization as the worldwide spread of a new disease to which people have no immunity, as in numerous flu pandemics like H1N1, or swine flu, Vazquez says the same precautions apply. "This is what you have to do with pandemics," says Vazquez. "You don't want to expose an infected person to an uninfected person." If problems with infections persist and grow, drive-thru testing sites may be another need, he says.

The investigators hope this readily available method to assess an individual's risk will actually help quell any developing panic or undue concern over coronavirus, or COVID-19.

"People will not have to wait for hospitals to screen them directly," says Rao. "We want to simplify people's lives and calm their concerns by getting information directly to them."

If concern about coronavirus prompted a lot of people to show up at hospitals, many of which already are at capacity with flu cases, it would further overwhelm those facilities and increase potential exposure for those who come, says Vazquez.

Tests for the coronavirus, which include a nostril and mouth swab and sputum analysis, are now being more widely distributed by the CDC, and the Food and Drug Administration also has given permission to some of the more sophisticated labs, particularly those at academic medical centers like Augusta University Medical Center, to use their own methods to look for signs of the viral infection, which the hospital will be pursuing.

As of this week, about 90,000 cases of coronavirus have been reported in 62 countries, with China having the most cases.

The CDC and WHO say that health care providers should obtain a detailed travel history of individuals being evaluated with fever and acute respiratory illness. They also have recommendations in place for how to prevent spread of the disease while treating patients.

Currently when people do present, for example, at the Emergency Department at AU Medical Center, with concerns about the virus, they are brought in by a separate entrance and escorted to a negative pressure room by employees dressed in hazmat suits per CDC protocols, Vazquez says. As of today, all those who have presented at AU Medical Center have tested negative, he says.

See more here:

App, AI Work Together to Provide Rapid At-Home Assessment of Coronavirus Risk - Global Health News Wire

This AI detects eye disease in newborn babies – The Next Web

Posted: at 6:24 pm

A new AI device can identify babies at risk of going blind by analyzing images of their eyes.

The system could help save the vision of babies born prematurely, who are particularly at risk of damage to their retinas, as the fragile vessels in their eyes can leak and grow abnormally. If this worsens, the retina can detach and cause loss of vision.

The National Eye Institute-funded study focused on a particularly dangerous form of this condition: aggressive posterior retinopathy of prematurity (AP-ROP).

This disease is difficult to detect as the symptoms can be very subtle. Clinicians try to find it by looking at images of an eyeball's interior lining, known as the fundus, but their diagnoses often differ.

"Even the most highly experienced evaluators have been known to disagree about whether fundus images indicate AP-ROP," said J. Peter Campbell, the study's lead investigator.

His research team suspected AI could do a better job.

A previous study had already shown that deep learning could more accurately detect retinal damage than humans. But that system didn't focus on AP-ROP, the most severe form of the condition.

The National Eye Institute study decided to investigate whether a similar approach would work with AP-ROP.

To do this, they tracked the development of 947 newborn babies over time, while the AI and human experts analyzed thousands of fundus images for signs of disease. The babies' demographic data, comorbidities, and age since conception were all evaluated. Any correlations could suggest what causes the condition.

The system was able to quantify specific symptoms of AP-ROP, such as the dilation and twists of the retinal vessels.

The results also created a quantifiable profile of AP-ROP patients. The infants who developed the condition were born lighter and earlier than those who did not, and none of the babies born after 26 weeks developed the disease.
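
A toy Python sketch of how that profile could feed a screening rule is below. Only the 26-week observation comes from the study; the birth-weight cutoff is a made-up placeholder, and a real system would rely on the imaging model rather than demographics alone.

```python
# Toy screening rule derived from the profile described above; the weight cutoff
# is hypothetical and for illustration only.
def flag_for_close_screening(gestational_age_weeks, birth_weight_grams):
    # In the study, no infant born after 26 weeks developed AP-ROP, so the most
    # premature and lightest infants are prioritized for frequent fundus imaging.
    return gestational_age_weeks <= 26 and birth_weight_grams < 1500

print(flag_for_close_screening(24, 700))   # True: prioritize frequent fundus imaging
print(flag_for_close_screening(30, 1500))  # False: outside the observed risk profile
```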

The researchers believe this will help identify at-risk babies more quickly, while also providing data that can improve understanding of AP-ROP.

And it may not be too long until the system is saving the vision of babies: the Food and Drug Administration is fast-tracking the device for approval.

Visit link:

This AI detects eye disease in newborn babies - The Next Web

How people are using AI to detect and fight the coronavirus – VentureBeat

Posted: at 6:24 pm

The spread of the COVID-19 coronavirus is a fluid situation changing by the day, and even by the hour. The growing worldwide public health emergency is threatening lives, but it's also impacting businesses and disrupting travel around the world. The OECD warns that coronavirus could cut global economic growth in half, and the Federal Reserve will cut federal interest rates following the worst week for the stock market since 2008.

Just how the COVID-19 coronavirus will affect the way we live and work is unclear because it's a novel disease spreading around the world for the first time, but it appears that AI may help fight the virus and its economic impact.

A World Health Organization report released last month said that AI and big data are a key part of the response to the disease in China. Here are some ways people are turning to machine learning solutions in particular to detect, or fight against, the COVID-19 coronavirus.

On February 19, the Danish company UVD Robots said it struck an agreement with Sunay Healthcare Supply to distribute its robots in China. UVD's robots rove around health care facilities spreading UV light to disinfect rooms contaminated with viruses or bacteria.

XAG Robot is also deploying disinfectant-spraying robots and drones in Guangzhou.

UC Berkeley robotics lab director and DexNet creator Ken Goldberg predicts that if the coronavirus becomes a pandemic, it may lead to the spread of more robots in more environments.

Robotic solutions that, for example, limit the exposure of medical or service industry staff in hotels are being deployed in some places today, but not every robot being rolled out is a winner.

The startup Promobot advertises itself as a service robot for business and recently showed off its robot in Times Square. The robot deploys no biometric or temperature analysis sensors. It just asks four questions in a screening, like "Do you have a cough?" It also requires people to touch a screen to register a response. A Gizmodo reporter who spoke to the bot called it "dumb," but that's not even the worst part: asking people in the midst of an outbreak soon to be declared a global pandemic to physically touch screens seems awfully counterproductive.

One way AI detects coronavirus is with cameras equipped with thermal sensors.

A Singapore hospital and public health facility is performing real-time temperature checks, thanks to startup KroniKare, with a smartphone and thermal sensor.

An AI system developed by Chinese tech company Baidu that uses an infrared sensor and AI to predict people's temperatures is now in use in Beijing's Qinghe Railway Station, according to an email sent to Baidu employees that was shared with VentureBeat.

[Image: Health officers screen arriving passengers from China with thermal scanners at Changi International airport in Singapore on January 22, 2020. Credit: Roslan Rahman / Getty Images]

The Baidu approach combines computer vision and infrared to detect the forehead temperature of up to 200 people a minute to within 0.5 degrees Celsius. The system alerts authorities if it detects a person with a temperature above 37.3 degrees Celsius (99.1 degrees Fahrenheit), since fever is a tell-tale sign of coronavirus. Baidu may implement its temperature monitoring next in Beijing South Railway Station and on Line 4 of the Beijing Subway.
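
The alerting step itself is simple to express. Here is a minimal Python sketch using the 37.3 degrees Celsius threshold mentioned above; Baidu's real pipeline, which combines computer vision with the infrared sensor to locate foreheads in each frame, is of course far more involved, and the data format here is invented.

```python
# Minimal sketch of the fever-alerting logic described above (illustrative only).
FEVER_THRESHOLD_C = 37.3

def screen_frame(forehead_temps_c):
    """Given per-person forehead temperatures from one thermal frame, return alerts."""
    return [(person_id, temp)
            for person_id, temp in forehead_temps_c.items()
            if temp >= FEVER_THRESHOLD_C]

print(screen_frame({"p1": 36.6, "p2": 37.8, "p3": 37.1}))  # -> [('p2', 37.8)]
```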

Last month, Shenzhen MicroMultiCopter said in a statement that it's deployed more than 100 drones in various Chinese cities. The drones are capable of not only thermal sensing but also spraying disinfectant and patrolling public places.

One company, BlueDot, says it recognized the emergence of high rates of pneumonia in China nine days before the World Health Organization. BlueDot was founded in response to the SARS epidemic. It uses natural language processing (NLP) to skim the text of hundreds of thousands of sources to scour news and public statements about the health of humans or animals.
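
As a toy illustration of the general idea, the sketch below uses simple keyword matching as a crude stand-in for NLP; BlueDot's actual models and sources are proprietary, and the term list and scoring here are invented.

```python
# Crude keyword-based stand-in for the NLP scanning described above (illustrative only).
OUTBREAK_TERMS = {"pneumonia", "unexplained", "cluster", "respiratory", "outbreak"}

def outbreak_signal(headline):
    """Return a rough signal score: how many outbreak-related terms appear."""
    words = {word.strip(".,!?").lower() for word in headline.split()}
    return len(words & OUTBREAK_TERMS)

headlines = [
    "Cluster of unexplained pneumonia cases reported in Wuhan",
    "Local sports team wins championship",
]
for headline in headlines:
    print(outbreak_signal(headline), headline)  # higher score -> flag for human review
```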

Metabiota, a company that's working with the U.S. Department of Defense and intelligence agencies, estimates the risk of a disease spreading. It bases its predictions on factors like illness symptoms, mortality rate, and the availability of treatment.

The 40-page WHO-China Mission report released last month about initial response to COVID-19 cites how the country used big data and AI as part of its response to the disease. Use cases include AI for contact tracing to monitor the spread of disease and management of priority populations.

But academics, researchers, and health professionals are beginning to produce other forms of AI as well.

On Sunday, researchers from Renmin Hospital of Wuhan University, Wuhan EndoAngel Medical Technology Company, and China University of Geosciences shared work on deep learning that detected COVID-19 with what they claim is 95% accuracy. The model is trained with CT scans of 51 patients with laboratory-confirmed COVID-19 pneumonia and more than 45,000 anonymized CT scan images.

"The deep learning model showed a performance comparable to expert radiologists and improved the efficiency of radiologists in clinical practice. It holds great potential to relieve the pressure on frontline radiologists, improve early diagnosis, isolation, and treatment, and thus contribute to the control of the epidemic," reads a preprint paper about the model published on medRxiv.org. (A preprint paper means it has not yet undergone peer review.)

The researchers say the model can decrease confirmation time from CT scans by 65%. In similar efforts taking place elsewhere, machine learning from Infervision that's trained on hundreds of thousands of CT scans is detecting coronavirus at Zhongnan Hospital in Wuhan.

In initial results shared in another preprint paper, updated today on medRxiv using clinical data from Tongji Hospital in Wuhan, a new system is capable of predicting survival rates with more than 90% accuracy.

The work was done by researchers from the School of Artificial Intelligence and Automation, as well as other departments from Huazhong University of Science and Technology in China.

The coauthors say that coronavirus survival estimation today can draw from more than 300 lab or clinical results, but their approach only considers results related to lactic dehydrogenase (LDH), lymphocyte, and high-sensitivity C-reactive protein (hsCRP).
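
In the spirit of that three-feature approach, here is a hedged Python sketch of a simple decision rule over the same lab values; the cutoffs below are hypothetical placeholders, not the thresholds learned in the Tongji Hospital paper.

```python
# Hypothetical three-feature rule, illustrating the approach rather than the paper's model.
def predicted_outcome(ldh_u_per_l, lymphocyte_pct, hscrp_mg_per_l):
    """Return a coarse risk label from three lab values (cutoffs are placeholders)."""
    if ldh_u_per_l > 365:
        return "higher risk"
    if hscrp_mg_per_l > 40:
        return "higher risk"
    if lymphocyte_pct < 15:
        return "higher risk"
    return "lower risk"

print(predicted_outcome(ldh_u_per_l=250, lymphocyte_pct=25, hscrp_mg_per_l=5))  # lower risk
```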

In another paper, "Deep Learning for Coronavirus Screening," released last month on arXiv by collaborators working with the Chinese government, the model uses multiple CNN models to classify CT image datasets and calculate the infection probability of COVID-19. In preliminary results, they claim the model is able to predict the difference between COVID-19, influenza-A viral pneumonia, and healthy cases with 86.7% accuracy.

The deep learning model is trained with CT scans of influenza patients, COVID-19 patients, and healthy people from three hospitals in Wuhan, including 219 images from 110 patients with COVID-19.
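
For readers wondering what such a classifier looks like in code, below is a minimal PyTorch sketch of a multi-class CT-slice model. The class list mirrors the description above, but the architecture, input size, and layer sizes are illustrative stand-ins, not the paper's actual network.

```python
# Illustrative three-class CT-slice classifier in PyTorch (not the paper's model).
import torch
import torch.nn as nn

class TinyCTClassifier(nn.Module):
    def __init__(self, num_classes=3):  # COVID-19, influenza-A viral pneumonia, healthy
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Assumes 224x224 single-channel input slices, downsampled twice to 56x56.
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 56 * 56, num_classes))

    def forward(self, x):
        return self.head(self.features(x))

model = TinyCTClassifier()
scan_batch = torch.randn(4, 1, 224, 224)         # stand-in for preprocessed CT slices
probs = torch.softmax(model(scan_batch), dim=1)  # per-class infection probabilities
print(probs.shape)                               # torch.Size([4, 3])
```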

Because the outbreak is spreading so quickly, those on the front lines need tools to help them identify and treat affected people with just as much speed. The tools need to be accurate, too. It's unsurprising that there are already AI-powered solutions deployed in the wild, and it's almost a certainty that more are forthcoming from the public and private sector alike.

Excerpt from:

How people are using AI to detect and fight the coronavirus - VentureBeat

Vatican AI Ethics Pledge Will Struggle To Be More Than PR Exercise – Forbes

Posted: at 6:24 pm

[Image: The Vatican is seeking to encourage more tech companies to consider the ethical implications of technology when designing and using AI systems. Photo by Tiziana FABI/AFP via Getty Images]

The Vatican cares about AI. Last week, it signed an ethical resolution on the use of artificial intelligence. Co-signed by IBM and Microsoft, this resolution lays down a number of principles for the development and deployment of AI-driven technology. It also commits the co-signatories to collaborate with the Roman Catholic Church in order to "promote 'algor-ethics', namely the ethical use of AI."

Superficially, the Vatican's resolution is timely and very well-intentioned. However, it's unlikely to succeed in making AI more ethical, for a number of significant reasons.

Dubbed the Rome Call for AI Ethics, the resolution voluntarily commits signatories to uphold six principles when designing AI: transparency, inclusion, responsibility, impartiality, reliability, and security and privacy.

Given that artificial intelligence already has a bad rap for discriminating against women and ethnic minorities, the need to address its ethical implications is growing stronger by the day. As such, it's not surprising to hear the declaration's co-signatories herald its signing as a milestone in the development of artificial intelligence.

"Microsoft is proud to be a signatory of the Rome Call for AI Ethics, which is an important step in promoting a thoughtful, respectful, and inclusive conversation on the intersection of digital technology and humanity," said Microsoft President Brad Smith.

Likewise, IBM's VP John Kelly praised the initiative for focusing on the question of who will benefit from the proliferation of AI. "The Rome Call for AI Ethics reminds us that we have to choose carefully whom AI will benefit and we must make significant concurrent investments in people and skills. Society will have more trust in AI when people see it being built on a foundation of ethics, and that the companies behind AI are directly addressing questions of trust and responsibility."

There's no doubt that the AI and wider tech industry has serious problems involving the ethics of its activities. However, it's highly unlikely that the Vatican's AI initiative will make much of a difference in ensuring an ethical deployment of AI that benefits everyone, rather than just the corporations and governments that exploit AI for economic and political purposes.

First of all, despite talk of collaboration between the Church, academia, and tech companies, the Call for AI Ethics resolution outlines no practical, day-to-day strategy for working towards its wider aims. There's no practical timetable, no scheduled meetings, workshops, conferences, or projects, so it's hard to envisage how the laudable call for more ethical AI will actually be put into practice and implemented.

The Call for AI Ethics is intended more as an abstract incitement to AI companies to work towards ethical AI, rather than a concrete blueprint for how they might actually do this on the ground. This is suggested by Archbishop Vincenzo Paglia, the President of the Pontifical Academy for Life, who signed the Call on behalf of the Vatican.

"The Calls intention is to create a movement that will widen and involve other players: public institutions, NGOs, industries and groups to set a course for developing and using technologies derived from AI," he tells me. "From this point of view, we can say that the first signing of this call is not a culmination, but a starting point for a commitment that appears even more urgent and important than ever before."

Secondly, the six principles themselves are vaguely worded and open to considerable subjective interpretation. Moreover, anyone who's had any recent experience of each of the principles on their own will know that corporations and people conceive of them quite differently.

For example, "privacy" for a company like, say, Facebook is arguably not real privacy. Yes, Facebook can generally perform a reliable job of ensuring that other members of the public don't somehow get to view your Facebook posts and photos. Nonetheless, seeing as how pretty much everything you do on and off Facebook is monitored by Facebook itself, this isn't complete privacy. It's privacy from other people, not from companies.

Analogously, tech companies may in the future be great at ensuring that no cybercriminal hacks into the data their AI algorithms have mined from you. Still, the explosion in the use of AI to mine data will inevitably result in a concomitant explosion of personal data mined by tech corporations and sold off to other corporations. Again, privacy from people, not from companies.

Very similar points could be made about the other principles. In the case of transparency, "explainable AI" generally only works at certain levels of complexity, so that not every aspect of an AI system could be fully transparent and explainable. More fundamentally, tech companies may be able to explain the parameters they've set for their AI models, but not the wider business, commercial, social and even political ramifications these models could have once deployed.

On top of this, some of the principles are basically tautological, to the point of being almost meaningless. The third principle, that of "responsibility," declares that "those who design and deploy the use of AI must proceed with responsibility." Put simply, to be ethical you have to be responsible. Very helpful indeed.

Then there's a deep misapprehension which undermines the substance of two of the other principles, "Impartiality" and "Inclusion." According to the Call for AI Ethics, impartiality dictates that AI developers should "not create or act according to bias." Well, perhaps developers can avoid being deliberately and maliciously biased, but bias is inevitable when designing any kind of AI. That's because developers have to select a certain data set when training their AI models, and they have to select certain factors or parameters that any algorithm will use to process said data. This entails a certain degree of bias. Always. Because an AI can't incorporate all possible data and all possible parameters.
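
A tiny numerical example makes the point: the same trivial "model" (here, just a mean estimate) gives different answers depending on which slice of data the developer happens to sample. All numbers below are invented.

```python
# Toy demonstration of sampling bias: the training data you choose determines the answer.
population = {"group_a": [4.0, 4.2, 3.9, 4.1], "group_b": [2.0, 2.1, 1.9, 2.2]}

sampled_only_a = population["group_a"]                      # a convenient, skewed sample
full_sample = population["group_a"] + population["group_b"]

print(sum(sampled_only_a) / len(sampled_only_a))  # ~4.05: "learned" from group_a alone
print(sum(full_sample) / len(full_sample))        # ~3.05: a different answer from fuller data
```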

In sum, the Vatican's AI principles are too insubstantial and fluffy. But more fatally, they also make the mistake of approaching the whole issue of AI ethics from back-to-front. That is, the problem that really needs to be addressed here is not AI ethics but, rather, the ethics of every company and organisation that seeks to develop and deploy AI, as well as the ethics of the economic and political system in which these companies and organisations operate. Because it's no good obsessing over the transparency and reliability of an AI system if it's going to be used by a company whose business model rests on exploiting workers, or by a military whose main job is killing people.

The Vatican recognises this aspect of the issue, even if the Call for AI Ethics doesn't explicitly address it. Archbishop Vincenzo Paglia tells me, "There is a political dimension to the production and use of artificial intelligence, which has to do with more than the expanding of its individual and purely functional benefits. In other words, it is not enough simply to trust in the moral sense of researchers and developers of devices and algorithms. There is a need to create intermediate social bodies that can incorporate and express the ethical sensibilities of users and educators."

Indeed, if organisations aren't really committed to being ethical in general, then no number of ethical AI initiatives is going to stop them from using AI in unethical ways. And in this respect it's interesting to note the lack of signatories to the Vatican's AI principles. So far, it would seem, the vast majority of the globe's corporations want to use AI for unethical purposes.

That said, Archbishop Paglia confirms that the Vatican is working towards attracting other corporations. "Certainly the work continues," he says. "There are contacts with other companies to create a wide convergence on the contents of the Call. For this we already have an appointment scheduled in exactly one year, for a verification of the work done."

But without a bigger body of signatories, without more detail on the six principles, and without addressing the underlying issues of social, economic and political ethics, the Vatican's Call for AI Ethics isn't likely to achieve much. At the moment, it seems like a glorified PR stunt, one way the Roman Catholic Church can appear relevant, and one way big tech powerhouses like IBM and Microsoft can appear ethical. But let's hope history proves such scepticism wrong.

Link:

Vatican AI Ethics Pledge Will Struggle To Be More Than PR Exercise - Forbes

What is AIOps? Injecting intelligence into IT operations – CIO

Posted: at 6:24 pm

Cloud platforms, managed service providers and organizations undertaking digital transformations are beginning to reap the benefits of an emerging IT trend: the use of AI-powered IT operations technology to monitor and manage the IT portfolio automatically.

This emerging practice, known as AIOps, is helping enterprises head off potential outages and performance issues before they negatively impact operations, customers, and the bottom line. But the more advanced deployments are beginning to use AI systems not just to identify issues, or to predict issues before they happen, but to react to events with intelligent, automated mitigation.

But what exactly is AIOps and how are organizations putting it to use today? Here we take a deeper look at the technologies, strategies, and challenges of AI-assisted IT operations.

AIOps is an emerging IT practice that applies artificial intelligence to IT operations to help organizations intelligently manage infrastructure, networks, and applications for performance, resilience, capacity, uptime, and, in some cases, security. By shifting traditional, threshold-based alerts and manual processes to systems that take advantage of AI and machine learning, AIOps enables organizations to better monitor IT assets and anticipate negative incidents and impacts before they take hold.
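
As a concrete, hedged illustration of that shift, the Python sketch below contrasts a static threshold alert with a toy anomaly check that learns a metric's normal range from its own recent history. Real AIOps platforms use far richer models and telemetry; every name and number here is invented.

```python
# Toy comparison of a fixed-threshold alert vs. a simple learned-baseline anomaly check.
from statistics import mean, pstdev

def static_threshold_alert(cpu_pct, threshold=90.0):
    """Classic rule: alert only when a metric crosses a hard-coded threshold."""
    return cpu_pct > threshold

def adaptive_alert(history, current, z_cutoff=3.0):
    """Alert when the current reading is an outlier relative to recent history."""
    mu, sigma = mean(history), pstdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > z_cutoff

recent_cpu = [22, 25, 24, 23, 26, 24, 25, 23]   # this host's normal utilization (%)
print(static_threshold_alert(70))               # False: below the fixed 90% threshold
print(adaptive_alert(recent_cpu, 70))           # True: far outside this host's norm
```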

Go here to read the rest:

What is AIOps? Injecting intelligence into IT operations - CIO

Will AI replace my job as a CTO? yes and no – Information Age

Posted: at 6:24 pm

Dr Daniel Susskind, author of the bestseller A World Without Work, thinks that CTOs need to think of their roles in terms of tasks and which ones could be done by AI

How will AI impact job functions?

"Will AI replace my job as a CTO?" is one of the most pressing questions any senior technology leader must be asking themselves.

The answer is both yes and no, according to Dr Daniel Susskind, Oxford University economics fellow and author of bestseller A World Without Work, a study of how artificial intelligence is going to displace many white-collar jobs in professions such as law, accounting, education and HR.

What Susskind calls the AI fallacy is projecting onto computers the same kind of human skills that are valued so highly in the professions (empathy, judgement, creativity) and therefore deciding computers fall short.

This misunderstands AI capability, says Susskind, as its pattern recognition and data storage capabilities far outstrip anything humans can do.

Susskind was speaking at the Advanced World conference in Birmingham on March 5th 2020.

Instead of thinking AI is going to completely take over the professions (something we are already seeing in IT departments, with AI chatbots handling lower-level tickets), it would be better to think of any job as a collection of tasks, some of which are better suited for automation than others.

Susskind said: "Any job can be broken down into tasks that are relatively routine and process based. We've got to think from the bottom up in terms of individual tasks. A lot of these tasks turn out to be relatively routine."

And according to a McKinsey survey of 820 occupations in the US, only 5% of occupations consist of activities that are 100% automatable.

The survey also found that 60% of all jobs involved repetitive tasks that could be automated to an extent given current AI.

And in an example of how automation can change the lives of all CTOs and CIOs for the better, a recent Asana survey of over 10,000 workers found they spent only 27% of their time doing what they were supposed to be doing in their job; the rest of their time was spent on robotic and automatable tasks.

Susskind said that the key to unlocking the AI challenge is education, the format of which has not changed for hundreds of years and trains people to do those activities that are going to be displaced. In future, education should focus on two areas, said Susskind: either the kind of roles that machines cannot yet do, or helping to design and build AI machines and putting them to use.

He foresees everybody being in a state of continual education, rather than people mostly cutting off at school-leaving age.

Susskind said: "We don't face mass unemployment. We're going to see more demand for the people who can perform the types of tasks that can't be automated. The challenge as I see it is how do we prepare people for the change in work that's going to come?"

View post:

Will AI replace my job as a CTO? yes and no - Information Age

Unleashing the power of AI for education – MIT Technology Review

Posted: at 6:24 pm

Artificial intelligence (AI) is a major influence on the state of education today, and the implications are huge. AI has the potential to transform how our education system operates, heighten the competitiveness of institutions, and empower teachers and learners of all abilities.

Dan Ayoub is general manager of education at Microsoft.

The opportunities for AI to support education are so broad that recently Microsoft commissioned research on this topic from IDC to understand where the company can help. The findings illustrate the strategic nature of AI in education and highlight the need for technologies and skills to make the promise of AI a reality.

The results showed almost universal acceptance among educators that AI is important for their future: 99.4% said AI would be instrumental to their institution's competitiveness within the next three years, with 15% calling it a game-changer. Nearly all are trying to work with it, too: 92% said they have started to experiment with the technology.

Yet on the other hand, most institutions still lack a formal data strategy or practical measures in place to advance AI capabilities, which remains a key inhibitor. The finding indicates that although the vast majority of leaders understand the need for an AI strategy, they may lack clarity on how to implement one. And it could be that they just don't know where to start.

David Kellermann has become a pioneer in how to use AI in the classroom. At the University of New South Wales in Sydney, Australia, Kellermann has built a question bot capable of answering questions on its own or delivering video of past lectures. The bot can also flag student questions for teaching assistants (TAs) to follow up. What's more, it keeps getting better at its job as it's exposed to more and different questions over time.

Kellermann began his classroom's transformation with a single Surface laptop. He's also employed out-of-the-box systems like Microsoft Teams to foster collaboration among his students. Kellermann used the Microsoft Power Platform to build the question bot, and he's also built a dashboard using Power BI that plots the class's exam scores and builds personalized study packs based on students' past performance.

Educators see AI as instrumental to their institutions' competitiveness, yet most institutions still lack a formal data strategy to advance AI.

Kellermann's project illustrates a key principle for organizations in nearly every industry when it comes to working with AI and machine learning: knowing where to start, starting small, and adding to your capabilities over time. The potential applications of AI are so vast, even the most sophisticated organizations can become bogged down trying to do too much, too soon. Often, it comes down to simply having a small goal and building from there.

As an AI initiative gradually grows and becomes more sophisticated, it's also important to have access to experts who can navigate technology and put the right systems in place. To gain a foothold with AI, institutions need tools, technologies, and skills.

This is a big focus for our work at Microsoft: to support educational institutions and classrooms. We've seen the strides some institutions have already taken to bring the potential of AI technologies into the classroom. But we also know there is much more work to do. Over the next few years, AI's impact will be felt in several ways: managing operations and processes, data-driven programs to increase effectiveness, saving energy with smart buildings, and creating a modern campus with a secure and safe learning environment.

But its most important and far-reaching impact may lie in AI's potential to change the way teachers teach and students learn, helping maximize student success and prepare them for the future.

Collective intelligence tools will be available to save teachers time with tasks like grading papers so teachers and TAs can spend more time with students. AI can help identify struggling students through behavioral cues and give them a nudge in the right direction.

AI can also help educators foster greater inclusivity: AI-based language translation, for example, can enable more students with diverse backgrounds to participate in a class or listen to a lecture. Syracuse University's School of Information Studies is working to drive experiential learning for students while also helping solve real-world problems, such as Our Ability, a website that helps people with disabilities get jobs.

AI has the power to become an equalizer in education and a key differentiator for institutions that embrace it.

Schools can even use AI to offer a truly personalized learning experience, overcoming one of the biggest limitations of our modern, one-to-many education model. Kellermann's personalized learning system in Sydney shows that the technology is here today.

AI has the power to become a great equalizer in education and a key differentiator for institutions that embrace it. Schools that adopt AI in clever ways are going to show better student success and empower their learners to enter the work force of tomorrow.

Given its importance, institutions among that 92% should start thinking about the impact they can achieve with AI technologies now. Do you want to more quickly grade papers? Empower teachers to spend more time with students? Whatever it is, it's important to have that goal in mind, and then maybe dream a little.

This is a movement still in its early days, and there is an opportunity for institutions to learn from one another. As our customers build out increasingly sophisticated systems, Microsoft is learning and innovating along with them, helping build out the tools, technologies, and services to turn the vision for AI into reality.

Read more from the original source:

Unleashing the power of AI for education - MIT Technology Review

AI, AI, Captain! How the Mayflower Autonomous Ship will cross the Atlantic – VentureBeat

Posted: at 6:24 pm

While self-driving cars have hogged the headlines for the past few years, other forms of autonomous transport are gaining steam.

This month, IBM and Promare, a U.K.-based marine research and exploration charity, will trial a prototype of an artificial intelligence (AI)-powered maritime navigation system ahead of a September 6th venture to send a crewless ship across the Atlantic Ocean on the very same route the original Mayflower traversed 400 years ago.

The original Mayflower ship, which in 1620 carried the first English settlers to the U.S., traveled from Plymouth in the U.K. to what is today known as Plymouth, Massachusetts. Mayflower version 1.0 was a square-rigged sail ship, like many merchant vessels of the era, and relied purely on wind and human navigation techniques to find its way to the New World. The Mayflower Autonomous Ship (MAS), on the other hand, will be propelled by a combination of solar- and wind-generated power, with a diesel generator on board as backup.

Moreover, while the first Mayflower traveled at a maximum speed of around 2.5 knots and took some two months to reach its destination, the upgraded version moves at a giddy 20 knots and should arrive in less than two weeks.

The mission, first announced back in October, aims to tackle all the usual obstacles that come with navigating a ship through treacherous waters, except without human intervention.

The onboard AI Captain, as it's called, can't always rely on GPS and satellite connectivity, and speed is integral to processing real-time data. This is why all the AI and navigational smarts must be available locally, making edge computing pivotal to the venture's success.

"Edge computing is critical to making an autonomous ship like the Mayflower possible," noted Rob High, IBM's CTO for edge computing. "The ship needs to sense its environment, make smart decisions about the situation, and then act on these insights in the minimum amount of time, even in the presence of intermittent connectivity, and all while keeping data secure from cyberthreats."

The team behind the new Mayflower has been training the ship's AI models for the past few years, using millions of maritime images collected from cameras in the Plymouth Sound, in addition to other open source data sets.

For machine learning prowess, the ship is using an IBM Power AC922 system, which is used in some of the world's biggest AI supercomputers. Alongside IBM's PowerAI Vision, the Mayflower's AI Captain is built to detect and identify ships and buoys, as well as other hazards including debris, and to make decisions about what to do next.

For example, if the MAS encounters a cargo ship that has shed some of its load after colliding with another vessel, the AI Captain will be called into action and can use any combination of onboard sensors and software to circumvent the obstacles. The radar can detect hazards in the water ahead, with cameras providing additional visual data on objects in the water.

Moreover, an automatic identification system (AIS) can tap into specific information about any vessels ahead, including their class, weight, speed, cargo type, and so on. Radio broadcast warnings from the cargo ship can also be accepted and interpreted, with the AI Captain ready to decide on a change of course.

Other data the AI Captain can tap into includes the navigation system and nautical chart server, which provide the current location, speed, course, and route of the ship, as well as attitude sensors for monitoring the state of the sea and a fathometer for water depth.

The onboard vehicle management system also provides crucial data, such as the battery charge level and power consumption, that can be used to determine the best route around a hazardous patch of ocean, with weather forecasts informing the final decision.

Crucially, the AI Captain can communicate vocally with other ships in the vicinity to convey any change in plans.
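
Pulling those inputs together, here is a hedged Python sketch of how a rule-based captain might turn fused sensor and vehicle data into a coarse maneuver decision. IBM's actual AI Captain is far more sophisticated; the field names, thresholds, and rules below are invented for illustration.

```python
# Illustrative decision step over fused inputs (radar, camera, AIS, vehicle management);
# not IBM's AI Captain.
def decide_action(radar_contacts, camera_objects, ais_vessels, battery_pct, sea_state):
    """Return a coarse maneuver decision from fused sensor and vehicle data."""
    hazards = [c for c in radar_contacts if c["range_nm"] < 1.0]
    hazards += [o for o in camera_objects if o["type"] in ("debris", "container")]

    if hazards:
        return "alter course to avoid hazard"
    if any(v["closest_approach_nm"] < 0.5 for v in ais_vessels):
        return "give way to nearby vessel"
    if battery_pct < 20 or sea_state >= 6:
        return "reduce speed and reroute to conserve power"
    return "maintain course and speed"

print(decide_action(
    radar_contacts=[{"range_nm": 0.6}],
    camera_objects=[],
    ais_vessels=[{"closest_approach_nm": 2.0}],
    battery_pct=80,
    sea_state=3,
))  # -> alter course to avoid hazard
```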

The MAS ship itself is still being constructed in Gdansk, Poland, and the AI Captain will be tested this month in a manned research ship called the Plymouth Quest, which is owned by the U.K.'s Plymouth Marine Laboratory. The test will essentially determine how the AI Captain performs in real-world scenarios, and feedback will be used to refine the main vessel's machine learning smarts before the September launch.

Maritime transport constitutes around 90% of global trade, as it's the most cost-effective way of transporting goods in bulk. But shipping is widely regarded as a major source of pollution for the planet. Like self-driving cars, a major benefit of electrified autonomous ships is that they reduce emissions while also promising fewer accidents: at least three quarters of maritime accidents are thought to be caused by human error.

Moreover, crewless ships open the doors to longer research missions, as food, well-being, and salaries are no longer logistical or budgetary considerations.

There has been a push toward fully automating sea-faring transport in recent years. Back in 2016, news emerged that an unmanned warship called Sea Hunter was being developed by research agency DARPA, which passed the Sea Hunter prototype on to the Office of Naval Research two years later for further iteration. In Norway, a crewless cargo ship called the Yara Birkeland has also been in development for the past few years and is expected to go into commercial operation later in 2020. And the Norwegian University of Science and Technology (NTNU) has carried out trials of a tiny electric driverless passenger ferry.

Elsewhere, Rolls-Royce previously demonstrated a fully autonomous passenger ferry in Finland and announced a partnership with Intel as part of a grand plan to bring self-guided cargo ships to the world's seas by 2025.

So plenty is happening in the self-navigating ship sphere: a recent report from Allied Research pegged the industry at $88 billion today, and it could hit $130 billion within a decade. But while others seek to automate various aspects of a ship's journey, the new Mayflower is designed to be completely self-sufficient and operate without any direct human intervention.

"Many of today's autonomous ships are really just automated robots [that] do not dynamically adapt to new situations and rely heavily on operator override," said Don Scott, CTO of the Mayflower Autonomous Ship. "Using an integrated suite of IBM's AI, cloud, and edge technologies, we are aiming to give the Mayflower full autonomy and are pushing the boundaries of what's currently possible."

Four centuries after the Mayflower carried the Pilgrims across the Atlantic, we could be entering a whole new era of maritime adventures.

Follow this link:

AI, AI, Captain! How the Mayflower Autonomous Ship will cross the Atlantic - VentureBeat
