AI and machine learning is not the future, it’s the present – Eyes on APAC – ComputerWeekly.com

This is a guest post by Raju Vegesna, chief evangelist at Zoho

For many, artificial intelligence (AI) is a distant and incomprehensible concept associated only with science fiction movies or high-tech laboratories.

In reality, however, AI and machine learning are already changing the world we know. From TVs and toothbrushes to real-time digital avatars that interact with humans, the recent CES show demonstrated how widespread AI is becoming in everyday life.

The same can be said of the business community, with the latest Gartner research revealing that 37% of organisations have implemented some form of AI or machine learning.

So far, these technologies have largely been adopted and implemented more by larger organisations with the resources and expertise to seamlessly integrate them into their business. But technology has evolved significantly in recent years, and SaaS (software as a service) providers now offer integrated technology and AI that meet the needs and budgets of small and medium businesses too.

Here are a few evolving trends in AI and machine learning that businesses of all sizes could capitalise on in 2020 and beyond.

The enterprise software marketplace is expanding rapidly. More vendors are entering the market, often with a growing range of solutions, which creates confusion for early adopters of the technology. Integrating new technologies from a range of different vendors can be challenging, even for large enterprise organisations.

So, in 2020 and beyond, the businesses that will make the most of AI and machine learning are the ones implementing single-vendor technology platforms. It's a challenge to work with data that is scattered across different applications using different data models, but organisations that consolidate all their data in one integrated platform will find it much easier to feed that data into a machine learning algorithm.

After all, the more data that's available, the more powerful your AI and machine learning models will be. By capitalising on the wealth of data supplied by integrated software platforms, advanced business applications will be able to answer our questions or help us navigate interfaces. If you're a business owner planning to use AI and machine learning in 2020, then the single-vendor strategy is the way to go.

Technology has advanced at such a rate that businesses no longer need to change how they work to fit the technology; increasingly, the technology adapts to them. This type of hyper-personalisation increases productivity for business software users and will continue to be a prime focus for businesses in 2020.

Take, for example, the rise of algorithmic social media timelines we have seen in the last few years. For marketers, AI and machine learning mean personalisation is becoming more and more sophisticated, allowing businesses to supercharge and sharpen their focus on their customers. Companies which capture insights to create personalised customer experiences and accelerate sales will likely win in 2020.

With AI and machine learning, vast amounts of data are processed every second of the day. In 2020, one of the significant challenges faced by companies implementing AI and machine learning is data cleansing: the process of detecting, correcting or removing corrupt or inaccurate records from a data set.

Smaller organisations can begin to expect AI functionality in everyday software like spreadsheets, where they'll be able to parse information out of addresses or clean up inconsistencies. Larger organisations, meanwhile, will benefit from AI that ensures their data is more consumable for analytics or prepares it for migration from one application to another.

Businesses can thrive with the right content and strategic, innovative marketing. Consider auto-tagging, which could soon become the norm. Smartphones can recognise and tag objects in your photos, making your photo library much more searchable. We'll start to see business applications auto-tag information to make it much more accessible.

Thanks to AI, customer relationship management (CRM) systems will continue to be a fantastic and always-advancing channel through which businesses can market to their customers. Today, businesses can find their top customers in a CRM system by running a report and sorting by revenue or sales. In the coming years, businesses will be able to search for top customers, and their CRM systems will know what they're looking for.

With changing industry trends and demands, it's important for all businesses to use the latest technology to create a positive impact on their operations. In 2020 and beyond, AI and machine learning will support businesses by helping them reduce manual labour and enhance productivity.

While some businesses, particularly small businesses, might be apprehensive about AI, it is a transformation that is bound to bring along a paradigm shift for those that are ready to take a big step towards a technology-driven future.


AI could help with the next pandemic, but not with this one – MIT Technology Review

It was an AI that first saw it coming, or so the story goes. On December 30, an artificial-intelligence company called BlueDot, which uses machine learning to monitor outbreaks of infectious diseases around the world, alerted clients, including various governments, hospitals, and businesses, to an unusual bump in pneumonia cases in Wuhan, China. It would be another nine days before the World Health Organization officially flagged what we've all come to know as Covid-19.

BlueDot wasn't alone. An automated service called HealthMap at Boston Children's Hospital also caught those first signs. As did a model run by Metabiota, based in San Francisco. That AI could spot an outbreak on the other side of the world is pretty amazing, and early warnings save lives.


But how much has AI really helped in tackling the current outbreak? That's a hard question to answer. Companies like BlueDot are typically tight-lipped about exactly who they provide information to and how it is used. And human teams say they spotted the outbreak the same day as the AIs. Other projects in which AI is being explored as a diagnostic tool or used to help find a vaccine are still in their very early stages. Even if they are successful, it will take time, possibly months, to get those innovations into the hands of the health-care workers who need them.

The hype outstrips the reality. In fact, the narrative that has appeared in many news reports and breathless press releases, that AI is a powerful new weapon against diseases, is only partly true and risks becoming counterproductive. For example, too much confidence in AI's capabilities could lead to ill-informed decisions that funnel public money to unproven AI companies at the expense of proven interventions such as drug programs. It's also bad for the field itself: overblown expectations followed by disappointment have led to a crash of interest in AI, and a consequent loss of funding, more than once in the past.

So here's a reality check: AI will not save us from the coronavirus, certainly not this time. But there's every chance it will play a bigger role in future epidemics, if we make some big changes. Most won't be easy. Some we won't like.

There are three main areas where AI could help: prediction, diagnosis, and treatment.

Prediction

Companies like BlueDot and Metabiota use a range of natural-language processing (NLP) algorithms to monitor news outlets and official health-care reports in different languages around the world, flagging whether they mention high-priority diseases, such as coronavirus, or more endemic ones, such as HIV or tuberculosis. Their predictive tools can also draw on air-travel data to assess the risk that transit hubs might see infected people either arriving or departing.
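BlueDot's and Metabiota's pipelines are proprietary, so the following is only a rough sketch of the general idea: scan a stream of news headlines for mentions of high-priority diseases and flag locations where those mentions cluster. The keyword list, alert threshold, and sample headlines are assumptions invented for the illustration.

    # Illustrative sketch only; real outbreak-monitoring systems use far richer
    # NLP, many languages, and many more sources.
    from collections import Counter

    HIGH_PRIORITY = {"coronavirus", "pneumonia", "sars", "mers", "ebola"}

    def flag_locations(headlines, alert_threshold=5):
        """Count high-priority disease mentions per location and flag unusual clusters."""
        mentions = Counter()
        for location, text in headlines:          # (location, headline text) pairs
            lowered = text.lower()
            if any(term in lowered for term in HIGH_PRIORITY):
                mentions[location] += 1
        return [loc for loc, count in mentions.items() if count >= alert_threshold]

    sample = [("Wuhan", "Cluster of novel pneumonia cases reported"),
              ("Wuhan", "Hospitals report unusual rise in pneumonia admissions")] * 3
    print(flag_locations(sample))   # ['Wuhan']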

The results are reasonably accurate. For example, Metabiota's latest public report, on February 25, predicted that on March 3 there would be 127,000 cumulative cases worldwide. It overshot by around 30,000, but Mark Gallivan, the firm's director of data science, says this is still well within the margin of error. It also listed the countries most likely to report new cases, including China, Italy, Iran, and the US. Again: not bad.


Others keep an eye on social media too. Stratifyd, a data analytics company based in Charlotte, North Carolina, is developing an AI that scans posts on sites like Facebook and Twitter and cross-references them with descriptions of diseases taken from sources such as the National Institutes of Health, the World Organisation for Animal Health, and the global microbial identifier database, which stores genome sequencing information.

Work by these companies is certainly impressive. And it goes to show how far machine learning has advanced in recent years. A few years ago Google tried to predict outbreaks with its ill-fated Flu Trends service, which was shelved in 2013 when it failed to predict that year's flu spike. What changed? It mostly comes down to the ability of the latest software to listen in on a much wider range of sources.

Unsupervised machine learning is also key. Letting an AI identify its own patterns in the noise, rather than training it on preselected examples, highlights things you might not have thought to look for. "When you do prediction, you're looking for new behavior," says Stratifyd's CEO, Derek Wang.
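Stratifyd has not published its models, so purely as a minimal sketch of the unsupervised idea Wang describes: cluster social-media posts by their text so that related posts group together without any pre-labeled examples. The sample posts and cluster count below are assumptions, and the snippet requires scikit-learn.

    # Minimal unsupervised-learning sketch: group posts without labels so that
    # unexpected clusters (e.g. a burst of illness-related chatter) can surface.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    posts = [
        "Lots of people in my city have a dry cough and fever",
        "Hospital near me is overwhelmed with pneumonia patients",
        "Great deal on flights to Bangkok this week",
        "My whole office called in sick with a fever today",
        "Cheap hotel rooms available downtown this weekend",
    ]

    vectors = TfidfVectorizer(stop_words="english").fit_transform(posts)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

    for label, post in zip(labels, posts):
        print(label, post)   # illness-related posts tend to fall into one cluster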

But what do you do with these predictions? The initial prediction by BlueDot correctly pinpointed a handful of cities in the virus's path. This could have let authorities prepare, alerting hospitals and putting containment measures in place. But as the scale of the epidemic grows, predictions become less specific. Metabiota's warning that certain countries would be affected in the following week might have been correct, but it is hard to know what to do with that information.

What's more, all these approaches will become less accurate as the epidemic progresses, largely because reliable data of the sort that AI needs to feed on has been hard to get about Covid-19. News sources and official reports offer inconsistent accounts. There has been confusion over symptoms and how the virus passes between people. The media may play things up; authorities may play things down. And predicting where a disease may spread from hundreds of sites in dozens of countries is a far more daunting task than making a call on where a single outbreak might spread in its first few days. "Noise is always the enemy of machine-learning algorithms," says Wang. Indeed, Gallivan acknowledges that Metabiota's daily predictions were easier to make in the first two weeks or so.

One of the biggest obstacles is the lack of diagnostic testing, says Gallivan. "Ideally, we would have a test to detect the novel coronavirus immediately and be testing everyone at least once a day," he says. We also don't really know what behaviors people are adopting (who is working from home, who is self-quarantining, who is or isn't washing hands) or what effect it might be having. If you want to predict what's going to happen next, you need an accurate picture of what's happening right now.

It's not clear what's going on inside hospitals, either. Ahmer Inam at Pactera Edge, a data and AI consultancy, says prediction tools would be a lot better if public health data wasn't locked away within government agencies as it is in many countries, including the US. This means an AI must lean more heavily on readily available data like online news. "By the time the media picks up on a potentially new medical condition, it is already too late," he says.

But if AI needs much more data from reliable sources to be useful in this area, strategies for getting it can be controversial. Several people I spoke to highlighted this uncomfortable trade-off: to get better predictions from machine learning, we need to share more of our personal data with companies and governments.

Darren Schulte, an MD and CEO of Apixio, which has built an AI to extract information from patients' records, thinks that medical records from across the US should be opened up for data analysis. This could allow an AI to automatically identify individuals who are most at risk from Covid-19 because of an underlying condition. Resources could then be focused on those people who need them most. The technology to read patient records and extract life-saving information exists, says Schulte. The problem is that these records are split across multiple databases and managed by different health services, which makes them harder to analyze. "I'd like to drop my AI into this big ocean of data," he says. "But our data sits in small lakes, not a big ocean."

Health data should also be shared between countries, says Inam: "Viruses don't operate within the confines of geopolitical boundaries." He thinks countries should be forced by international agreement to release real-time data on diagnoses and hospital admissions, which could then be fed into global-scale machine-learning models of a pandemic.

Of course, this may be wishful thinking. Different parts of the world have different privacy regulations for medical data. And many of us already balk at making our data accessible to third parties. New data-processing techniques, such as differential privacy and training on synthetic data rather than real data, might offer a way through this debate. But this technology is still being finessed. Finding agreement on international standards will take even more time.

For now, we must make the most of what data we have. Wang's answer is to make sure humans are around to interpret what machine-learning models spit out, making sure to discard predictions that don't ring true. "If one is overly optimistic or reliant on a fully autonomous predictive model, it will prove problematic," he says. AIs can find hidden signals in the data, but humans must connect the dots.

Early diagnosis

As well as predicting the course of an epidemic, many hope that AI will help identify people who have been infected. AI has a proven track record here. Machine-learning models for examining medical images can catch early signs of disease that human doctors miss, from eye disease to heart conditions to cancer. But these models typically require a lot of data to learn from.

A handful of preprint papers have been posted online in the last few weeks suggesting that machine learning can diagnose Covid-19 from CT scans of lung tissue if trained to spot telltale signs of the disease in the images. Alexander Selvikvåg Lundervold at the Western Norway University of Applied Sciences in Bergen, Norway, who is an expert on machine learning and medical imaging, says we should expect AI to be able to detect signs of Covid-19 in patients eventually. But it is unclear whether imaging is the way to go. For one thing, physical signs of the disease may not show up in scans until some time after infection, making it not very useful as an early diagnostic.


What's more, since so little training data is available so far, it's hard to assess the accuracy of the approaches posted online. Most image recognition systems, including those trained on medical images, are adapted from models first trained on ImageNet, a widely used data set encompassing millions of everyday images. "To classify something simple that's close to ImageNet data, such as images of dogs and cats, can be done with very little data," says Lundervold. "Subtle findings in medical images, not so much."

That's not to say it won't happen, and AI tools could potentially be built to detect early stages of disease in future outbreaks. But we should be skeptical about many of the claims of AI doctors diagnosing Covid-19 today. Again, sharing more patient data will help, and so will machine-learning techniques that allow models to be trained even when little data is available. For example, few-shot learning, where an AI can learn patterns from only a handful of results, and transfer learning, where an AI already trained to do one thing can be quickly adapted to do something similar, are promising advances, but still works in progress.
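Transfer learning of the kind described above is usually only a few lines of code with a modern framework. The snippet below is a generic sketch, not the code from any of the Covid-19 preprints: it takes an ImageNet-pretrained ResNet from torchvision, freezes it, and swaps in a new two-class head that can then be fine-tuned on a small medical-imaging dataset. The class count and learning rate are placeholder assumptions.

    # Generic transfer-learning sketch (PyTorch/torchvision).
    import torch.nn as nn
    import torch.optim as optim
    from torchvision import models

    model = models.resnet18(pretrained=True)      # weights learned on ImageNet
    for param in model.parameters():
        param.requires_grad = False               # freeze the pretrained backbone

    # Replace the 1000-class ImageNet head with a 2-class head (finding vs. no finding).
    model.fc = nn.Linear(model.fc.in_features, 2)

    # During fine-tuning, only the new head's parameters are updated.
    optimizer = optim.Adam(model.fc.parameters(), lr=1e-3)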

Cure-all

Data is also essential if AI is to help develop treatments for the disease. One technique for identifying possible drug candidates is to use generative design algorithms, which produce a vast number of potential results and then sift through them to highlight those that are worth looking at more closely. This technique can be used to quickly search through millions of biological or molecular structures, for example.
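The generate-and-sift loop itself is simple to express; what makes real systems hard is the chemistry and the learned scoring models. Below is a deliberately toy sketch of the pattern, in which the candidate encoding and the scoring function are placeholders rather than anything biologically meaningful.

    # Toy generate-and-filter sketch: produce many candidates, score them with a
    # surrogate model, and keep only the most promising for expensive follow-up.
    import random

    ALPHABET = "ACDEFGHIKLMNPQRSTVWY"              # e.g. amino-acid letters

    def generate_candidate(length=12):
        return "".join(random.choice(ALPHABET) for _ in range(length))

    def surrogate_score(candidate):
        # Stand-in for a learned model predicting efficacy or binding affinity.
        return (sum(ord(c) for c in candidate) % 100) / 100.0

    candidates = [generate_candidate() for _ in range(100_000)]
    shortlist = sorted(candidates, key=surrogate_score, reverse=True)[:20]
    print(shortlist[:3])   # the handful passed to scientists for assessment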

SRI International is collaborating on such an AI tool, which uses deep learning to generate many novel drug candidates that scientists can then assess for efficacy. This is a game-changer for drug discovery, but it can still take many months before a promising candidate becomes a viable treatment.

In theory, AIs could be used to predict the evolution of the coronavirus too. Inam imagines running unsupervised learning algorithms to simulate all possible evolution paths. You could then add potential vaccines to the mix and see if the viruses mutate to develop resistance. "This will allow virologists to be a few steps ahead of the viruses and create vaccines in case any of these doomsday mutations occur," he says.

It's an exciting possibility, but a far-off one. We don't yet have enough information about how the virus mutates to be able to simulate it this time around.

In the meantime, the ultimate barrier may be the people in charge. "What I'd most like to change is the relationship between policymakers and AI," says Wang. AI will not be able to predict disease outbreaks by itself, no matter how much data it gets. "Getting leaders in government, businesses, and health care to trust these tools will fundamentally change how quickly we can react to disease outbreaks," he says. But that trust needs to come from a realistic view of what AI can and cannot do now, and what might make it better next time.

Making the most of AI will take a lot of data, time, and smart coordination between many different people. All of which are in short supply right now.


AI Is Coming for Your Most Mind-Numbing Office Tasks – WIRED

In 2018, the New York Foundling, a charity that offers child welfare, adoption, and mental health services, was stuck in cut-and-paste hell.

Clinicians and admin staff were spending hours transferring text between different documents and databases to meet varied legal requirements. Arik Hill, the charity's chief information officer, blames the data entry drudgery for an annual staff turnover of 42 percent at the time. "We are not a very glamorous industry," says Hill. "We are really only just moving on from paper clinical records."

Since then, the New York Foundling has automated much of this grunt work using what are known as software robots: simple programs hand-crafted to perform dull tasks. Often, the programs are built by recording and mimicking a user's keystrokes, such as copying a field of text from one database and pasting it into another, eliminating hours of repetitive-stress-inducing work.
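UiPath-style bots are typically built by recording clicks and keystrokes rather than by writing code, but the job they automate away is essentially the one sketched below: read a field from one system and paste it into another. The database layout and field names are invented for the example.

    # Sketch of the copy-and-paste drudgery a software robot replaces: move every
    # record from a "source" system into a "target" reporting system.
    import sqlite3

    # Tiny stand-in source system (a real deployment would point at existing databases).
    source = sqlite3.connect(":memory:")
    source.execute("CREATE TABLE notes (client_id TEXT, note TEXT)")
    source.execute("INSERT INTO notes VALUES ('C-001', 'Initial intake completed')")

    target = sqlite3.connect(":memory:")
    target.execute("CREATE TABLE reports (client_id TEXT, note TEXT)")

    # The "robot": copy each field from one system and paste it into the other.
    for client_id, note in source.execute("SELECT client_id, note FROM notes"):
        target.execute("INSERT INTO reports VALUES (?, ?)", (client_id, note))
    target.commit()

    print(target.execute("SELECT * FROM reports").fetchall())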

"It was mind-blowing," says Hill, who says turnover has fallen to 17 percent.

To automate the work, the New York Foundling got help from UiPath, a so-called robotic process automation company. That project didn't require any real machine intelligence.

But in January, UiPath began upgrading its army of software bots to use powerful new artificial intelligence algorithms. It thinks this will let them take on more complex and challenging tasks, such as transcription or sorting images, across more offices. Ultimately, the company hopes software robots will gradually learn how to automate repetitive work for themselves.

In other words, if artificial intelligence is going to disrupt white-collar work, then this may be how it begins.

"When paired with robotic process automation, AI significantly expands the number and types of tasks that software robots can perform," says Tom Davenport, a professor who studies information technology and management at Babson College.

Consider a company that needs to summarize long-winded, handwritten notes. AI algorithms that perform character recognition and natural language processing could read the cursive and summarize the text, before a software robot inputs the text into, say, a website. The latest version of UiPath's software includes a range of off-the-shelf machine learning tools. It is also now possible for users to add their own machine learning models to a robotic process.
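As a hedged sketch of that pipeline (UiPath ships its own models; pytesseract and the crude word-frequency summarizer below are just stand-ins), the flow might look like this, assuming a scanned note saved as handwritten_note.png and a local Tesseract installation:

    # Character recognition followed by a naive extractive summary; a software
    # robot could then paste the result into a web form.
    from collections import Counter
    import pytesseract
    from PIL import Image

    def summarize(text, max_sentences=2):
        sentences = [s.strip() for s in text.split(".") if s.strip()]
        freq = Counter(w.lower() for w in text.split())
        # Keep the sentences containing the most frequent words.
        ranked = sorted(sentences, reverse=True,
                        key=lambda s: sum(freq[w.lower()] for w in s.split()))
        return ". ".join(ranked[:max_sentences])

    text = pytesseract.image_to_string(Image.open("handwritten_note.png"))
    print(summarize(text))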

With all the AI hype, it's notable that so little has found its way into modern offices. But the automation that is there, which simply repeats a person's clicking and typing, is still useful. The technology is mostly used by banks, telcos, insurers, and other companies with legacy systems; market researcher Gartner estimates the industry generated roughly $1.3 billion in revenue in 2019.


Simple software automation is eliminating some particularly repetitive jobs, such as basic data entry, which are often already done overseas. In call centers, fewer people are needed to fill out forms if software can be programmed to open the right documents, find the right fields, and enter text. At the New York Foundling, Hill's software allowed him to redirect eight workers to other tasks.

But Davenport says software robots that use AI could displace more jobs, especially if we head into a recession. "Companies will use it for substantial headcount and cost reductions," he says.

Erik Brynjolfsson, director of the MIT Initiative on the Digital Economy and the author of several books exploring the impact of technology on the workforce, says robotic process automation will mostly affect middle-skilled office workers, meaning admin work that requires some training.

But it won't happen overnight. He says it took many years for simple software robots, which are essentially descended from screen-scrapers and simple coding tools, to affect office work. "The lesson is just how long it takes for even a relatively simple technology to have an impact on business, because of the hard work it takes to implement it reliably in complex environments," Brynjolfsson notes.


The Impact of Python: How It Could Rule the AI World? – insideBIGDATA

Hold your head up high! The rise of artificial intelligence (AI) and machine learning (ML) is poised to bring a new era of civilization, not destroy it.

Yet, there's fear that technology will displace current workers or tasks, and that's partly true. As researchers predict, the speed at which AI is replacing jobs is bound to skyrocket, impacting the jobs of workers such as factory workers, accountants, radiologists, paralegals, and truckers.

A shuffling and transformation of jobs across the workforce is already being witnessed, thanks to this technological epoch.

But hey, we're still far from Terminator.

What are the odds?

The fear is understandable: perhaps it is only a matter of time before AI and automation replace the jobs of millions of tech professionals. A 2018 report by the World Economic Forum suggested that around 75 million jobs will be displaced due to automation and AI in the next five years. The good news is that even though many jobs will be replaced, 133 million newer job roles, including for AI engineers and AI experts, will be created at the same time.

Simply said, within the next five years there will be a net gain of about 58 million new job roles in the field of AI.

Instead of worrying about AI and automation stealing your job, you should be considering how you need to reshape your career.

AI and ML in the workplace: How prepared are you for the impact?

AI and machine learning projects are now leading every industry and sector into the future of technological advancements. The question is, what are the best ways for you to bring these experiences into reality? What are the programming languages that can be used for machine learning and AI?

Think ahead: you can start by considering Python for machine learning and AI.

But why Python?

Python is a foundational language for AI. However, AI projects differ from traditional software projects, so it is necessary to dive deeper into the subject. The crux of building an AI career is learning Python, a programming language loved by many because it is both stable and flexible. It is now widely used for machine learning applications and has become one of the best choices across industries.

Here, we will list the reasons why Python is the programming language most preferred by AI experts today:

Huge bundle of libraries/frameworks

It is often a tricky task to choose what best fits when running an ML or AI algorithm. It is crucial to have the right set of libraries and a well-structured environment for developers to come up with the best coding solution.

To ease their development time, most developers rely on Python libraries and frameworks. A software library contains pre-written code that developers can look up to solve programming challenges. This is where Python's extensive set of pre-existing libraries plays a major role, providing developers with a set of libraries and frameworks to choose from, a few of which, such as NumPy, SciPy, and Matplotlib, come up below.

With these solutions, it becomes easier for developers to build products faster. The development team does not need to waste time searching for libraries; they can always use an existing one and build further changes on top of it, as the short example below shows.
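As a quick illustration of the point, with scikit-learn's bundled iris dataset a working classifier takes only a few lines; the dataset and model choice here are arbitrary.

    # With an existing library, training and evaluating a model is a few lines.
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)
    print(f"Test accuracy: {model.score(X_test, y_test):.2f}")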

Holds a strong community and wide popularity

According to the Stack Overflow developer survey (2018), Python was among the most popular programming languages amongst developers. This simply means that for every job you seek in the market, AI will always be one of the skill sets that employers look to hire for.

It is also seen that there are more than 140,000 online repositories that have custom-built Python software packages. For instance, Python libraries such as SciPy, NumPy, and Matplotlib can easily be installed in a program that runs on Python.

Python was also pointed out to be 2019's eighth fastest-growing programming language, with a growth rate of 151% year on year.

Now, these packages used in machine learning help AI engineers detect patterns in large datasets. Python's popularity is so widely known that even Google uses the language to crawl web pages. Pixar, the animation studio, uses it to produce movies. Surprisingly, even Spotify uses Python for song recommendations.

Within the past few years, Python has grown its community worldwide. You can find multiple platforms and forums where machine learning solutions are shared. For every problem you've faced, you'll almost always find that someone has already been through the same problem, so it is easy to find solutions and guidance through this community.

Platform-independent

Platform independence simply means that a programming language or framework allows developers to implement things on one machine and use the same code on another machine without changing anything further. The best factor about Python is that it is platform-independent and is supported by several platforms such as Windows, macOS, and Linux.

Python code can also be packaged into a standalone program that is executable on most operating systems without even needing a Python interpreter.

Simple and most loved programming language

Python is said to be one of the simplest and most consistent programming languages, offering readable code. While machine learning involves complex algorithms, Python's concise and readable syntax allows AI professionals to write systems that are reliable. This lets developers solve complex machine learning problems instead of dealing with the technical issues of the language.

So far, Python is regarded as one of the easiest languages for developers to learn. Some say Python is intuitive compared with other programming languages, while others believe it is the number of libraries Python offers that makes it suitable for all developers to use.

In conclusion

Python's power and ease of use have catapulted it into becoming one of the core languages for machine learning solutions. Moreover, AI and ML have been among the biggest innovations since the launch of the microchip, and developing a career in this realm will pave a way toward the future of tomorrow.

About the Author

Michael Lyam is a writer, AI researcher, business strategist, and top contributor on Medium. He is passionate about technology and is inspired to find new ways to create captivating content. Michael's areas of expertise are AI, machine learning, data science, and business strategy.



This is how the CDC is trying to forecast coronavirus's spread – MIT Technology Review

Every year the US Centers for Disease Control and Prevention holds a competition to see who can accurately forecast the flu. Research teams around the country vie with different methods, and the best performers win funding and a partnership with the agency to improve the nation's preparation for the next season.

Now the agency is tapping several dozen teams to adapt their techniques to forecast the spread of the coronavirus in an effort to make more informed decisions. Among them is a group at Carnegie Mellon University that, over the last five years, has consistently achieved some of the best results. Last year, the group was designated one of two National Centers of Excellence for Influenza Forecasting and asked to lead the design of a community-wide forecasting process.

Roni Rosenfeld, head of the group and of CMU's machine-learning department, admits he was initially reluctant to take on the coronavirus predictions. To a layperson, it doesn't seem as if forecasting the two diseases should be so different, but doing so for the novel outbreak is significantly harder. Rosenfeld worried about whether his predictions would be accurate, and thus whether they would even be useful. In the end, he was convinced to forge ahead anyway.

"People act on the basis of forecasting models, whether they are on paper or in their heads," he says. "You're better off quantifying these estimations so you can discuss them rationally as opposed to making them based on intuition."


The lab uses three methods to pinpoint the rise and fall of cases during flu season. The first is what's known as a nowcast: a prediction of the current number of people infected. The lab gathers recent and historical data from the CDC and other partner organizations, including flu-related Google searches, Twitter activity, and web traffic on the CDC, medical sites, and Wikipedia. Those data streams are then fed into machine-learning algorithms to make predictions in real time.
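The CMU models are far more sophisticated than this, but a toy version of a nowcast shows the shape of the idea: fit a regression from online signals to officially reported counts, then apply it to the current week's signals before official numbers arrive. All numbers below are invented.

    # Toy nowcast sketch: map weekly online signals to reported case counts.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Columns: [search volume, health-site page views] for past weeks.
    signals = np.array([[120, 340], [150, 410], [200, 520], [260, 640]])
    reported_cases = np.array([1000, 1300, 1800, 2400])     # ground truth so far

    model = LinearRegression().fit(signals, reported_cases)

    this_week = np.array([[310, 700]])                       # no official count yet
    print(f"Nowcast for the current week: {model.predict(this_week)[0]:.0f} cases")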

The second and third are both proper forecasts: a prediction of what's to come. One is based on machine learning and the other on crowdsourced opinion. Predictions include trends expected up to four weeks ahead, as well as important milestones like when the season will peak and the maximum number of expected cases. Such information helps both the CDC and health-care providers ramp up capacity and prepare in advance.

The machine-learning forecast takes into account the nowcast as well as additional historical data from the CDC. There are 20 years of robust data on flu seasons in the US, providing ample fodder for the algorithms.

In contrast, the crowdsourcing method taps into a group of volunteers. Every week, experts and non-experts (who are found to do just as well with a little participation experience) are asked to log on to an online system and review a chart showing the trajectory of past and current flu seasons. They are then asked to complete the current season's curve, projecting how many more flu cases there will be over time. Though people don't make very good predictions individually, in aggregate they are often just as good as the machine-learning forecast.
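One simple, robust way to combine the volunteers' curves (not necessarily the exact method CMU uses) is to take the median across volunteers at each future week, as in this small sketch with made-up numbers.

    # Aggregate crowdsourced projections with a per-week median, which shrugs off
    # a few wildly off individual guesses.
    import numpy as np

    # Each row is one volunteer's projected case counts for the next four weeks.
    volunteer_curves = np.array([
        [1200, 1500, 1700, 1600],
        [1100, 1400, 1900, 2100],
        [1300, 1350, 1500, 1400],
        [5000, 8000, 9000, 9500],   # an outlier the median largely ignores
    ])

    consensus = np.median(volunteer_curves, axis=0)
    print(consensus)   # aggregated forecast for weeks 1-4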


Over the years, Rosenfeld's team has fine-tuned each of its methods to predict the trajectory of the flu with near-perfect accuracy. At the end of each flu season, the CDC always retroactively updates final numbers, giving the CMU lab a chance to see how its projections stack up. The researchers are now adapting all the techniques for Covid-19, but each will pose distinct challenges.

For the machine-learning-based nowcast, many of the data sources will be the same, but the prediction model will be different. The algorithms will need to learn new correlations between the signals in the data and the ground truth. One reason: there's far greater panic around coronavirus, which causes a completely different pattern of online activity. People will look for coronavirus-related information at much higher rates, even if they feel fine, making it more difficult to tell who may already have symptoms.

In a pandemic situation, there is also very little historical data, which will affect both forecasts. The flu happens on a highly regular cycle each year, while pandemics are erratic and rare. The last pandemic, H1N1 in 2009, also had very different characteristics, primarily affecting younger rather than elderly populations. The Covid-19 outbreak has been precisely the opposite, with older patients facing the highest risk. On top of that, the surveillance systems for tracking cases weren't fully developed back then.

"That's the part that I think is going to be the most challenging," says Rosenfeld, because machine-learning systems, in their nature, learn from examples. He's hopeful that the crowdsourcing method may be more resilient. On the one hand, little is known about how it will fare in pandemic forecasting. On the other hand, "people are actually quite good at adjusting to novel circumstances," he says.

Rosenfeld's team is now actively working on ways to make these predictions as good as possible. Flu-testing labs are already beginning to transition to Covid-19 testing and reporting results to the CDC. The CMU lab is also reaching out to other organizations to get as much rich and accurate data as possible, things like anonymized, aggregated statistics from electronic health records and purchasing patterns for anti-fever medication, to find sharper signals to train its algorithms.

To compensate for the lack of historical data from previous pandemics, the team is relying on older data from the current pandemic. It's looking to incorporate data from countries that were hit earlier and will update its machine-learning models as more accurate data is retroactively posted. At the end of every week, the lab will get a report from the CDC with the most up-to-date trajectory of cases in the US, including revisions on numbers from previous weeks. The lab will then revise its models to close the gaps between the original predictions and the rolling statistics.

Rosenfeld worries about the limitations of these forecasts. There is far more uncertainty than what he's usually comfortable with: for every prediction the lab provides to the CDC, it will include a range of possibilities. "We're not going to tell you what's going to happen," he says. "What we tell you is what are the things that can happen and how likely is each one of them."

Even after the pandemic is over, the uncertainty won't go away. "It will be very difficult to tell how good our methods are," he says. "You could be accurate for the wrong reasons. You could be inaccurate for the wrong reasons. Because you have only one season to test it on, you can't really draw any strong, robust conclusions about your methodology."

But in spite of all these challenges, Rosenfeld believes the work will be worthwhile in informing the CDC and improving the agency's preparation. "I can do the best I can now," he says. "It's better than not having anything."


4 ways to fine-tune your AI and machine learning deployments – TechRepublic

Life cycle management of artificial intelligence and machine learning initiatives is vital in order to rapidly deploy projects with up-to-date and relevant data.


An institutional finance company wanted to improve time to market on the artificial intelligence (AI) and machine learning (ML) applications it was deploying. The goal was to reduce time to delivery on AI and ML applications, which had been taking 12 to 18 months to develop. The long lead times jeopardized the company's ability to meet its time-to-market goals in areas of operational efficiency, compliance, risk management, and business intelligence.


After adopting life-cycle management software for its AI and ML application development and deployment, the company was able to reduce its AI and ML application time to market to days, and in some cases, to hours. The process improvement enabled corporate data scientists to spend 90% of their time on data model development, instead of spending 80% of their time resolving technical challenges caused by unwieldy deployment processes.

This is important because the longer you extend your big data and AI and ML modeling, development, and delivery processes, the greater the risk that you end up with modeling, data, and applications that are already out of date by the time they are ready to be implemented. In the compliance area alone, this creates risk and exposure.

"Three big problems enterprises face as they roll out artificial intelligence and machine learning projects is the inability to rapidly deploy projects, data performance decay, and compliance-related liability and losses," said Stu Bailey, chief technical officer of ModelOP, which provides software that deploys, monitors, and governs data science AI and ML models.


Bailey believes that most problems arise out of a lack of ownership and collaboration between data science, IT, and business teams when it comes to getting data models into production in a timely manner. In turn, these delays adversely affect profitability and time-to-business insight.

"Another reason that organizations have difficulty managing the life cycle of their data models is that there are many different methods and tools today for producing data science and machine learning models, but no standards for how they're deployed and managed," Bailey said.

The management of big data, AI, and ML life cycles can be a prodigious task that goes beyond having software and automation that does some of the "heavy lifting." Also, many organizations lack policies and procedures for these tasks. In this environment, data can rapidly become dated, application logic and business conditions can change, and new behaviors that humans must teach to machine learning applications can become neglected.


How can organizations ensure that the time and talent they put into their big data, AI, and ML applications remain relevant?

Most organizations acknowledge that collaboration between data science, IT, and end users is important, but they don't necessarily follow through. Effective collaboration between departments depends on clearly articulated policies and procedures that everyone adheres to in the areas of data preparation, compliance, speed to market, and learning for ML.

Companies often fail to establish regular intervals for updating logic and data for big data, AI, and ML applications in the field. The learning update cycle should be continuous; it's the only way you can assure concurrency between your algorithms and the world in which they operate.

Like their transaction system counterparts, there will come a time when some AI and ML applications will have seen their day. This is the end of their life cycles, and the appropriate thing to do is retire them.

If you can automate some of your life cycle maintenance functions for big data, AI, and ML, do so. Automation software can automate handoffs between data science, IT, and production. It makes the process of deployment that much easier.



AI-powered honeypots: Machine learning may help improve intrusion detection – The Daily Swig

John Leyden, 09 March 2020 at 15:50 UTC (updated 09 March 2020 at 16:04 UTC)

Forget crowdsourcing, here's crook-sourcing

Computer scientists in the US are working to apply machine learning techniques in order to develop more effective honeypot-style cyber defenses.

So-called deception technology refers to traps or decoy systems that are strategically placed around networks.

These decoy systems are designed to act as honeypots: once an attacker has penetrated a network, they will attempt to attack the decoys, setting off security alerts in the process.

Deception technology is not a new concept. Companies including Illusive Networks and Attivo have been working in the field for several years.

Now, however, researchers from the University of Texas at Dallas (UT Dallas) are aiming to take the concept one step further.

The DeepDig (DEcEPtion DIGging) technique plants traps and decoys onto real systems before applying machine learning techniques in order to gain a deeper understanding of attackers' behavior.

The technique is designed to use cyber-attacks as free sources of live training data for machine learning-based intrusion detection systems.
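DeepDig's actual approach is described in the paper mentioned later in this article; purely as an illustration of the "attacks as free training data" idea, a defender could label requests captured by traps as malicious, label ordinary traffic as benign, and retrain a classifier as new attacker interactions accumulate. The feature extraction and sample requests below are deliberately crude assumptions.

    # Generic "crook-sourced" intrusion-detection sketch, not DeepDig itself.
    from sklearn.ensemble import RandomForestClassifier

    def features(request):
        return [len(request), request.count("'"), request.count("../"),
                int("select" in request.lower())]

    benign = ["GET /index.html", "GET /images/logo.png", "POST /login user=alice"]
    captured_by_trap = ["GET /admin.php?id=1' OR '1'='1", "GET /../../etc/passwd"]

    X = [features(r) for r in benign + captured_by_trap]
    y = [0] * len(benign) + [1] * len(captured_by_trap)     # 1 = attacker interaction

    detector = RandomForestClassifier(random_state=0).fit(X, y)
    print(detector.predict([features("GET /view.php?id=2' OR '1'='1")]))   # likely [1]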

Somewhat ironically, the prototype technology enlists attackers as free penetration testers.

Dr Kevin Hamlen, endowed professor of computer science at UT Dallas, explained: "Companies like Illusive Networks, Attivo, and many others create network topologies intended to be confusing to adversaries, making it harder for them to find real assets to attack."

The shortcoming of existing approaches, Dr Hamlen told The Daily Swig, is that such deceptions do not learn from attacks.

"While the defense remains relatively static, the adversary learns over time how to distinguish honeypots from a real asset, leading to an asymmetric game that the adversary eventually wins with high probability," he said.

In contrast, DeepDig turns real assets into traps that learn from attacks using artificial intelligence and data mining.

Turning real assets into a form of honeypot has numerous advantages, according to Dr Hamlen.

"Even the most skilled adversary cannot avoid interacting with the trap because the trap is within the real asset that is the adversary's target, not a separate machine or software process," he said.

This leads to a symmetric game in which the defense continually learns and gets better at stopping even the most stealthy adversaries.

The research, which has applications in the field of web security, was presented in a paper (PDF) entitled 'Improving Intrusion Detectors by Crook-Sourcing' at the recent Computer Security Applications Conference in Puerto Rico.

The research was funded by the US federal government. The algorithms and evaluation data developed so far have been publicly released to accompany the research paper.

It's hoped that the research might eventually find its way into commercially available products, but this is still some time off and the technology is still only at the prototype stage.

"In practice, companies typically partner with a university that conducted the research they're interested in to build a full product," a UT Dallas spokesman explained. "Dr Hamlen's project is not yet at that stage."



The Connection Between Astrology And Your Tesla AutoDrive – Forbes

Preamble: Intermittently, I will be introducing columns that explore some seemingly outlandish concepts. The purpose is a bit of humor, but also to provoke some thought. Enjoy.


Historically, astrology has been a major component of the cultural life in many major civilizations. Significant events such as marriage, moving into a new home, or even travel were planned with astrology in mind. Even in modern times, astrological internet sites enjoy great success and the gurus of the art publish in major newspapers.

Of course, with the advent of scientific methods and formal education, astrology has rapidly lost favor in intellectual society. After all, what could possibly be the causal relationship between the movement of planets and whether someone will get a job promotion? As some have pointed out, even if there were a relationship, the configuration of the stars changes, so how could the predictions of the past possibly be valid?

Pure poppycock. Right? Perhaps. Let's take a deeper look.

Let's consider the central technology at the apex of current intellectual achievement: machine learning. Machine learning is the engine underlying important technologies such as autonomous vehicles, including Tesla's AutoDrive. What is machine learning at its core? One looks at massive amounts of data and trains a computational engine (ML engine). This ML engine is then used to make future predictions. Sometimes, the training is done in a constrained manner where one looks at particular items, and other times, the training is left unconstrained. Machine learning and the associated field of Artificial Intelligence (AI) is at the forefront of computer science research. Indeed, as we have discussed in past articles, AI is considered to be the next big economic mega-driver in a vast number of markets. After looking at machine learning, an interesting thought comes to mind.

Was astrology really just machine learning done by humans?

Could the thought leaders from great civilizations have looked at large amounts of human behavioral data and used something very reasonable (planetary movements) to train the astrology engine? After all, what really is the difference between machine learning and astrology?

Marketing Chart Comparing Astrology and Machine Learning

Both astrology and machine learning seem to have a concept of training. In astrology, the astrological signs are used as points of interest, and seemingly arbitrary connections are made to individual human circumstances. Even without the understanding of causality, the correlations can be somewhat true. In machine learning, data correlations are discovered, and there is no requirement of causation. This thought process is central to the machine learning paradigm, and gives it much of its power. In fact, as the chart above shows, there are uncomfortable levels of parallels between astrology and machine learning.

What does this mean? Should we take machine learning a little less seriously? Certainly, some caution is warranted, but it appears to be clear that machine learning can provide utility.

So, what about astrology? Perhaps we should take it a bit more seriously.

If you enjoyed this article, you may also enjoy A Better Transportation Option Than A Tesla.


If AI’s So Smart, Why Can’t It Grasp Cause and Effect? – WIRED

Here's a troubling fact. A self-driving car hurtling along the highway and weaving through traffic has less understanding of what might cause an accident than a child who's just learning to walk.

A new experiment shows how difficult it is for even the best artificial intelligence systems to grasp rudimentary physics and cause and effect. It also offers a path for building AI systems that can learn why things happen.

"The experiment was designed to push beyond just pattern recognition," says Josh Tenenbaum, a professor at MIT's Center for Brains, Minds & Machines, who worked on the project with Chuang Gan, a researcher at MIT, and Kexin Yi, a PhD student at Harvard. "Big tech companies would love to have systems that can do this kind of thing."

The most popular cutting-edge AI technique, deep learning, has delivered some stunning advances in recent years, fueling excitement about the potential of AI. It involves feeding a large approximation of a neural network copious amounts of training data. Deep-learning algorithms can often spot patterns in data beautifully, enabling impressive feats of image and voice recognition. But they lack other capabilities that are trivial for humans.

To demonstrate the shortcoming, Tenenbaum and his collaborators built a kind of intelligence test for AI systems. It involves showing an AI program a simple virtual world filled with a few moving objects, together with questions and answers about the scene and what's going on. The questions and answers are labeled, similar to how an AI system learns to recognize a cat by being shown hundreds of images labeled "cat."

Systems that use advanced machine learning exhibited a big blind spot. Asked a descriptive question such as "What color is this object?", a cutting-edge AI algorithm will get it right more than 90 percent of the time. But when posed more complex questions about the scene, such as "What caused the ball to collide with the cube?" or "What would have happened if the objects had not collided?", the same system answers correctly only about 10 percent of the time.


David Cox, IBM director of the MIT-IBM Watson AI Lab, which was involved with the work, says understanding causality is fundamentally important for AI. "We as humans have the ability to reason about cause and effect, and we need to have AI systems that can do the same."

A lack of causal understanding can have real consequences, too. Industrial robots can increasingly sense nearby objects, in order to grasp or move them. But they don't know that hitting something will cause it to fall over or break unless they've been specifically programmed, and it's impossible to predict every possible scenario.

If a robot could reason causally, however, it might be able to avoid problems it hasn't been programmed to understand. The same is true for a self-driving car. It could instinctively know that if a truck were to swerve and hit a barrier, its load could spill onto the road.

Causal reasoning would be useful for just about any AI system. Systems trained on medical information rather than 3-D scenes need to understand the cause of disease and the likely result of possible interventions. Causal reasoning is of growing interest to many prominent figures in AI. "All of this is driving towards AI systems that can not only learn but also reason," Cox says.

The test devised by Tenenbaum is important, says Kun Zhang, an assistant professor who works on causal inference and machine learning at Carnegie Mellon University, because it provides a good way to measure causal understanding, albeit in a very limited setting. "The development of more-general-purpose AI systems will greatly benefit from methods for causal inference and representation learning," he says.


Tip: Machine learning solutions for journalists | Tip of the day – Journalism.co.uk

Much has been said about what artificial intelligence and machine learning can do for journalism: from understanding human ethics to predicting when readers are about to cancel their subscriptions.

Want to get hands-on with machine learning? Quartz investigative editor John Keefe provides 15 video lessons taken from the 'Hands-on Machine Learning Solutions for Journalists' online class he led through the Knight Center for Journalism in the Americas. It covers all the techniques that the Quartz investigative team and AI studio commonly use in their journalism.

"Machine learning is particularly good at finding patterns and that can be useful to you when you're trying to search through text documents or lots of images," Keefe explained in the introduction video.

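Keefe's lessons cover specific newsroom workflows; as a minimal, generic taste of the "finding patterns in lots of documents" idea, here is a sketch that ranks a pile of documents against a reporter's query using TF-IDF similarity. The documents and query are invented, and it assumes scikit-learn is installed.

    # Rank documents by similarity to a query so the most relevant surface first.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    documents = [
        "Invoice for consulting services paid to an offshore account",
        "Meeting minutes: quarterly budget review",
        "Wire transfer authorization to a shell company in Panama",
    ]
    query = ["offshore shell company payments"]

    vectorizer = TfidfVectorizer(stop_words="english")
    doc_vectors = vectorizer.fit_transform(documents)
    query_vector = vectorizer.transform(query)

    scores = cosine_similarity(query_vector, doc_vectors)[0]
    for score, doc in sorted(zip(scores, documents), reverse=True):
        print(f"{score:.2f}  {doc}")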
Want to learn more about using artificial intelligence in your newsroom? Join us on the 4 June 2020 at our digital journalism conference Newsrewired at MediaCityUK, which will feature a workshop on implementing artificial intelligence into everyday journalistic work. Visit newsrewired.com for the full agenda and tickets

