Chelsea Manning showed moral strength by choosing imprisonment over collaboration with US govt, says Snowden – RT

Chelsea Manning's decision to sit in jail rather than cooperate with the US government's prosecution of WikiLeaks is a testament to her character and unwavering principles, NSA whistleblower Edward Snowden has said.

Commenting on Manning's newly won freedom, Snowden noted that the former Army analyst-turned-whistleblower had been cast into a dungeon by the United States for refusing to work with the government to criminalize the publication of classified materials.

They offered to let her out in exchange for collaboration, but she chose her principles instead.

For Snowden, Manning's refusal to trade her beliefs for her freedom was the ultimate display of moral strength.

Manning was released on Thursday after spending nearly a year in detention for refusing to cooperate with a federal grand jury probe into WikiLeaks. Her release order came shortly after her legal team disclosed that she had been hospitalized after attempting to take her own life. Although no longer locked away in a Virginia detention facility, Manning still faces more than $250,000 in fines for refusing to cooperate with the inquiry.

The ex-Army analyst became a household name after leaking hundreds of thousands of documents and files related to the US wars in Iraq and Afghanistan. She was found guilty of espionage in 2013 and spent four years in prison before her sentence was commuted in 2017.

The decision to release Manning coincides with another legal battle: WikiLeaks co-founder Julian Assange is currently fighting extradition to the United States. The journalist could spend the rest of his life in a US prison if the UK court rules against him.



ServiceNow pulls on its platforms, talks up machine learning, analytics in biggest release since ex-SAP boss took reins – The Register

As is the way with the 21st century, IT companies are apt to get meta, and ServiceNow is no exception.

In its biggest product release since the arrival of former SAP boss Bill McDermott as its new CEO, the cloudy business process company is positioning itself as the "platform of platforms". Which goes to show, if nothing else, that platformization also applies to platforms.

To avoid plunging into an Escher-esque tailspin of abstraction, it is best to look at what Now Platform Orlando actually does and who, if anyone, it might help.

The idea is that ServiceNow's tools make routine business activity much easier and slicker. To this the company is adding intelligence, analytics and AI, it said.

Take the arrival of a new employee. They might need to be set up on HR and payroll systems, get access to IT equipment and applications, have facilities management give them the right desk and workspace, be given building security access and perhaps have to sign some legal documents.

Rather than multiple people doing each of these tasks with different IT systems, ServiceNow will make one poor soul do it using its single platform, which accesses all the other prerequisite applications, said David Flesh, ServiceNow product marketing director.

It is also chucking chatbots at that luckless staffer. In January, ServiceNow bought Passage AI, a startup that helps customers build chatbots in multiple languages. It is using this technology to create virtual assistants that handle some of the most common requests hitting HR and IT service desks: password resets, getting access to Wi-Fi, that kind of thing.

This can also mean staffers don't have to worry about where to send requests: if, for example, they've just found out they're going to become a parent, they can fire questions at an agent rather than at HR, their boss or the finance team. The firm said: "Agents are a great way for employees to find information and abstracts that organizational complexity."

ServiceNow has also introduced machine learning, for example, in IT operations management, which uses systems data to identify when a service is degrading and what could be causing the problem. "You get more specific information about the cause and suggested actions to take to actually remediate the problem," Flesh said.

Customers looking to use this feature will still have to train the machine-learning models on historical datasets from their operations and validate them, as per the usual ML pipeline. But ServiceNow said it makes the process more graphical and brings its own knowledge of common predictors of operational problems.
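The underlying idea (learn a baseline from historical operations data, then flag readings that deviate sharply) can be sketched in miniature. This is not ServiceNow's implementation, just a toy illustration of degradation detection using a mean-and-deviation baseline:

```python
from statistics import mean, stdev

def train_baseline(history):
    """Learn a simple baseline (mean and spread) from historical metric values."""
    return mean(history), stdev(history)

def is_degraded(value, baseline, threshold=3.0):
    """Flag a reading sitting more than `threshold` deviations above the baseline."""
    mu, sigma = baseline
    return (value - mu) / sigma > threshold

# Hypothetical historical response times (ms) for a healthy service
history = [98, 102, 95, 101, 99, 103, 97, 100, 96, 104]
baseline = train_baseline(history)

print(is_degraded(101, baseline))  # False: within normal range
print(is_degraded(250, baseline))  # True: sharp spike, service degrading
```

Production systems use far richer models, but the train-validate-monitor loop is the same shape.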

Lastly, analytics is a new feature in the update. Users can include key performance indicators in the workflows they create, and the platform includes the tools to track and analyse those KPIs and suggest how to improve performance. It also suggests useful KPIs.

Another application of the analytics tools is for IT teams - traditionally the company's core users - monitoring cloud services. ServiceNow said it helps optimise organisations' cloud usage by "making intelligent recommendations on managing usage across business hours, choosing the right resources and enforcing usage policies".

With McDermott's arrival and a slew of new features and customer references, ServiceNow is getting a lot of attention, but many of these technologies exist in other products.

There are independent robotic process automation (RPA) vendors who build automation into common tasks, while application vendors are also introducing automation within their own environments. But as application and platform upgrade cycles are sluggish, and RPA has proved difficult to scale, ServiceNow may find a receptive audience for its, er, platform of platforms.



Skill up for the digital future with India’s #1 Machine Learning Lab and AI Research Center – Inventiva

Every tech professional today, irrespective of their role in the organisation, needs to be AI/ML-ready to compete in the new world order. In keeping with the current and future demand for professionals with expertise in AI and Machine Learning (ML), and to help build a holistic understanding of the subject, IIIT Hyderabad, in association with TalentSprint, an ed-tech platform, is offering an AI/ML Executive Certification Program for working professionals in India and abroad.

The programme is designed for working professionals in a 13-week format that involves masterclass lectures, hands-on labs, mentorship, hackathons, and workshops to ensure fast-track learning. The programme is conducted in Hyderabad to enable a wider audience to benefit from the expertise of IIIT Hyderabad's Machine Learning Lab.

The programme has successfully completed 11 cohorts with 2200+ participants who are currently working with more than 600 top companies.

You can apply for the 14th cohort here

Participants will get access to in-person classes every weekend. This enables professionals from in and around Hyderabad to build AI/ML expertise at India's top Machine Learning Lab at IIIT Hyderabad.

With a balanced mix of lectures and labs, the programme will also host hackathons, group labs, and workshops, and participants will get assistance from mentors throughout. The programme's hackathons, group labs, and workshops also enable participants to work in teams with exceptional peer groups. Moreover, the lectures are delivered by world-class faculty and industry experts.

Refresh your knowledge on coding and the mathematics necessary for building expertise in AI/ML

Learn to translate real-world problems into AI/ML abstractions

Learn about and apply standard AI/ML algorithms to create AI/ML applications

Implement practical solutions using Deep Learning Techniques and Toolchains

Participate in industry projects and hackathons

While there are a number of courses on offer in this domain, what makes this AI/ML Executive Certification Program stand out is the fact that it is offered by India's No. 1 Machine Learning Lab at IIIT Hyderabad. The programme follows a unique 5-step learning process to ensure fast-track learning: Masterclass Lectures, Hands-on Labs, Mentorship, Hackathons and Workshops. Moreover, participants also get a chance to learn and collaborate with leading people from academia, industry and global blue-chip institutions.

The institute has been a torchbearer of research for several years. It hosts the Kohli Center on Intelligent Systems (KCIS), India's leading center on intelligent systems, whose research has been featured in 600 publications and has received 5,792 citations in academic publications. It also hosts the Center for Visual Information Technology (CVIT), which focuses on basic and advanced research in Image Processing, Computer Vision, Computer Graphics and Machine Learning.

Tech professionals with at least one year of work experience and a coding background are encouraged to apply. The programme is especially beneficial for business leaders, CXOs, project managers, analysts and developers. Applications for the 14th cohort close on March 20. Apply today!


AI and machine learning is not the future, it’s the present – Eyes on APAC – ComputerWeekly.com

This is a guest post by Raju Vegesna, chief evangelist at Zoho

For many, artificial intelligence (AI) is a distant and incomprehensible concept associated only with science fiction movies or high-tech laboratories.

In reality, however, AI and machine learning are already changing the world we know. From TVs and toothbrushes to real-time digital avatars that interact with humans, the recent CES show demonstrated how widespread AI is becoming in everyday life.

The same can be said of the business community, with the latest Gartner research revealing that 37% of organisations had implemented some form of AI or machine learning.

So far, these technologies have largely been adopted and implemented more by larger organisations with the resources and expertise to seamlessly integrate them into their business. But technology has evolved significantly in recent years, and SaaS (software as a service) providers now offer integrated technology and AI that meets the needs and budgets of small and medium businesses too.

Here are a few evolving trends in AI and machine learning that businesses of all sizes could capitalise on in 2020 and beyond.

The enterprise software marketplace is expanding rapidly. More vendors are entering the market, often with a growing range of solutions, which creates confusion for early adopters of the technology. Integrating new technologies from a range of different vendors can be challenging, even for large enterprise organisations.

So, in 2020 and beyond, the businesses that will make the most of AI and machine learning are the ones implementing single-vendor technology platforms. It's a challenge to work with data that is scattered across different applications using different data models, but organisations that consolidate all their data in one integrated platform will find it much easier to feed into a machine learning algorithm.

After all, the more data that's available, the more powerful your AI and machine learning models will be. By capitalising on the wealth of data supplied by integrated software platforms, advanced business applications will be able to answer our questions or help us navigate interfaces. If you're a business owner planning to utilise AI and machine learning for your business in 2020, the single-vendor strategy is the way to go.

Technology has advanced at such a rate that businesses no longer need to compromise their processes to fit the technology; instead, software can be tailored to the way each business works. This type of hyper-personalisation increases productivity for business software users and will continue to be a prime focus for businesses in 2020.

Take, for example, the rise of algorithmic social media timelines we have seen in the last few years. For marketers, AI and machine learning mean personalisation is becoming more and more sophisticated, allowing businesses to supercharge and sharpen their focus on their customers. Companies which capture insights to create personalised customer experiences and accelerate sales will likely win in 2020.

With AI and machine learning, vast amounts of data are processed every second of the day. In 2020, one of the significant challenges faced by companies implementing AI and machine learning is data cleansing: the process of detecting, correcting or removing corrupt or inaccurate records from a data set.

Smaller organisations can begin to expect AI functionality in everyday software like spreadsheets, where they'll be able to parse information out of addresses or clean up inconsistencies. Larger organisations, meanwhile, will benefit from AI that ensures their data is more consumable for analytics or prepares it for migration from one application to another.
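The three operations named above (detect, correct, remove) are simple to illustrate. A minimal sketch on made-up contact records, not any vendor's actual pipeline:

```python
def cleanse(records):
    """Detect, correct, or remove corrupt records in a toy contact data set."""
    cleaned = []
    for rec in records:
        email = rec.get("email", "").strip().lower()  # correct: normalise case/whitespace
        name = rec.get("name", "").strip()
        if "@" not in email or not name:              # detect: invalid or incomplete
            continue                                  # remove: drop the corrupt record
        cleaned.append({"name": name, "email": email})
    return cleaned

records = [
    {"name": "Ada Lovelace", "email": "  ADA@example.com "},
    {"name": "", "email": "noname@example.com"},  # missing name: removed
    {"name": "Bob", "email": "not-an-email"},     # invalid email: removed
]
print(cleanse(records))
# [{'name': 'Ada Lovelace', 'email': 'ada@example.com'}]
```

Real cleansing tools add fuzzy matching, deduplication and schema validation on top of this basic detect/correct/remove loop.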

Businesses can thrive with the right content and strategic, innovative marketing. Consider auto-tagging, which could soon become the norm. Smartphones can recognise and tag objects in your photos, making your photo library much more searchable. Well start to see business applications auto-tag information to make it much more accessible.

Thanks to AI, customer relationship management (CRM) systems will continue to be a fantastic and always-advancing channel through which businesses can market to their customers. Today, a business can find its top customers in a CRM system by running a report and sorting by revenue or sales. In the coming years, businesses will be able to search for top customers, and their CRM system will know what they're looking for.

With changing industry trends and demands, it's important for all businesses to use the latest technology to create a positive impact on their operations. In 2020 and beyond, AI and machine learning will support businesses by helping them reduce manual labour and enhance productivity.

While some businesses, particularly small businesses, might be apprehensive about AI, it is a transformation that is bound to bring along a paradigm shift for those that are ready to take a big step towards a technology-driven future.


Navigating the New Landscape of AI Platforms – Harvard Business Review

Executive Summary

What only insiders generally know is that data scientists, once hired, spend more time building and maintaining the tooling for AI systems than they do building the AI systems themselves. Now, though, new tools are emerging to ease the entry into this era of technological innovation. Unified platforms that bring the work of collecting, labelling, and feeding data into supervised learning models, or that help build the models themselves, promise to standardize workflows in the way that Salesforce and Hubspot have for managing customer relationships. Some of these platforms automate complex tasks using integrated machine-learning algorithms, making the work easier still. This frees up data scientists to spend time building the actual structures they were hired to create, and puts AI within reach of even small- and medium-sized companies.

Nearly two years ago, Seattle Sport Sciences, a company that provides data to soccer club executives, coaches, trainers and players to improve training, made a hard turn into AI. It began developing a system that tracks ball physics and player movements from video feeds. To build it, the company needed to label millions of video frames to teach computer algorithms what to look for. It started out by hiring a small team to sit in front of computer screens, identifying players and balls on each frame. But it quickly realized that it needed a software platform in order to scale. Soon, its expensive data science team was spending most of its time building a platform to handle massive amounts of data.

These are heady days when every CEO can see or at least sense opportunities for machine-learning systems to transform their business. Nearly every company has processes suited for machine learning, which is really just a way of teaching computers to recognize patterns and make decisions based on those patterns, often faster and more accurately than humans. Is that a dog on the road in front of me? Apply the brakes. Is that a tumor on that X-ray? Alert the doctor. Is that a weed in the field? Spray it with herbicide.

What only insiders generally know is that data scientists, once hired, spend more time building and maintaining the tools for AI systems than they do building the systems themselves. A recent survey of 500 companies by the firm Algorithmia found that expensive teams spend less than a quarter of their time training and iterating machine-learning models, which is their primary job function.

Now, though, new tools are emerging to ease the entry into this era of technological innovation. Unified platforms that bring the work of collecting, labelling and feeding data into supervised learning models, or that help build the models themselves, promise to standardize workflows in the way that Salesforce and Hubspot have for managing customer relationships. Some of these platforms automate complex tasks using integrated machine-learning algorithms, making the work easier still. This frees up data scientists to spend time building the actual structures they were hired to create, and puts AI within reach of even small- and medium-sized companies, like Seattle Sport Sciences.

Frustrated that its data science team was spinning its wheels, Seattle Sport Sciences' AI architect John Milton finally found a commercial solution that did the job. "I wish I had realized that we needed those tools," said Milton. He hadn't factored the infrastructure into the original budget, and having to go back to senior management to ask for it wasn't a pleasant experience for anyone.

The AI giants, Google, Amazon, Microsoft and Apple, among others, have steadily released tools to the public, many of them free, including vast libraries of code that engineers can compile into deep-learning models. Facebook's powerful object-recognition tool, Detectron, has become one of the most widely adopted open-source projects since its release in 2018. But using those tools can still be a challenge, because they don't necessarily work together. This means data science teams have to build connections between each tool to get them to do the job a company needs.

The newest leap on the horizon addresses this pain point. New platforms are now allowing engineers to plug in components without worrying about the connections.

For example, Determined AI and Paperspace sell platforms for managing the machine-learning workflow. Determined AI's platform includes automated elements to help data scientists find the best architecture for neural networks, while Paperspace comes with access to dedicated GPUs in the cloud.

"If companies don't have access to a unified platform, they're saying, 'Here's this open-source thing that does hyperparameter tuning. Here's this other thing that does distributed training,' and they are literally gluing them all together," said Evan Sparks, cofounder of Determined AI. "The way they're doing it is really with duct tape."

Labelbox is a training data platform, or TDP, for managing the labeling of data so that data science teams can work efficiently with annotation teams across the globe. (The author of this article is the company's co-founder.) It gives companies the ability to track their data, spot and fix bias in it, and optimize the quality of their training data before feeding it into their machine-learning models.
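Spotting bias in training data often begins with something as simple as checking the label distribution. A toy sketch of that first pass (not Labelbox's actual tooling; the class names are invented for illustration):

```python
from collections import Counter

def label_balance(labels, warn_ratio=0.1):
    """Report each class's share of the training set and flag
    under-represented classes, a first pass at spotting label bias."""
    counts = Counter(labels)
    total = len(labels)
    return {cls: (n / total, n / total < warn_ratio) for cls, n in counts.items()}

# Hypothetical annotation labels from a sports-video dataset
labels = ["player"] * 90 + ["ball"] * 8 + ["referee"] * 2
for cls, (share, flagged) in label_balance(labels).items():
    print(f"{cls}: {share:.0%}{'  <- under-represented' if flagged else ''}")
```

A model trained on this set would see almost no referees, so the annotation team would be asked for more examples of the flagged classes before training.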

It's the solution that Seattle Sport Sciences uses. John Deere uses the platform to label images of individual plants so that smart tractors can spot weeds and deliver pesticide precisely, saving money and sparing the environment unnecessary chemicals.

Meanwhile, companies no longer need to hire experienced researchers to write machine-learning algorithms, the steam engines of today. They can find them for free or license them from companies who have solved similar problems before.

Algorithmia, which helps companies deploy, serve and scale their machine-learning models, operates an algorithm marketplace so data science teams don't duplicate other people's effort by building their own. Users can search through the 7,000 different algorithms on the company's platform and license one or upload their own.

Companies can even buy complete off-the-shelf deep learning models ready for implementation.

Fritz.ai, for example, offers a number of pre-trained models that can detect objects in videos or transfer artwork styles from one image to another, all of which run locally on mobile devices. The company's premium services include creating custom models and more automation features for managing and tweaking models.

And while companies can use a TDP to label training data, they can also find pre-labeled datasets, many for free, that are general enough to solve many problems.

Soon, companies will even offer machine-learning as a service: Customers will simply upload data and an objective and be able to access a trained model through an API.

In the late 18th century, Maudslay's lathe led to standardized screw threads and, in turn, to interchangeable parts, which spread the industrial revolution far and wide. Machine-learning tools will do the same for AI, and, as a result of these advances, companies will be able to implement machine learning with fewer, and less senior, data scientists. That's important given the looming machine-learning human-resources crunch: according to a 2019 Dun & Bradstreet report, 40 percent of respondents from Forbes Global 2000 organizations say they are adding more AI-related jobs. And the number of AI-related job listings on the recruitment portal Indeed.com jumped 29 percent from May 2018 to May 2019. Most of that demand is for supervised-learning engineers.

But C-suite executives need to understand the need for those tools and budget accordingly. As Seattle Sport Sciences learned, it's better to familiarize yourself with the full machine-learning workflow and identify the necessary tooling before embarking on a project.

That tooling can be expensive, whether the decision is to build or to buy. As is often the case with key business infrastructure, there are hidden costs to building. Buying a solution might look more expensive up front, but it is often cheaper in the long run.

Once you've identified the necessary infrastructure, survey the market to see what solutions are out there and build the cost of that infrastructure into your budget. Don't fall for a hard sell. The industry is young, both in terms of the time it's been around and the age of its entrepreneurs. The ones who are in it out of passion are idealistic and mission-driven. They believe they are democratizing an incredibly powerful new technology.

The AI tooling industry is facing more than enough demand. If you sense someone is chasing dollars, be wary. The serious players are eager to share their knowledge and help guide business leaders toward success. Successes benefit everyone.


AI could help with the next pandemicbut not with this one – MIT Technology Review

It was an AI that first saw it coming, or so the story goes. On December 30, an artificial-intelligence company called BlueDot, which uses machine learning to monitor outbreaks of infectious diseases around the world, alerted clients (including various governments, hospitals, and businesses) to an unusual bump in pneumonia cases in Wuhan, China. It would be another nine days before the World Health Organization officially flagged what we've all come to know as Covid-19.

BlueDot wasn't alone. An automated service called HealthMap at Boston Children's Hospital also caught those first signs, as did a model run by Metabiota, based in San Francisco. That AI could spot an outbreak on the other side of the world is pretty amazing, and early warnings save lives.


But how much has AI really helped in tackling the current outbreak? That's a hard question to answer. Companies like BlueDot are typically tight-lipped about exactly who they provide information to and how it is used. And human teams say they spotted the outbreak the same day as the AIs did. Other projects, in which AI is being explored as a diagnostic tool or used to help find a vaccine, are still in their very early stages. Even if they are successful, it will take time (possibly months) to get those innovations into the hands of the health-care workers who need them.

The hype outstrips the reality. In fact, the narrative that has appeared in many news reports and breathless press releases, that AI is a powerful new weapon against disease, is only partly true and risks becoming counterproductive. For example, too much confidence in AI's capabilities could lead to ill-informed decisions that funnel public money to unproven AI companies at the expense of proven interventions such as drug programs. It's also bad for the field itself: inflated and then disappointed expectations have led to a crash of interest in AI, and a consequent loss of funding, more than once in the past.

So here's a reality check: AI will not save us from the coronavirus, certainly not this time. But there's every chance it will play a bigger role in future epidemics, if we make some big changes. Most won't be easy. Some we won't like.

There are three main areas where AI could help: prediction, diagnosis, and treatment.

Prediction

Companies like BlueDot and Metabiota use a range of natural-language processing (NLP) algorithms to monitor news outlets and official health-care reports in different languages around the world, flagging whether they mention high-priority diseases, such as coronavirus, or more endemic ones, such as HIV or tuberculosis. Their predictive tools can also draw on air-travel data to assess the risk that transit hubs might see infected people either arriving or departing.
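At its crudest, the monitoring described above comes down to scanning text for disease terms and scoring them by priority. A toy sketch, far simpler than the NLP pipelines these firms actually run (the term lists are invented for illustration):

```python
# Hypothetical watch lists, split by priority as the article describes
HIGH_PRIORITY = {"coronavirus", "ebola", "pneumonia"}
ENDEMIC = {"hiv", "tuberculosis", "malaria"}

def flag_report(text):
    """Return (priority, matched terms) for a news snippet,
    or None if no watched disease terms appear."""
    words = set(text.lower().replace(",", " ").replace(".", " ").split())
    high = words & HIGH_PRIORITY
    if high:
        return ("high", sorted(high))
    endemic = words & ENDEMIC
    if endemic:
        return ("endemic", sorted(endemic))
    return None

print(flag_report("Unusual cluster of pneumonia cases reported in Wuhan"))
# ('high', ['pneumonia'])
print(flag_report("City announces new transit schedule"))
# None
```

Production systems replace the keyword sets with multilingual language models and combine the signal with air-travel and case data, but the flag-and-prioritise shape is the same.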

The results are reasonably accurate. For example, Metabiota's latest public report, on February 25, predicted that on March 3 there would be 127,000 cumulative cases worldwide. It overshot by around 30,000, but Mark Gallivan, the firm's director of data science, says this is still well within the margin of error. It also listed the countries most likely to report new cases, including China, Italy, Iran, and the US. Again: not bad.


Others keep an eye on social media too. Stratifyd, a data analytics company based in Charlotte, North Carolina, is developing an AI that scans posts on sites like Facebook and Twitter and cross-references them with descriptions of diseases taken from sources such as the National Institutes of Health, the World Organisation for Animal Health, and the global microbial identifier database, which stores genome sequencing information.

Work by these companies is certainly impressive, and it goes to show how far machine learning has advanced in recent years. A few years ago Google tried to predict outbreaks with its ill-fated Flu Trends service, which was shelved in 2013 when it failed to predict that year's flu spike. What changed? It mostly comes down to the ability of the latest software to listen in on a much wider range of sources.

Unsupervised machine learning is also key. Letting an AI identify its own patterns in the noise, rather than training it on preselected examples, highlights things you might not have thought to look for. "When you do prediction, you're looking for new behavior," says Stratifyd's CEO, Derek Wang.
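One simple flavour of that idea: instead of matching a preselected watch list, surface whichever terms spike against their own historical baseline. A toy sketch of such a surprise signal, not Stratifyd's system (the counts are invented):

```python
def spiking_terms(history, today, min_ratio=5.0):
    """Flag terms whose count today is far above their own historical
    daily average, with no preselected keywords: never-before-seen
    terms are flagged automatically."""
    flagged = {}
    for term, count in today.items():
        past = history.get(term, [])
        baseline = sum(past) / len(past) if past else 0.0
        if baseline == 0 or count / baseline >= min_ratio:
            flagged[term] = (baseline, count)
    return flagged

# Hypothetical daily mention counts scraped from social posts
history = {"flu": [40, 35, 45], "pneumonia": [2, 1, 3], "concert": [50, 60, 55]}
today = {"flu": 50, "pneumonia": 180, "concert": 58, "wuhan": 90}
print(spiking_terms(history, today))
# 'pneumonia' (baseline 2.0, now 180) and the never-seen 'wuhan' are flagged
```

Nothing told the code that "pneumonia" or "wuhan" mattered; they stand out purely because their frequency broke from the baseline, which is the point Wang is making.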

But what do you do with these predictions? The initial prediction by BlueDot correctly pinpointed a handful of cities in the virus's path. This could have let authorities prepare, alerting hospitals and putting containment measures in place. But as the scale of the epidemic grows, predictions become less specific. Metabiota's warning that certain countries would be affected in the following week might have been correct, but it is hard to know what to do with that information.

What's more, all these approaches will become less accurate as the epidemic progresses, largely because reliable data of the sort that AI needs to feed on has been hard to come by for Covid-19. News sources and official reports offer inconsistent accounts. There has been confusion over symptoms and how the virus passes between people. The media may play things up; authorities may play things down. And predicting where a disease may spread from hundreds of sites in dozens of countries is a far more daunting task than making a call on where a single outbreak might spread in its first few days. "Noise is always the enemy of machine-learning algorithms," says Wang. Indeed, Gallivan acknowledges that Metabiota's daily predictions were easier to make in the first two weeks or so.

One of the biggest obstacles is the lack of diagnostic testing, says Gallivan. "Ideally, we would have a test to detect the novel coronavirus immediately and be testing everyone at least once a day," he says. We also don't really know what behaviors people are adopting (who is working from home, who is self-quarantining, who is or isn't washing hands) or what effect it might be having. If you want to predict what's going to happen next, you need an accurate picture of what's happening right now.

It's not clear what's going on inside hospitals, either. Ahmer Inam at Pactera Edge, a data and AI consultancy, says prediction tools would be a lot better if public health data weren't locked away within government agencies, as it is in many countries, including the US. This means an AI must lean more heavily on readily available data like online news. "By the time the media picks up on a potentially new medical condition, it is already too late," he says.

But if AI needs much more data from reliable sources to be useful in this area, strategies for getting it can be controversial. Several people I spoke to highlighted this uncomfortable trade-off: to get better predictions from machine learning, we need to share more of our personal data with companies and governments.

Darren Schulte, an MD and the CEO of Apixio, which has built an AI to extract information from patients' records, thinks that medical records from across the US should be opened up for data analysis. This could allow an AI to automatically identify individuals who are most at risk from Covid-19 because of an underlying condition. Resources could then be focused on the people who need them most. The technology to read patient records and extract life-saving information exists, says Schulte. The problem is that these records are split across multiple databases and managed by different health services, which makes them harder to analyze. "I'd like to drop my AI into this big ocean of data," he says. "But our data sits in small lakes, not a big ocean."

Health data should also be shared between countries, says Inam: "Viruses don't operate within the confines of geopolitical boundaries." He thinks countries should be forced by international agreement to release real-time data on diagnoses and hospital admissions, which could then be fed into global-scale machine-learning models of a pandemic.

Of course, this may be wishful thinking. Different parts of the world have different privacy regulations for medical data. And many of us already balk at making our data accessible to third parties. New data-processing techniques, such as differential privacy and training on synthetic data rather than real data, might offer a way through this debate. But this technology is still being finessed. Finding agreement on international standards will take even more time.

For now, we must make the most of what data we have. Wang's answer is to make sure humans are around to interpret what machine-learning models spit out, making sure to discard predictions that don't ring true. "If one is overly optimistic or reliant on a fully autonomous predictive model, it will prove problematic," he says. AIs can find hidden signals in the data, but humans must connect the dots.

Early diagnosis

As well as predicting the course of an epidemic, many hope that AI will help identify people who have been infected. AI has a proven track record here. Machine-learning models for examining medical images can catch early signs of disease that human doctors miss, from eye disease to heart conditions to cancer. But these models typically require a lot of data to learn from.

A handful of preprint papers posted online in the last few weeks suggest that machine learning can diagnose Covid-19 from CT scans of lung tissue if trained to spot telltale signs of the disease in the images. Alexander Selvikvåg Lundervold at the Western Norway University of Applied Sciences in Bergen, Norway, an expert on machine learning and medical imaging, says we should expect AI to be able to detect signs of Covid-19 in patients eventually. But it is unclear whether imaging is the way to go. For one thing, physical signs of the disease may not show up in scans until some time after infection, making imaging less useful as an early diagnostic.


What's more, since so little training data is available so far, it's hard to assess the accuracy of the approaches posted online. Most image recognition systems, including those trained on medical images, are adapted from models first trained on ImageNet, a widely used data set encompassing millions of everyday images. "To classify something simple that's close to ImageNet data, such as images of dogs and cats, can be done with very little data," says Lundervold. "Subtle findings in medical images, not so much."

That's not to say it won't happen, and AI tools could potentially be built to detect early stages of disease in future outbreaks. But we should be skeptical about many of the claims of AI doctors diagnosing Covid-19 today. Again, sharing more patient data will help, and so will machine-learning techniques that allow models to be trained even when little data is available. For example, few-shot learning, where an AI can learn patterns from only a handful of results, and transfer learning, where an AI already trained to do one thing can be quickly adapted to do something similar, are promising advances, but still works in progress.
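Transfer learning, at its core, means freezing a pretrained feature extractor and fitting only a small new "head" on the scarce data. The sketch below uses plain NumPy with synthetic stand-ins for both the backbone and the images; it illustrates the pattern, not a real medical-imaging pipeline:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a frozen, pretrained feature extractor (e.g. an
# ImageNet-style backbone). One weight is made deliberately strong,
# mimicking a backbone that already learned a relevant feature.
W_pretrained = rng.normal(scale=0.05, size=(64, 8))
W_pretrained[0, 0] = 6.0

def features(x):
    # Frozen: the backbone weights are never updated.
    return np.tanh(x @ W_pretrained)

# A tiny labeled set for the *new* task -- the few-shot regime.
X = rng.normal(size=(20, 64))
y = np.where(X[:, 0] > 0, 1.0, -1.0)

# Transfer learning: fit only a small linear head on the frozen features.
head, *_ = np.linalg.lstsq(features(X), y, rcond=None)

preds = np.sign(features(X) @ head)
print((preds == y).mean())  # training accuracy of the adapted head
```

Because only the 8-parameter head is trained, 20 examples suffice, which is the whole point when labeled CT scans are scarce.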

Cure-all

Data is also essential if AI is to help develop treatments for the disease. One technique for identifying possible drug candidates is to use generative design algorithms, which produce a vast number of potential results and then sift through them to highlight those that are worth looking at more closely. This technique can be used to quickly search through millions of biological or molecular structures, for example.
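The generate-and-sift loop behind such tools is simple to sketch. Here the "molecules" are toy strings and the scoring function is an invented stand-in for a learned efficacy model:

```python
import random

random.seed(7)

ALPHABET = "ABCDEFGH"  # stand-ins for molecular building blocks

def generate(n, length=8):
    """Generate n random candidate 'molecules' (toy representation)."""
    return ["".join(random.choice(ALPHABET) for _ in range(length))
            for _ in range(n)]

def score(candidate):
    """Invented stand-in for a docking/efficacy model."""
    return candidate.count("A") * 2 + candidate.count("B")

# Generate a vast pool, then sift to the most promising few --
# the generate-and-filter loop described above.
pool = generate(100_000)
shortlist = sorted(pool, key=score, reverse=True)[:5]
print(shortlist)  # scientists would assess these few candidates in the lab
```

Real systems replace both pieces with learned models (a generative network and a property predictor), but the shape of the search is the same.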

SRI International is collaborating on such an AI tool, which uses deep learning to generate many novel drug candidates that scientists can then assess for efficacy. This is a game-changer for drug discovery, but it can still take many months before a promising candidate becomes a viable treatment.

In theory, AIs could be used to predict the evolution of the coronavirus too. Inam imagines running unsupervised learning algorithms to simulate all possible evolution paths. You could then add potential vaccines to the mix and see if the viruses mutate to develop resistance. "This will allow virologists to be a few steps ahead of the viruses and create vaccines in case any of these doomsday mutations occur," he says.

It's an exciting possibility, but a far-off one. We don't yet have enough information about how the virus mutates to be able to simulate it this time around.

In the meantime, the ultimate barrier may be the people in charge. "What I'd most like to change is the relationship between policymakers and AI," says Wang. AI will not be able to predict disease outbreaks by itself, no matter how much data it gets. "Getting leaders in government, businesses, and health care to trust these tools will fundamentally change how quickly we can react to disease outbreaks," he says. But that trust needs to come from a realistic view of what AI can and cannot do now, and what might make it better next time.

Making the most of AI will take a lot of data, time, and smart coordination between many different people. All of which are in short supply right now.

Continued here:
AI could help with the next pandemicbut not with this one - MIT Technology Review

The Impact of Python: How It Could Rule the AI World? – insideBIGDATA

Hold your head up high! The rise of artificial intelligence (AI) and machine learning (ML) is poised to bring in a new era of civilization, not destroy it.

Yet there's a fear that technology will displace current workers or tasks, and that's partly true. As researchers predict, the speed at which AI replaces jobs is bound to skyrocket, affecting workers such as factory hands, accountants, radiologists, paralegals, and truckers.

Jobs across the workforce are being shuffled and transformed, thanks to this technological epoch.

But hey, we're still far from the Terminator.

What are the odds?

The fear is understandable: perhaps it is only a matter of time before AI and automation replace the jobs of millions of tech professionals. A 2018 report by the World Economic Forum suggested that around 75 million jobs will be displaced by automation and AI over the next five years. The good news is that even as those jobs disappear, 133 million new roles will be created for AI engineers and AI experts.

Simply put, within the next five years there will be a net gain of roughly 58 million new job roles in the field of AI.

Instead of worrying about AI and automation stealing your job, consider how you need to reshape your career.

AI and ML in the workplace: How prepared are you for the impact?

AI and machine-learning projects are now leading every industry and sector into a future of technological advancement. The question is, what is the best way for you to bring these experiences into reality? Which programming languages can be used for machine learning and AI?

Think ahead: you can start by considering Python for machine learning and AI.

But why Python?

Python is a foundational language for AI. AI projects differ from traditional software projects, however, so it is worth diving deeper into the subject. The crux of building an AI career is learning Python, a programming language loved for being both stable and flexible. It is now widely used for machine learning applications and has become one of the top choices across industries.

Here, we list the reasons Python is the programming language most preferred by AI experts today:

Huge bundle of libraries/frameworks

It is often tricky to choose what fits best when building an ML or AI algorithm. It is crucial to have the right set of libraries and a well-structured environment so that developers can arrive at the best coding solution.

To cut development time, most developers rely on Python libraries and frameworks. A software library contains pre-written code that developers can draw on to solve programming challenges. This is where Python's extensive set of existing libraries plays a major role, giving developers a rich selection of libraries and frameworks to choose from.

With these solutions, developers can build products faster. The development team doesn't need to waste time hunting for the libraries that best suit their project; they can always reach for an existing library and build further changes on top of it.

Holds a strong community and wide popularity

According to the 2018 Stack Overflow developer survey, Python was among the most popular programming languages with developers. This means that for many of the jobs you will find in the market, Python will be one of the skill sets employers hire for.

There are also more than 140,000 online repositories containing custom-built Python software packages. Libraries such as SciPy, NumPy, and Matplotlib, for instance, can easily be installed into a program that runs on Python.

Python was also singled out as 2019's eighth-fastest-growing programming language, with a growth rate of 151% year on year.

These packages help AI engineers detect patterns in large datasets. Python's popularity is so widespread that even Google uses the language to crawl web pages, the animation studio Pixar uses it to produce movies, and Spotify uses it for song recommendations.
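Even a single NumPy call illustrates the kind of pattern detection these packages enable. Here a linear trend is recovered from noisy synthetic data:

```python
import numpy as np

# Synthetic dataset: noisy daily measurements with a hidden linear trend.
rng = np.random.default_rng(0)
days = np.arange(100)
values = 3.0 * days + 10.0 + rng.normal(scale=5.0, size=100)

# One NumPy call recovers the underlying pattern (slope and intercept).
slope, intercept = np.polyfit(days, values, deg=1)
print(f"trend: {slope:.2f} per day, baseline: {intercept:.2f}")
```

The same few lines would take substantially more code in a language without a numerical library of NumPy's maturity, which is much of Python's appeal for ML work.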

Over the past years, Python has grown its community worldwide. You can find multiple platforms and forums where machine-learning solutions are shared. For every problem you face, you'll find that someone has already run into the same issue, so it is easy to find solutions and guidance through the community.

Platform-independent

Platform independence means that code developed on one machine can run on another without any further changes. Python is platform-independent and is supported by Windows, macOS, Linux, and several other platforms.

Python code can also be packaged into a standalone program that is executable on most operating systems without even needing a Python interpreter.

Simple and most loved programming language

Python is said to be the simplest and most consistent programming language, offering readable code. While machine learning involves complex algorithms, Python's concise and readable syntax lets AI professionals write systems that are reliable. Developers can thus focus on solving machine-learning problems instead of wrestling with the technical quirks of the language.

So far, Python is regarded as one of the easiest languages for developers to learn. Some say Python is intuitive compared with other programming languages; others believe the number of libraries Python offers is what makes it suitable for all developers to use.

In conclusion

Python's power and ease of use have catapulted it into becoming one of the core languages for machine-learning solutions. And since AI and ML count among the biggest innovations since the microchip, building a career in this realm will pave the way toward the future.

About the Author

Michael Lyam is a writer, AI researcher, business strategist, and top contributor on Medium. He is passionate about technology and is inspired to find new ways to create captivating content. Michael's areas of expertise are AI, machine learning, data science, and business strategy.


The rest is here:
The Impact of Python: How It Could Rule the AI World? - insideBIGDATA

AI Is Coming for Your Most Mind-Numbing Office Tasks – WIRED

In 2018, the New York Foundling, a charity that offers child welfare, adoption, and mental health services, was stuck in cut-and-paste hell.

Clinicians and admin staff were spending hours transferring text between different documents and databases to meet varied legal requirements. Arik Hill, the charity's chief information officer, blames the data entry drudgery for an annual staff turnover of 42 percent at the time. "We are not a very glamorous industry," says Hill. "We are really only just moving on from paper clinical records."

Since then, the New York Foundling has automated much of this grunt work using what are known as software robots: simple programs hand-crafted to perform dull tasks. Often, the programs are built by recording and mimicking a user's keystrokes, such as copying a field of text from one database and pasting it into another, eliminating hours of repetitive-stress-inducing work.

"It was mind-blowing," says Hill, who says turnover has fallen to 17 percent.

To automate the work, the New York Foundling got help from UiPath, a so-called robotic process automation company. That project didn't require any real machine intelligence.

But in January, UiPath began upgrading its army of software bots to use powerful new artificial intelligence algorithms. It thinks this will let them take on more complex and challenging tasks, such as transcription or sorting images, across more offices. Ultimately, the company hopes software robots will gradually learn how to automate repetitive work for themselves.

In other words, if artificial intelligence is going to disrupt white-collar work, then this may be how it begins.

When paired with robotic process automation, AI significantly expands the number and types of tasks that software robots can perform, says Tom Davenport, a professor who studies information technology and management at Babson College.

Consider a company that needs to summarize long-winded, handwritten notes. AI algorithms that perform character recognition and natural language processing could read the cursive and summarize the text, before a software robot inputs the text into, say, a website. The latest version of UiPath's software includes a range of off-the-shelf machine learning tools. It is also now possible for users to add their own machine learning models to a robotic process.
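The division of labor described here, AI models reading and condensing the text while a deterministic robot does the data entry, can be sketched with stand-in functions. The ocr() and summarize() stubs below are placeholders for real models, not UiPath's API:

```python
# A schematic of the pipeline described above: AI reads and condenses,
# the "robot" step mimics deterministic form-filling keystrokes.

def ocr(scanned_note: bytes) -> str:
    """Stand-in for a handwriting-recognition model."""
    return scanned_note.decode("utf-8")  # pretend the image was read

def summarize(text: str) -> str:
    """Stand-in for an NLP summarizer: keep the first sentence."""
    return text.split(". ")[0] + "."

def robot_fill_form(form: dict, field: str, value: str) -> None:
    """The RPA step: deterministic clicking-and-typing, no ML needed."""
    form[field] = value

note = b"Patient reports improved sleep. Continue current plan. Review in 4 weeks."
form = {}
robot_fill_form(form, "summary", summarize(ocr(note)))
print(form)  # {'summary': 'Patient reports improved sleep.'}
```

The design point is that only the first two steps need machine learning; the last step is the same scripted automation the Foundling already uses.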

With all the AI hype, it's notable that so little has found its way into modern offices. But the automation that is there, which simply repeats a person's clicking and typing, is still useful. The technology is mostly used by banks, telcos, insurers, and other companies with legacy systems; market researcher Gartner estimates the industry generated roughly $1.3 billion in revenue in 2019.


Simple software automation is eliminating some particularly repetitive jobs, such as basic data entry, which are often already done overseas. In call centers, fewer people are needed to fill out forms if software can be programmed to open the right documents, find the right fields, and enter text. At the New York Foundling, Hill's software allowed him to redirect eight workers to other tasks.

But Davenport says software robots that use AI could displace more jobs, especially if we head into a recession. "Companies will use it for substantial headcount and cost reductions," he says.

Erik Brynjolfsson, director of the MIT Initiative on the Digital Economy and the author of several books exploring the impact of technology on the workforce, says robotic process automation will mostly affect middle-skilled office workers, meaning admin work that requires some training.

But it won't happen overnight. He says it took many years for simple software robots, which are essentially descended from screen-scrapers and simple coding tools, to affect office work. "The lesson is just how long it takes for even a relatively simple technology to have an impact on business, because of the hard work it takes to implement it reliably in complex environments," Brynjolfsson notes.

Originally posted here:
AI Is Coming for Your Most Mind-Numbing Office Tasks - WIRED

This is how the CDC is trying to forecast coronaviruss spread – MIT Technology Review

Every year the US Centers for Disease Control and Prevention holds a competition to see who can accurately forecast the flu. Research teams around the country vie with different methods, and the best performers win funding and a partnership with the agency to improve the nations preparation for the next season.

Now the agency is tapping several dozen teams to adapt their techniques to forecast the spread of the coronavirus in an effort to make more informed decisions. Among them is a group at Carnegie Mellon University that, over the last five years, has consistently achieved some of the best results. Last year, the group was designated one of two National Centers of Excellence for Influenza Forecasting and asked to lead the design of a community-wide forecasting process.

Roni Rosenfeld, head of the group and of CMU's machine-learning department, admits he was initially reluctant to take on the coronavirus predictions. To a layperson, it doesn't seem as if forecasting the two diseases should be so different, but doing so for the novel outbreak is significantly harder. Rosenfeld worried about whether his predictions would be accurate, and thus whether they would even be useful. In the end, he was convinced to forge ahead anyway.

"People act on the basis of forecasting models, whether they are on paper or in their heads," he says. "You're better off quantifying these estimations so you can discuss them rationally as opposed to making them based on intuition."


The lab uses three methods to pinpoint the rise and fall of cases during flu season. The first is what's known as a nowcast: a prediction of the current number of people infected. The lab gathers recent and historical data from the CDC and other partner organizations, including flu-related Google searches, Twitter activity, and web traffic on the CDC, medical sites, and Wikipedia. Those data streams are then fed into machine-learning algorithms to make predictions in real time.
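In spirit, a nowcast is a regression from proxy signals to case counts. A minimal sketch with invented numbers follows; real systems use far richer data and models:

```python
import numpy as np

# Invented weekly proxy signals: [search volume, site visits, tweet count],
# with the case counts later reported for those same weeks.
signals = np.array([
    [120.0, 300.0, 45.0],
    [150.0, 340.0, 60.0],
    [200.0, 420.0, 90.0],
    [260.0, 500.0, 130.0],
])
reported_cases = np.array([990.0, 1210.0, 1600.0, 2060.0])

# Fit a linear nowcast model, cases ~ signals . weights, by least squares.
weights, *_ = np.linalg.lstsq(signals, reported_cases, rcond=None)

# "Nowcast" the current week from today's signals, for which the
# official count is not yet available.
this_week = np.array([290.0, 540.0, 150.0])
print(round(float(this_week @ weights)))  # -> 2290 on this toy data
```

The fragility described later in the article shows up right here: the learned weights are only as good as the historical pairing of signals and counts, which a panic-driven surge in searches would break.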

The second and third are both proper forecasts: a prediction of what's to come. One is based on machine learning and the other on crowdsourced opinion. Predictions include trends expected up to four weeks ahead, as well as important milestones like when the season will peak and the maximum number of expected cases. Such information helps both the CDC and health-care providers ramp up capacity and prepare in advance.

The machine-learning forecast takes into account the nowcast as well as additional historical data from the CDC. There are 20 years of robust data on flu seasons in the US, providing ample fodder for the algorithms.

In contrast, the crowdsourcing method taps into a group of volunteers. Every week, experts and non-experts, who are found to do just as well with a little participation experience, are asked to log on to an online system and review a chart showing the trajectory of past and current flu seasons. They are then asked to complete the current season's curve, projecting how many more flu cases there will be over time. Though people don't make very good predictions individually, in aggregate they are often just as good as the machine-learning forecast.
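Aggregating the crowd can be as simple as taking a median, which is robust to wild individual guesses. A toy illustration with invented forecasts:

```python
import statistics

# Invented individual forecasts of peak weekly flu cases (in thousands).
# No single guess is reliable, but the aggregate tends to be.
forecasts = [310, 290, 400, 275, 330, 500, 305, 280, 320, 295]

consensus = statistics.median(forecasts)
print(consensus)  # 307.5 -- robust to the outlying guesses of 400 and 500
```

A mean would be dragged upward by the two high outliers; the median simply ignores them, which is one reason aggregated human forecasts hold up so well.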


Over the years, Rosenfeld's team has fine-tuned each of its methods to predict the trajectory of the flu with near-perfect accuracy. At the end of each flu season, the CDC always retroactively updates the final numbers, giving the CMU lab a chance to see how its projections stack up. The researchers are now adapting all the techniques for Covid-19, but each will pose distinct challenges.

For the machine-learning-based nowcast, many of the data sources will be the same, but the prediction model will be different. The algorithms will need to learn new correlations between the signals in the data and the ground truth. One reason: there's far greater panic around coronavirus, which causes a completely different pattern of online activity. People will look for coronavirus-related information at much higher rates, even if they feel fine, making it more difficult to tell who may already have symptoms.

In a pandemic situation, there is also very little historical data, which will affect both forecasts. The flu happens on a highly regular cycle each year, while pandemics are erratic and rare. The last pandemic, H1N1 in 2009, also had very different characteristics, primarily affecting younger rather than elderly populations. The Covid-19 outbreak has been precisely the opposite, with older patients facing the highest risk. On top of that, the surveillance systems for tracking cases weren't fully developed back then.

"That's the part that I think is going to be the most challenging," says Rosenfeld, because machine-learning systems, by their nature, learn from examples. He's hopeful that the crowdsourcing method may be more resilient. On the one hand, little is known about how it will fare in pandemic forecasting. On the other hand, "people are actually quite good at adjusting to novel circumstances," he says.

Rosenfeld's team is now actively working on ways to make these predictions as good as possible. Flu-testing labs are already beginning to transition to Covid-19 testing and reporting results to the CDC. The CMU lab is also reaching out to other organizations to get as much rich and accurate data as possible, things like anonymized, aggregated statistics from electronic health records and purchasing patterns for anti-fever medication, to find sharper signals to train its algorithms.

To compensate for the lack of historical data from previous pandemics, the team is relying on older data from the current pandemic. It's looking to incorporate data from countries that were hit earlier and will update its machine-learning models as more accurate data is retroactively posted. At the end of every week, the lab will get a report from the CDC with the most up-to-date trajectory of cases in the US, including revisions to numbers from previous weeks. The lab will then revise its models to close the gaps between the original predictions and the rolling statistics.

Rosenfeld worries about the limitations of these forecasts. There is far more uncertainty than what he's usually comfortable with: for every prediction the lab provides to the CDC, it will include a range of possibilities. "We're not going to tell you what's going to happen," he says. "What we tell you is what are the things that can happen and how likely is each one of them."

Even after the pandemic is over, the uncertainty won't go away. "It will be very difficult to tell how good our methods are," he says. "You could be accurate for the wrong reasons. You could be inaccurate for the wrong reasons. Because you have only one season to test it on, you can't really draw any strong, robust conclusions about your methodology."

But in spite of all these challenges, Rosenfeld believes the work will be worthwhile in informing the CDC and improving the agency's preparation. "I can do the best I can now," he says. "It's better than not having anything."

See the original post here:
This is how the CDC is trying to forecast coronaviruss spread - MIT Technology Review

Chelsea Manning Is Ordered Released From Jail – The New York Times

WASHINGTON - A federal judge on Thursday ordered the release of Chelsea Manning, the former Army intelligence analyst who in 2010 leaked archives of military and diplomatic documents to WikiLeaks, and who was jailed last year for refusing to testify before a grand jury that is investigating the organization and its founder, Julian Assange.

The release came one day after Ms. Manning tried to kill herself and was hospitalized, according to her lawyers.

In a brief opinion, a Federal District Court judge overseeing the matter, Anthony J. Trenga, said that he also dismissed on Thursday the grand jury that Ms. Manning was refusing to testify before after finding that its business had concluded.

"The court finds that Ms. Manning's appearance before the grand jury is no longer needed, in light of which her detention no longer serves any coercive purpose," Judge Trenga wrote.

However, he said, Ms. Manning would still have to pay $256,000 in fines for her defiance of the subpoena. The judge wrote that enforcement of the accrued, conditional fines "would not be punitive but rather necessary to the coercive purpose of the court's civil contempt order."

Ms. Manning was originally jailed a year ago for contempt of court after initially refusing to testify about WikiLeaks and Mr. Assange, but was briefly released when the first grand jury expired. Prosecutors then obtained a new subpoena, and she was locked up again for defying it in May. The moves raise the possibility that prosecutors could start over a third time.

But supporters of Ms. Manning had believed that the grand jury was not set to terminate on March 12, raising the prospect that prosecutors and the judge decided to shut it down early to bring the matter to a close.

"It is my devout hope that she is released to us shortly, and that she is finally given a meaningful opportunity to rest and heal that she so richly deserves," said her lawyer, Moira Meltzer-Cohen.

Joshua Stueve, a spokesman for the office of the U.S. attorney for the Eastern District of Virginia, declined to comment.

The archives that Ms. Manning provided to WikiLeaks in 2010, when she was an Army intelligence analyst posted in Iraq, helped vault the antisecrecy organization and Mr. Assange to global fame. The events took place years before their image and actions evolved with the publication of Democratic emails stolen by Russian hackers during the 2016 election.

Ms. Manning admitted sending the files to WikiLeaks in a court-martial trial. She also confessed to interacting online with someone who was probably Mr. Assange, but she said she had acted on principle and was not working for WikiLeaks.

Testimony showed that she had been deteriorating, mentally and emotionally, during the period when she downloaded the documents and sent them to WikiLeaks. Then known as Pfc. Bradley Manning, she was struggling with gender dysphoria under conditions of extraordinary stress and isolation while deployed to the Iraq war zone.

She was sentenced to 35 years in prison, the longest sentence by far in an American leak case. After her conviction, she changed her name to Chelsea and announced that she wanted to undergo gender transition, but was housed in a male military prison and twice tried to commit suicide in 2016.

In January 2017, President Barack Obama commuted most of the remainder of her sentence shortly before he left office. But she was swept back up into legal trouble last year when prosecutors investigating Mr. Assange subpoenaed her to testify before a grand jury about their interactions.

Although prosecutors granted immunity for her testimony, Ms. Manning had vowed not to cooperate in the investigation, saying she had ethical objections, and she was placed in civil detention for contempt of court.

Separately last year, the Justice Department unsealed criminal charges against Mr. Assange, who was living in the Ecuadorean Embassy in London. Prosecutors initially charged him with a narrow hacking conspiracy offense, accusing him of agreeing to try to help Ms. Manning crack a password that would have let her log onto a military computer system under a different user account, covering her tracks.

Read more here:
Chelsea Manning Is Ordered Released From Jail - The New York Times