Assessing the intersection of open source and AI – VentureBeat

Open source technology has been a driving factor in many of the most innovative developments of the digital age, so it should come as no surprise that it has made its way into artificial intelligence as well.

But with trust in AI's impact on the world still uncertain, the idea that open source tools, libraries, and communities are building AI projects in the usual wild-west fashion is creating yet more unease among some observers.

Open source supporters, of course, reject these fears, arguing that there is just as little oversight of the corporate-dominated activities of closed platforms. In fact, open source can be more readily tracked and monitored because it is, well, open for all to see. And this leaves us with the same question that has bedeviled technological advances through the ages: Is it better to let these powerful tools grow and evolve as they will, or should we try to control them? And if so, how, and to what extent?

If anything, says Analytics Insight's Adilin Beatrice, open source has fueled the advance of AI by streamlining the development process. There is no shortage of free, open source platforms capable of implementing even complex types of AI like machine learning, and this serves to expand the scope of AI development in general and allows developers to make maximum use of available data. Tools like Weka, for instance, let coders quickly integrate data mining and other functions into their projects without having to write it all from scratch. Google's TensorFlow, meanwhile, is one of the most popular end-to-end machine learning platforms on the market.

And just as we've seen in other digital initiatives, like virtualization and the cloud, companies are starting to mix-and-match various open source solutions to create a broad range of intelligent applications. Neuron7.ai recently unveiled a new field service system capable of providing everything from self-help portals to traffic optimization tools. The system leverages multiple open AI engines, including TensorFlow, that allow it to not only ingest vast amounts of unstructured data from multiple sources, such as CRM and messaging systems, but also encapsulate the experiences of field techs and customers to improve accuracy and identify additional means of automation.

One would think that, with open source technology playing such a significant role in the development of AI, it would be at the top of the agenda for policy-makers. But according to Alex Engler of the Brookings Institution, it is virtually off the radar. While the U.S. government has addressed open source with measures like the Federal Source Code Policy, more recent discussions on possible AI regulations mention it only in passing. In Europe, Engler says open source regulations are devoid of any clear link to AI policies and strategies, and the most recently proposed updates to these measures do not mention open source at all.

Engler adds that this lack of attention could produce two negative outcomes. First, it could result in AI initiatives failing to capitalize on the strengths that open source software brings to development, including key capabilities like increasing the speed of development itself and reducing bias and other unwanted outcomes. Second, there is the potential that dominance in open source solutions could lead to dominance in AI. Open source tends to create de facto standards in the tech industry, and while top open source releases from Google, Facebook, and others are freely available, the vast majority of projects they support are created within the company that developed the framework, giving that company an advantage in the resulting ecosystem.

This, of course, leads us back to the same dilemma that has plagued emerging technologies from the beginning, says the IEEE's Ned Potter. Who should draw the roadmap for AI to ensure it has a positive impact on society? Tech companies? The government? Academia? Or should it simply be democratized, letting the market sort it out? Open source supporters tend to favor a free hand, of course, with the idea that continual scrutiny by the community will organically push bad ideas to the bottom and elevate good ideas to the top. But this still does not guarantee a positive outcome, particularly as AI becomes accessible to the broader public.

In the end, of course, there are no guarantees. If we've learned anything from the past, it is that mistakes are just as likely to come from private industry as from government regulators or individual operators. But there is a big difference between watching and regulating. At the very least, there should be mechanisms in place to track how open source technologies are influencing AI development, so that someone has the ability to give a heads-up if things are heading in the wrong direction.

What Is Machine Learning, and How Does It Work? Here’s a Short Video Primer – Scientific American

Machine learning is the process by which computer programs grow from experience.

This isn't science fiction, where robots advance until they take over the world.

When we talk about machine learning, we're mostly referring to extremely clever algorithms.

In 1950, mathematician Alan Turing argued that it's a waste of time to ask whether machines can think. Instead, he proposed a game: a player has two written conversations, one with another human and one with a machine. Based on the exchanges, the player has to decide which is which.

This imitation game would serve as a test for artificial intelligence. But how would we program machines to play it?

Turing suggested that we teach them, just like children. We could instruct them to follow a series of rules, while enabling them to make minor tweaks based on experience.

For computers, the learning process just looks a little different.

First, we need to feed them lots of data: anything from pictures of everyday objects to details of banking transactions.

Then we have to tell the computers what to do with all that information.

Programmers do this by writing lists of step-by-step instructions, or algorithms. Those algorithms help computers identify patterns in vast troves of data.

Based on the patterns they find, computers develop a kind of model of how that system works.

For instance, some programmers are using machine learning to develop medical software. First, they might feed a program hundreds of MRI scans that have already been categorized. Then, they'll have the computer build a model to categorize MRIs it hasn't seen before. In that way, the medical software could spot problems in patient scans or flag certain records for review.
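
The feed-data, build-model, predict loop described above can be sketched in a few lines of Python. This is purely illustrative: scikit-learn stands in for whatever tooling a real project would use, and random feature vectors stand in for actual MRI data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Step 1: historical data that has already been categorized.
# 200 hypothetical "scans", each reduced to 16 numeric features.
X_train = rng.normal(size=(200, 16))
y_train = (X_train[:, 0] > 0).astype(int)   # known labels: 0 = clear, 1 = flag

# Step 2: an algorithm builds a model from the patterns in that data.
model = LogisticRegression().fit(X_train, y_train)

# Step 3: the model categorizes "scans" it has not seen before.
X_new = rng.normal(size=(5, 16))
print(model.predict(X_new))   # five 0/1 labels for the unseen examples
```

The same three steps apply whatever the data: the model never sees rules for "normal" versus "flagged", it infers them from the labeled examples.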

Complex models like this often require many hidden computational steps. For structure, programmers organize all the processing decisions into layers. That's where the term deep learning comes from.

These layers mimic the structure of the human brain, where neurons fire signals to other neurons. That's why we also call them neural networks.
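
The layer idea can be illustrated with plain NumPy. The weights below are random, where a real network would learn them from data; each layer simply transforms its input and passes the result forward.

```python
import numpy as np

rng = np.random.default_rng(1)

def layer(x, w):
    # One "layer": a linear step followed by a simple nonlinearity (ReLU),
    # loosely analogous to neurons passing signals on to other neurons.
    return np.maximum(0, x @ w)

x = rng.normal(size=(1, 8))    # one input example with 8 features
w1 = rng.normal(size=(8, 4))   # weights for a hidden layer: 8 -> 4
w2 = rng.normal(size=(4, 2))   # weights for an output layer: 4 -> 2

hidden = layer(x, w1)          # untrained here; learning would adjust w1, w2
output = hidden @ w2
print(output.shape)            # (1, 2)
```

Stacking more such layers, and training the weights, is what makes a network "deep".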

Neural networks are the foundation for services we use every day, like digital voice assistants and online translation tools. Over time, neural networks improve in their ability to listen and respond to the information we give them, which makes those services more and more accurate.

Machine learning isn't just something locked up in an academic lab, though. Lots of machine learning algorithms are open source and widely available. And they're already being used for many things that influence our lives, in large and small ways.

People have used these open-source tools to do everything from training their pets to creating experimental art to monitoring wildfires.

They've also done some morally questionable things, like creating deepfakes: videos manipulated with deep learning. And because the algorithms that machines use are written by fallible human beings, they can contain biases. Algorithms can carry the biases of their makers into their models, exacerbating problems like racism and sexism.

But there is no stopping this technology. And people are finding more and more complicated applications for it, some of which will automate things we are accustomed to doing for ourselves, like using neural networks to help power driverless cars. Some of these applications will require sophisticated algorithmic tools, given the complexity of the task.

And while that may be down the road, the systems still have a lot of learning to do.

Getting machine learning into production is hard – the MCubed webcast is here for support – DevClass

The MCubed webcast returns this week to tackle a whole other beast: continuous delivery for machine learning. Join us on October 7th at 11am BST (that's 12 o'clock for you CEST peeps) to get into the nitty-gritty of the operational side of ML.

If you've ever worked with an application that uses some form of machine learning, you'll know that some component or other is always evolving: if it isn't the training data that's changing, you'll surely come across a model that needs updating, and if all is well in those areas, there's a good chance a feature request is waiting for implementation, so code modifications are due.

In regular software projects, we already know how to automatically take care of changes and make sure that we have a way of keeping our systems up to date without (too many) manual steps. The number of variables at play in ML, however, makes it really tricky to come up with similar processes in that discipline, which is why this is often cited as one of the major roadblocks to getting machine learning-based applications into production.

For the second episode of our free MCubed webcast on October 7th, we therefore decided to sit down with you and take an in-depth look at how to tackle the operational side of ML. Joining in will be DevOps and data expert Danilo Sato, who has helped quite a few organisations set up a comprehensible continuous delivery (CD) workflow for their machine learning projects.

You might know Mr Sato from a popular article series on CD4ML, though his work reaches far beyond that. In his 2014 book DevOps in Practice: Reliable and Automated Software Delivery, he shared insights from working on all sorts of platform modernisation and data engineering projects that also informed some of the good practices he has recently investigated.

On the webcast, Sato will discuss how the principles of continuous delivery apply to machine learning applications, and walk you through the technical components necessary to implement a system that takes care of CD for your ML project. He'll cover the differences between MLOps and CD4ML, take a closer look at the peculiarities of version control and artifact repositories in ML projects, give you some tips on what to observe, and introduce you to the many different ways a model can be deployed.

And in case you have all of this figured out already, Danilo Sato will provide a look into the future of machine learning infrastructure as well as give you some food for thought on open challenges such as explainability and auditability.

The MCubed webcast on October 7th will start at 11am BST (12pm CEST) with a roundup of the latest in machine learning-related software development news, but then it's straight on to the talk.

Don't forget to let us know if you have any topics you'd like to learn more about, or if you are interested in practical experience reports from specific industries. We really want to make these webcasts worth your time, so every hint helps. Also, reach out if you want to share some tricks yourself; we always love to hear from you!

Register here to receive a quick reminder on the day. We're really looking forward to seeing you on Thursday!

Immunis.AI Chosen by Amazon Web Services to Showcase its Cloud-Based Genomic Pipeline for Machine Learning – Business Wire

ROYAL OAK, Mich.--(BUSINESS WIRE)--Immunis.AI, Inc., an immunogenomics platform company developing noninvasive blood-based tests to optimize patient care, today announced that Amazon Web Services (AWS) will showcase the company's cloud-based genomic pipeline for machine learning. In collaboration with Mission Cloud Services, the platform will be highlighted in a virtual event, Behind the Innovation, hosted by AWS today.

Immunis.AI engaged Mission, a partner with deep life science expertise, to design an AWS architecture to leverage Amazon S3 alongside a backend data pipeline using Amazon EC2 and Amazon EBS infrastructure. The challenge Immunis.AI faced was data ingestion and real-time analytics of its large immunotranscriptomic data sets and parallel processing of thousands of samples through its machine learning pipelines. Through the collaboration with Mission and AWS, the ingestion of data by Immunis.AI, which took two weeks to finish manually, can be completed within hours.

The virtual event will be held Tuesday, October 5th, from 9 to 10:30 a.m. Pacific Time (12 to 1:30 p.m. Eastern Time). For more information, or to register, click here.

Machine learning presents its own set of unique challenges, and managing large data sets is a major problem in the field of genomics. Working with Mission and AWS to design an architecture to streamline our data ingestion and analytics has enabled us to drastically accelerate development of our immunogenomic tests to improve the diagnosis and treatment of cancer patients, said Geoffrey Erickson, a founder and Senior VP of Corporate Development at Immunis.AI, who will be presenting at the event. We are pleased to have been chosen by AWS to highlight our architecture and our powerful partnership and the life-changing outcomes it enables.

While it is still evolving, Mission provided Immunis.AI with a tested and proven blueprint for a viable research-oriented genomic platform, all backed by AWS and ready to scale quickly and economically, said Jonathan LaCour, Chief Technology Officer & Sr. Vice President, Service Delivery. We are pleased to support Immunis.AI's important mission to develop tests that can improve the lives of cancer patients, and proud of the Mission-built AWS infrastructure that is helping them.

Immunis.AI continues to leverage its successful blueprint across several clinical studies, as it develops and plans to commercialize its products. Mission will also continue to help Immunis.AI with data modernization, including data lake and analytics initiatives on AWS.

About Immunis.AI, Inc.

IMMUNIS.AI is a privately held immunogenomics company with a patented liquid biopsy platform that offers unique insights into disease biology and individualized assessment. The Intelligentia platform combines the power of the immune system, RNAseq technology and machine learning (ML) for the development of disease-specific signatures. This proprietary method leverages the immune system's surveillance apparatus to overcome the limitations of circulating tumor cells (CTCs) and cell-free DNA (cfDNA). The platform improves detection of early-stage disease, at the point of immune escape, when there is the greatest opportunity for cure. For more information, please visit our website: https://immunis.ai/

Come And Do Research In Particle Physics With Machine Learning In Padova! – Science 2.0

I used some spare research funds to open a six-month internship to help my research group in Padova, and the call is open for applications at this site (the second in the list right now, the number is #23584). So here I wish to answer a few questions from potential applicants, namely:

1) Can I apply?
2) When is the call deadline?
3) What is the salary?
4) What is the purpose of the position? What can I expect to gain from it?
5) What will I be doing if I get selected?

Answers:

1 - You can apply if you have completed a master's degree in a scientific discipline (physics, astronomy, mathematics, statistics, computer science) not earlier than one year ago. You are supposed to possess some programming skills, although your wish to learn is more important than your knowledge base.

2 - The deadline is October 16. The application process is simple, but you want to look into the electronic procedure early on to verify that you have the required documents.

3 - The salary is in line with the wage of Ph.D. students enrolled in the course in Padova. I do not know the net amount after taxation, but it is of the order of 1,100 euros per month. This is not a lot of money, but it is enough for a student to live on in Padova. You won't get rich, but your focus should be on gaining experience and titles for your future career!

4 - The purpose of the internship is to endow the recipient with skills in machine learning applied to fundamental physics research. Ideally, the recipient would be interested in applying for a Ph.D. at the University of Padova after finishing the internship, and the research work would be a very useful asset for his or her CV, along with the probable authorship of a publication on machine learning applications in particle physics; but the six months of work may also be good training for graduates who wish to move out of academia and pursue a career in industry. The point is that what we will be working on together is a topic at the real bleeding edge of innovative applications of deep learning, something that will be invaluable in the future both in research and in industry. I will explain more about what this is below.

5 - If you get selected, you will join my research team, which is embedded in the MODE collaboration, of which I am the leader. We want to use differentiable programming techniques (available through Python libraries such as PyTorch or TensorFlow) to create software that performs end-to-end optimization of complex instruments used for particle physics research or for industrial applications such as muon tomography or proton therapy.

More in detail, we are currently tackling an "easy" application of deep-learning-powered end-to-end optimization, which consists in finding the most advantageous layout of detection elements in a muography apparatus. Muon tomography consists in detecting the flow of cosmic-ray muons in and out of an unknown volume, of which we wish to determine the inner material distribution. This has applications in volcanology (where is the magma?), archaeology (studying hidden chambers in ancient buildings or pyramids), foundries (where is the melted material?), nuclear waste disposal (is there uranium in this box of scrap metal?), and detecting defects in pipelines or other industrial equipment.

To find the optimal layout, we treat the geometry, technology and cost of the detector as parameters, and we find the optimal solution by maximizing a utility function connected to how well the imaging is performed in a given time, the cost of the apparatus, and other constraints. So this is a relatively simple application of differentiable programming: you can pull it off if you model the various elements of the problem with continuous functions. If we manage to create a good software product we will share it freely, and then move on to some harder detector optimization problem (there is a long list of candidates). In parallel, we are starting to study the optimization of calorimeters for particle detectors, which is a much, much more ambitious project that a fresh new Ph.D. student working with me, Federico Nardi, will investigate, again within the MODE collaboration.
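
As a very rough illustration of the kind of gradient-based layout optimization described above (and emphatically not the MODE collaboration's actual software), one can let PyTorch adjust a handful of hypothetical detector-plane positions to minimize a smooth, hand-made surrogate loss:

```python
import torch

# Positions of three hypothetical detector planes along one axis.
z = torch.tensor([0.1, 0.2, 0.3], requires_grad=True)
opt = torch.optim.Adam([z], lr=0.05)

target_spacing = 1.0   # pretend imaging works best with ~1 unit between planes
for step in range(200):
    opt.zero_grad()
    spacing = z[1:] - z[:-1]
    # Surrogate "utility": even coverage (spacing near the target) traded off
    # against a small cost term proportional to the apparatus length.
    loss = ((spacing - target_spacing) ** 2).sum() + 0.01 * (z[-1] - z[0])
    loss.backward()
    opt.step()

print(z.detach())  # planes have spread out to roughly the target spacing
```

Because every term is a continuous function of the layout parameters, gradients flow from the utility all the way back to the geometry, which is exactly the point of the differentiable-programming approach.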

So, if you are a bright master's graduate and you want to deepen your skills in machine learning, please consider applying! If we select you, we will have loads of fun attacking these hard problems together!

If you need more information, please feel free to email me at dorigo (at) pd (dot) infn (dot) it. Thank you for your interest, and share this information with other potential applicants!

Why automation, artificial intelligence and machine learning are becoming increasingly critical for SOC operations – Security Magazine

The AI & Machine Learning community stands ready to help in the climate crisis battle – ITProPortal

The world's on fire.

For three days in August, 7 billion tons of rain fell on the peak of Greenland, which is not just the largest amount since records began 71 years ago, but the first time we know of that rain, not snow, fell on the country's highest peak. Wildfires in Siberia broke another terrifying record for annual fire-related emissions of carbon dioxide, with almost 19,300 square miles (50,000 square kilometers) of vegetation lost to the fires. And in the same month, the latest (sixth) scientific report from the Intergovernmental Panel on Climate Change sounded the emergency alarm yet again on the need for strong and sustained reductions in emissions of carbon dioxide and other greenhouse gases to try to save a common future for us all.

We might have known this for a while, but the rate of extreme weather events and the accumulation of more and more data about the global emergency is now inescapable. It's genuinely no exaggeration to say we're in a fight for survival. There are also more and more financial and business knock-on effects, which have already cost global economies uncounted billions, indeed more: Capgemini research shows that in the past twenty years there were 7,348 major recorded disaster events, claiming 1.23 million lives, affecting 4.2 billion people and resulting in approximately $3 trillion in global economic losses.

For sure, we're going to need to do a lot more than just recycle our soda cans or eat a little less meat each week (though we need to keep doing all that, too); scientists are now talking of the need for serious, large-scale geoengineering to try to save us. Climate change should be on every organization's agenda, but the IT world, which (rightly) gets criticized for its less-than-stellar record on exorbitant electricity consumption as part of general economic activity (which is rising again), has a particular responsibility to help.

Why? Because we burn a lot of kilowatts, but also because a lot of smart people work in our world, many of whom are deeply concerned about the threat of anthropogenic climate change. As a global citizen and IT professional, I feel this concern too, and I also work in the AI (artificial intelligence) world. So I asked myself what AI and AI professionals can do to help, and this is what I found.

At the top level, AI provides powerful tools to researchers, engineers, chemists, biologists, town planners and policymakers – in short, everyone trying to make a positive difference. All these people need the very best, most recent, most granular data to make their interventions or design remediation techniques, which will absolutely include carbon capture, greener transport and new post-carbon industries and ways of living. But AI, or more specifically machine learning, is also already lending a hand in a variety of practical ways and climate crisis use cases.

The potential of machine learning in this space has already been called out by the EU, which, in a report on the potential of AI to achieve its ambitious Green Deal targets, noted that "the transformative potential of Artificial Intelligence to contribute to the achievement of the goals of a green transition have been increasingly and prominently highlighted [due to its ability to] accelerate the analysis of large amounts of data to increase our knowledge base, allowing us to better understand and tackle environmental challenges, especially around relevant information for environmental planning, decision-making, management and monitoring of the progress of environmental policies." And as Brussels also points out, AI-generated information could also help consumers and businesses adapt towards more sustainable behavior, among other potential benefits.

That same study does point out the potential downside: AI could also contribute negatively through unforeseen consequences. For example, automatically more efficient products might cause users to give up control over their energy consumption and over-consume.

Yes, but we in the machine learning sector are very conscious of these issues, as are national governments and other legislators. And I am convinced that AI can, and should, play a central and positive role in helping put out the global fire; it could also be used by companies to start incorporating the impact of climate change into their future planning processes.

In our own modest way, we're eating our own dog food at the company I work for, H2O.ai. Our technology has been used for a number of positive climate projects and initiatives, including our work with Wildbook, a non-profit focused on wildlife conservation and research, which is blending structured wildlife research with AI, citizen science, and computer vision to speed population analysis and develop new insights to help fight the extinction of threatened species like the elephant.

Could we be doing more? Yes, and we need to. Could we all be doing more? Yes, and we need to. I believe that the climate emergency can be controlled, and that a climate-AI culture emerging among technologists, policymakers, domain experts, philosophers and the open-source community to optimize the design and deployment of helpful AI tools could really help.

Mark Bakker, Regional Lead Benelux, H2O.ai

What is data poisoning and how do we stop it? – TechRadar

The latest trend in business is the adoption of machine learning models to bolster AI systems. However, as this process becomes more and more automated, it naturally puts these systems at greater risk from emerging threats to the function and integrity of AI, including data poisoning.

About the author

Spiros Potamitis is Senior Data Scientist at Global Technology Practice at SAS.

Below, discover what data poisoning is, how it threatens business systems, and finally how to defeat it and win the fight against those who wish to manipulate data for their own gain.

Before we discuss data poisoning, it's worth revisiting how machine learning models work. We train these models to make predictions by feeding them historical data. From these data, we already know the outcome that we would like to predict in the future and the characteristics that drive this outcome. These data teach the model to learn from the past, and the model can then use what it has learned to predict the future. As a rule of thumb, when more data are available to train the model, its predictions will be more accurate and stable.

AI systems that include machine learning models are normally developed by experienced data scientists. They thoroughly examine and explore the data, remove outliers and run several sanity and validation checks before, during and after the model development process. This means that, as far as possible, the data used for training genuinely reflect the outcomes that the developers want to achieve.

However, what happens when this training process is automated? This rarely occurs during development, but there are many occasions when we want models to continuously learn from new operational data: on-the-job learning. At that stage, it would not be difficult for someone to feed misleading data directly into the AI system to make it produce faulty predictions.

Consider, for example, Amazon's or Netflix's recommendation engines. Think how easy it is to change the recommendations you receive by buying something for someone else. Now consider that it is possible to set up bot-based accounts to rate programs or products millions of times. This will clearly change ratings and recommendations, and poison the recommendation engine.

This is known as data poisoning. It is particularly easy if those involved suspect that they are dealing with a self-learning system, like a recommendation engine. All they need to do is make their attack clever enough to pass the automated data checks, which is not usually very hard.

The other issue with data poisoning is that it can be a long, slow process. Hackers can afford to take their time to change the data by feeding in a few results at a time. Indeed, this is often more effective, because it is harder to detect than a massive influx of data at a single point in time, and significantly harder to undo.
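
A toy simulation makes the slow-drift attack concrete. The numbers below are invented: genuine users rate a mediocre item, then bot accounts trickle in a few five-star ratings per "day" until the average has quietly climbed.

```python
import random

random.seed(42)

# 1,000 genuine ratings for a mediocre item (scores of 1-3).
genuine = [random.choice([1, 2, 3]) for _ in range(1000)]
ratings = list(genuine)
honest_avg = sum(ratings) / len(ratings)
print(round(honest_avg, 2))     # about 2.0

# A poisoning bot adds just ten 5-star ratings per "day" for 100 days.
for day in range(100):
    ratings.extend([5] * 10)

poisoned_avg = sum(ratings) / len(ratings)
print(round(poisoned_avg, 2))   # drifts to about 3.5, with no single spike
```

No single day's injection looks anomalous, which is why simple volume checks miss this kind of attack.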

Fortunately, there are steps that organizations can take to prevent data poisoning. These include:

1. Establish an end-to-end ModelOps process to monitor all aspects of model performance and data drift, and to closely inspect system function.

2. Establish a business flow for the automatic re-training of models. This means that your model will have to go through a series of checks and validations by different people in the business before the updated version goes live.

3. Hire experienced data scientists and analysts. There is a growing tendency to assume that everything technical can be handled by software engineers, especially with the shortage of qualified and experienced data scientists. However, this is not the case. We need experts who really understand AI systems and machine learning algorithms, and who know what to look for when we are dealing with threats like data poisoning.

4. Use open source with caution. Open-source data are very appealing because they provide access to more data to enrich existing sources. In principle, this should make it easier to develop more accurate models. However, these data are just that: open. This makes them an easy target for fraudsters and hackers. The recent attack on PyPI, which flooded it with spam packages, shows just how simple this can be.

It is vital that businesses follow the recommendations above so as to defend against the threat of data poisoning. However, there remains a crucial means of protection that often gets overlooked: human intervention. While businesses can automate their systems as much as they would like, it is paramount that they rely on the trained human eye to ensure effective oversight of the entire process. This prevents data poisoning from the outset, allowing organizations to innovate through insights, with their AI assistants beside them.
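
As one hedged sketch of what the drift monitoring in step 1 might look like in practice, a two-sample Kolmogorov-Smirnov test from SciPy can flag incoming batches whose feature distribution has moved away from the training data. The threshold and the synthetic data here are illustrative only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Reference distribution: one feature as seen at training time.
train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)

def drifted(new_batch, reference, alpha=0.01):
    """Flag a batch whose distribution differs from the reference."""
    return stats.ks_2samp(new_batch, reference).pvalue < alpha

normal_batch = rng.normal(loc=0.0, scale=1.0, size=500)
shifted_batch = rng.normal(loc=0.8, scale=1.0, size=500)  # e.g. poisoned input

print(drifted(normal_batch, train_feature))   # usually False: same distribution
print(drifted(shifted_batch, train_feature))  # True: the distribution has moved
```

A flagged batch would then be routed to a human reviewer rather than fed straight into re-training, which is exactly the oversight loop the article argues for.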

Continue reading here:
What is data poisoning and how do we stop it? - TechRadar

Pompeo: Sources for Yahoo News WikiLeaks report ‘should all be prosecuted’ – Yahoo News

Former CIA Director and former Secretary of State Mike Pompeo on Wednesday called for the criminal prosecution of sources who spoke to Yahoo News for a story detailing proposals by the intelligence agency in 2017 to abduct WikiLeaks founder Julian Assange and discussions within the Trump administration and CIA to possibly even assassinate him.

Pompeo, appearing on Megyn Kelly's podcast, was asked to respond to the Yahoo News story, which was based on interviews with 30 former U.S. intelligence and national security officials with knowledge of the U.S. government's efforts against WikiLeaks.

"I can't say much about this other than whoever those 30 people who allegedly spoke to one of these [Yahoo News] reporters, they should all be prosecuted for speaking about classified activity inside the Central Intelligence Agency," Pompeo said.

At the same time, Pompeo declined to respond to many of the details in the Yahoo News account and confirmed that pieces of it are true, including the existence of an aggressive CIA campaign to target WikiLeaks in the aftermath of the organization's publication of the highly sensitive so-called "Vault 7" documents revealing some of the CIA's hacking tools and methods.

"When bad guys steal those secrets we have a responsibility to go after them, to prevent [that] from happening," Pompeo said. "We absolutely have a responsibility to respond. ... We desperately wanted to hold accountable those individuals that had violated U.S. law, that had violated requirements to protect information and had tried to steal it. There is a deep legal framework to do that. And we took actions consistent with U.S. law to try to achieve that."

Former U.S. Secretary of State Mike Pompeo. (Joe Raedle/Getty Images)

Pompeo's comments came as some human rights activists, civil liberties groups and supporters of Assange said the revelations by Yahoo News should be investigated and were grounds to drop the Justice Department's efforts to extradite Assange from a British prison to face criminal charges in the U.S. for publishing classified government secrets in violation of the World War I-era Espionage Act, as well as for allegedly conspiring to hack into a classified U.S. government network.


"We now know that this unprecedented criminal case was launched in part because of the genuinely dangerous plans that the CIA was considering," said Ben Wizner, director of the American Civil Liberties Union's Speech, Privacy and Technology Project. "This provides all the more reason for the Biden Justice Department to find a quiet way to end this case."

Also weighing in about the Yahoo News story was Nils Melzer, the United Nations Special Rapporteur on Torture. "This is not about the law. It is about intimidating journalism; it's about suppressing press freedom; it's about protecting immunity for state officials," he said in a video he posted on Twitter. Assange's case "has become impossible to ignore," he added. "And I would encourage journalists from all media outlets to look deeply into this case, assemble all the evidence and expose misconduct, because the public deserves to know the truth."

Although the Justice Department under two attorneys general appointed by President Trump brought indictments against Assange, federal prosecutors under President Biden's attorney general, Merrick Garland, are continuing to pursue the case. They have filed appeals of a British judge's ruling earlier this year that Assange should not be turned over to the U.S. government because he would pose a risk of suicide in a U.S. prison.

Assange's lawyers were due on Wednesday to file responses to the Justice Department's arguments and are actively considering ways to raise issues of government misconduct based in part on many of the details in the Yahoo News story. Among them is the revelation that in the aftermath of the Vault 7 leak, viewed at the time as the largest data loss in the CIA's history, Pompeo was enraged and demanded a multi-pronged campaign to dismantle WikiLeaks. Publicly, he described the group as a "non-state hostile intelligence service." But privately, he pushed for aggressive action at meetings with top Trump administration officials, including a "snatch operation" to abduct Assange from the Ecuadorian Embassy in London.

Sources told Yahoo News that at the White House and CIA there were also discussions regarding a possible assassination, although former officials said the idea of killing Assange was not taken seriously. But when White House lawyers learned about some of the agency's plans targeting Assange, particularly Pompeo's rendition proposals, they raised objections, resulting in one of the most contentious intelligence debates of the Trump presidency.

WikiLeaks founder Julian Assange holding a news conference at the Ecuadorian Embassy in London, August 2014. (John Stillwell/Pool via Reuters)

Pompeo's comments on Kelly's podcast came the day after he appeared on Glenn Beck's podcast and asserted, "I'm all about a big, bold, strong First Amendment." But his call Wednesday for the criminal prosecution of sources who spoke to Yahoo News drew a strong rebuke from a member of Assange's legal team.

"I find it highly disturbing that his reaction is to try to prevent information about misconduct from being known by the American people," said Barry Pollack, Assange's U.S. lawyer.

Wizner, the ACLU lawyer, said Pompeo's comments effectively "just verified the truth of the [Yahoo News] story. Because the only reason to prosecute someone is that they revealed legitimate classified information. ... This was public interest journalism of the first order, and the question is whether the public has a right to know that the government is engaged in this kind of conduct."

When first asked about the Yahoo News story by Kelly, Pompeo responded, "It makes for pretty good fiction." But when pressed by the host on whether that meant he was denying what Yahoo News reported, he acknowledged there are "pieces of it that are true."

"Were we trying to protect American information from Julian Assange and WikiLeaks? Absolutely, yes. Did our Justice Department believe it had a valid claim that would result in the extradition of Julian Assange to stand trial? Yes. I supported that effort, for sure. Did we ever engage in activity that was inconsistent with U.S. law? We are not permitted by U.S. law to conduct assassinations. We never acted in a way that was inconsistent with that. ... We never conducted planning to violate U.S. law, not once in my time."

He did not address any of the details about other actions the CIA was contemplating, such as Assange's possible abduction, or steps U.S. intelligence actually took, including conducting audio and visual surveillance of Assange inside the Ecuadorian Embassy and monitoring the communications and travels of his associates throughout Europe.

But Pompeo did take issue with a statement made by Trump, who had embraced WikiLeaks during the 2016 campaign after it published Democratic Party emails embarrassing to Hillary Clinton. Asked for comment in the Yahoo News story, Trump said that Assange was being treated "very badly."

Pressed by Kelly if he agreed with that assessment, Pompeo said: "No. Assange treated the U.S. and its people very badly."

____

Read more from Yahoo News:

Read this article:

Pompeo: Sources for Yahoo News WikiLeaks report 'should all be prosecuted' - Yahoo News

WikiLeaks turns 15 with founder Assange behind bars as threat to powers that be – United News of India

05 Oct 2021 | 10:38 PM


See the original post here:

WikiLeaks turns 15 with founder Assange behind bars as threat to powers that be - United News of India