I once lived with Julian Assange and he’s making a big mistake – TheArticle

WikiLeaks founder Julian Assange is in the midst of hearings in his bid to stop extradition to the US. The authorities there want him in connection with his website's publication of classified US documents throughout 2010 and early 2011.

To many of us, this will feel like just the latest in a decade of exhausting legal dramas around Assange. He faced unrelated sexual assault and rape allegations in Sweden in the summer of 2010, and then mounted a lengthy legal challenge to avoid extradition to Sweden via the European Arrest Warrant.

Having exhausted every legal avenue, up to and including the UK Supreme Court, in a bid to avoid what was a fairly conventional European Arrest Warrant, Assange fled bail and sought sanctuary in the Ecuadorian embassy in London.

Assange remained locked in this diplomatic stand-off for years, the relationship with his hosts souring with every passing month. Eventually, relations were so bad that the government of Ecuador negotiated a deal with UK and US authorities that allowed police into the embassy to remove and arrest Assange, leading to his current confinement in Belmarsh prison, where he awaits extradition.

It's a messy backstory, but WikiLeaks's activities in the years between Chelsea Manning's release of the Afghan War Logs and the present day make it murkier still. On multiple occasions, the site published hacked information of questionable public interest, including material obtained by Anonymous, the hacktivist group.

In 2016, WikiLeaks went further still, publishing hacked emails from the Democratic National Committee and from a key Hillary Clinton aide. These were exploited relentlessly by Donald Trump and his presidential campaign, as well as by Fox News and similar outlets. They even led to the "Pizzagate" conspiracy theory, which baselessly suggested that leading Democratic figures were part of a paedophile ring.

Those emails, it emerged, were hacked on the orders of the Russian government as part of its bid to interfere in the US election and aid Donald Trump. No one has offered any evidence suggesting WikiLeaks or Assange knew of their Russian origin at the time of publication. However, when evidence suggesting this surfaced, Assange shamefully hinted that his source may have been Seth Rich, a young Democratic staffer who had been murdered in Washington DC. It's a claim he must have known was false, and one that fuelled conspiracy theories and caused Rich's family great distress.

Given his chaotic and damaging personal and professional track record, support for Assange has dwindled dramatically. He doesn't cut a particularly sympathetic figure to me, and I used to work for him, and (briefly) live with him, at the time WikiLeaks was publishing the State Department cables. He's a chaotic and mercurial boss who would often lie to the public and cover up his mistakes. He had troubling allies, including the notorious anti-Semite Israel Shamir. Assange was also an unreliable ally to the whistle-blowers of the world, at a time when they really needed help.

This makes the approach of his defence team all the more difficult to understand. They have, at various turns, suggested that Assange's pre-trial detention amounted to unfair punishment (a hard argument, given he previously skipped bail), that Assange cannot have a fair trial because the judge is biased, that the courtroom was chosen to make it hard for protesters to visit, or that surveillance of Assange by Ecuador in the Ecuadorian embassy, where he had sheltered himself by choice, was an outrage.

These arguments may or may not have merit, though they look like a clumsy attempt to throw everything at the wall in the hope something might stick on appeal. But they have one thing in common: they're all about Julian Assange.

This tactic is a baffling one, because there are serious and global principles at stake: the extradition and subsequent prosecution of Assange for his work publishing Manning's leaks is a genuine menace to press freedom and free expression.

The Manning leaks revealed the callous disregard of US pilots for civilian casualties; the existence of death squads operating in Afghanistan; the real civilian toll of the Iraq invasion and subsequent civil war; the extent of US spying on the UN and other diplomats, and dozens of other matters of serious public interest.

Despite attempts to frame the extradition as one about hacking, the US is trying to prosecute Assange, a non-US citizen, under the Espionage Act, for his role in those 2010 publications. If Assange can be found guilty of that today, why not the editors of the New York Times, Guardian, Le Monde and more tomorrow? For that matter, why not me?

The free expression argument, and the accompanying politicisation of prosecuting Assange and not those others, is the best argument against Assange's extradition, and one that even those who hate the man himself should rally behind. Doing so becomes much easier if Assange and his defence team make that argument much, much more clearly than they have done so far.

If he wishes to be a free man, Julian Assange needs to try something he has rarely, if ever, tried before: he needs to make this about more than just himself.

Julian Assange Lawyer: What’s at Stake in Extradition Case Is Freedom of the Press – Democracy Now!

This is a rush transcript. Copy may not be in its final form.

AMY GOODMAN: This is Democracy Now!, democracynow.org, The War and Peace Report. I'm Amy Goodman, as we turn now to the extradition hearing for WikiLeaks founder Julian Assange, which a British judge has suspended after four days of intense deliberations last week between Assange's lawyers and attorneys representing the U.S. government. Assange faces 18 charges of attempted hacking and breaches of the Espionage Act for his role in publishing classified documents exposing U.S. war crimes in Iraq and Afghanistan. He could be sentenced to up to 175 years in prison. Judge Vanessa Baraitser ordered the legal teams to reconvene in the middle of May for the remainder of the extradition hearing, when witnesses will be cross-examined. This is Julian Assange's father, John Shipton, outside Woolwich Crown Court last week.

JOHN SHIPTON: The oppression of journalism; the ceaseless malice directed against Julian Assange by the authorities; the 10-year-long arbitrary detention of Julian, as witnessed by the United Nations Working Group on Arbitrary Detention; the torture of Julian, as witnessed by Nils Melzer, the United Nations rapporteur on torture. All of those reports are available. That is what will happen to journalists, publishers and publications if this extradition, this political extradition, of Julian Assange is successful.

AMY GOODMAN: That was Julian Assange's father, John Shipton. Julian Assange has been incarcerated in London's Belmarsh prison since last April. Since 2012, he had taken refuge in the Ecuadorian Embassy in London to avoid extradition to Sweden over sexual assault allegations, which he denied and which the Swedish government ultimately dropped. Even then, Assange wasn't so much concerned about being extradited to Sweden as that Sweden would then send him to the United States. During his time in political exile in the Ecuadorian Embassy, Assange was reportedly spied on by a Spanish security firm. Julian Assange says the CIA was behind the illegal 24/7 surveillance.

For more, we're joined by, well, one of the people who was spied on: Jennifer Robinson, the human rights attorney who's been advising Julian Assange and WikiLeaks since 2010.

Jen Robinson, welcome back to Democracy Now! Thanks for joining us from London. Can you describe the four days of hearings, just physically in the courtroom in London, and what Julian Assange faces?

JENNIFER ROBINSON: Obviously, we've just had a week of hearings. Julian Assange faces, as you said, 175 years in prison for publications back in 2010 that were released to WikiLeaks by Chelsea Manning. And I think it's important to remember what this case is really about and the publications for which he's being prosecuted and sought for extradition. That includes the Iraq War Logs and the Afghan War Diaries, showing civilian casualties and abuse of detainees in Iraq and Afghanistan, war crimes, human rights abuse. The same with Cablegate: war crimes, human rights abuse, corruption the world over.

So, for four days last week, there was a packed-out courtroom (the public gallery was packed, the journalist section was packed) to finally hear, after 10 years of the U.S. preparing this case against WikiLeaks, a grand jury investigation that was opened under the Obama administration and an indictment pursued now by the Trump administration. We finally heard the U.S. case. And, of course, we heard nothing new, nothing new since Chelsea Manning's prosecution back in 2012.

What is important, though, is that the court finally heard the defense case. A number of arguments were put forward by our team, including on the Espionage Act. This is an unprecedented use of the Espionage Act against a publisher, and it is, of course, a political offense, which ought to be barred under the terms of the U.S.-U.K. extradition treaty. Extradition should be barred on that basis.

We also heard evidence about the grave threat that this poses to press freedom, not just for journalists inside the United States, but for journalists everywhere around the world, because of the precedent this case sets, that the United States could seek to extradite and prosecute journalists and publishers from around the world for publishing truthful information about the United States.

We also heard evidence about how the United States indictment has misrepresented the facts, including making the false allegation that Julian Assange had recklessly and deliberately put lives at risk. And we heard evidence in the court this week about the technological security measures that WikiLeaks imposed upon their media partners and the redaction processes that were undertaken to protect anyone at risk in those publications.

It was a long week of hearings. And I think it's important that people start to see the true facts of this. Of course, Chelsea Manning remains in prison in the United States right now, but we heard evidence from her prosecution, in these proceedings, demonstrating that Chelsea Manning had in fact provided this information to WikiLeaks based on her own conscience, having seen war crimes, the murder of civilians, the murder of journalists by United States forces, which is what drove her to release the material to WikiLeaks. So, it was a long week of hearings, an important one for Julian.

AMY GOODMAN: So, Jennifer Robinson, can you describe the courtroom, where Julian Assange was held at the back, as is the custom? Was he in a cage? Was he able to hear the proceedings, consult? Were you in the front with the other lawyers? You're his legal adviser.

JENNIFER ROBINSON: That's correct. So, throughout the hearings, Julian was seated at the back of the courtroom, which is behind where we sit as his legal counsel, in, effectively, a glass box, in the dock. Now, this creates a significant amount of difficulty for us as his legal team in communicating with him during the course of the proceedings, which was raised as a concern on the final day of the hearing. He sits behind us, which means that while we're paying attention to the judge and the submissions in front, we can't see when he's raising a concern or seeking clarification or offering information to us about what he's hearing in court. The entire courtroom, including the public gallery and journalists, is alerted whenever he wants to raise a question with us. And, of course, if he's whispering to us or trying to get our attention in the court, the U.S. prosecutors sitting right next to us in court can hear everything. So we made an application at the end of the week to allow him to leave the dock. And, of course, for your U.S. viewers, it would seem strange that a defendant who does not pose any security risk would not be permitted to sit next to their defense counsel, which is standard practice in the United States. But the judge refused our application.

We also heard evidence of the mistreatment that Julian has suffered, not just the difficulties he has in court in communicating with us in a secure and confidential manner, but also the treatment that he's been receiving from prison authorities. Just on the first day of the hearing, we heard that he was handcuffed 11 times, strip-searched twice and had his legal papers interfered with and taken away from him. This is indicative of the kind of treatment that he's been suffering, and is, of course, the most recent in a long history of difficulties that we've had in preparing his case: difficulties of access to him in the prison, difficulties in getting sufficient time with him to review the very complex evidence that needs to be presented in court and to take his instructions on it. And it goes to show, I think, the obstacles and the challenges that we face and that he faces in properly defending himself in these proceedings.

AMY GOODMAN: He said Wednesday, "I am as much a participant in these proceedings as I am watching Wimbledon," again complaining that he could not communicate with you, with the lawyers overall. Now, the U.S. attorneys argue that his case is not political. Explain what you think are the most significant war crimes that he provided evidence of and what it means if he came to this country. How is it possible that he, an Australian citizen, faces 175 years for treason in the United States?

JENNIFER ROBINSON: Of course this case is inherently political, whether you look at the terms of the offenses for which he's been charged, including numerous offenses under the Espionage Act, which encapsulate and capture traditional journalistic activities. The Espionage Act itself as an offense is a political offense in substance. But we also need to look at the political context in which this prosecution and extradition request comes. This is, of course, in the context of the Trump administration, a president who calls the media "the enemy of the people." We have learnt, since Julian was arrested and this extradition request and superseding indictment came through, that the Obama administration had taken a decision not to prosecute under the Espionage Act because of the so-called New York Times problem: that is, that you cannot distinguish between the actions of WikiLeaks and The New York Times in receiving and publishing this information.

We also say that, beyond the political nature of the offense and the political context in which he would be charged, the U.S. prosecution seemed to try to argue this past week that what WikiLeaks did and Julian did in publishing this information was not a political act. And, of course, we heard evidence in the court about Julian's very well-known political views, which we heard with respect to WikiLeaks and its aims and why WikiLeaks was created by him. We heard, with respect to the Iraq War Logs, Julian saying, with the release, "If lies can start a war, then the truth can stop them." And we heard evidence about how the publication of evidence of war crimes in the context of the Iraq War, both with respect to, for example, Collateral Murder, which was evidence of a war crime and of U.S. troops killing journalists and civilians, and also, more broadly, about the torture of detainees, in fact led to the Iraqi government withdrawing the immunity for U.S. troops and the ultimate withdrawal of American forces from Iraq. So, of course, what we're seeing is that WikiLeaks not only published information of important human rights abuse, which was certainly in the public interest, and for which they've won journalism awards the world over, but in fact brought about a change in U.S. policy. And we say that that makes it a political offense.

AMY GOODMAN: Finally, Jen Robinson, how is Julian Assange's health?

JENNIFER ROBINSON: We remain very concerned about his health. Of course, he had more than seven years inside the Ecuadorian Embassy without access to healthcare, because the U.K. government refused to recognize his asylum, an asylum that was granted to him by Ecuador not to hide from Sweden, as your introduction suggested, but to protect him from U.S. extradition, the very outcome that he's facing right now.

Inside prison, he is in difficult conditions. This is a high-security prison. He's been in effective isolation for much of the time he's been inside the prison. And you heard me earlier explain the treatment he's been suffering between the prison and the court each time for his hearing, including being handcuffed numerous times, strip searches and the like. This is, of course, compounding our existing concerns about his health. And we heard in court, too, psychiatric evidence being put before the court about concerns about his ability to withstand the sort of treatment he will suffer in U.S. prisons under special administrative measures if he were returned to the United States. So it is a very serious situation and one that is under constant monitoring at our end.

AMY GOODMAN: Jen Robinson, I want to thank you for being with us. She is a human rights attorney and has been legal adviser to Julian Assange and WikiLeaks since 2010.

When we come back, tomorrow is Super Tuesday. We go to Texas to speak with a candidate who's running in a primary race. That's Jessica Cisneros, a 26-year-old immigration lawyer who's challenging Congressmember Henry Cuellar. Stay with us.

Top 5 things to know about the state of artificial intelligence – TechRepublic

Artificial intelligence continues to grow rapidly. Tom Merritt breaks down the five things you need to know about AI, according to a report from Stanford University.

Every year the Human-Centered Artificial Intelligence Institute at Stanford puts together the Artificial Intelligence Index Report, relying on experts from around the discipline, including folks at Harvard, Google, OpenAI, and more, to try to pin down where we are with artificial intelligence (AI). You should definitely read all 290 pages, but for now here are five things to know about the state of AI.

That's just where the work is getting done and where the money flows. As far as results go, AI seems to be helping make software work a little better. But most of your human skills are just getting help, not being replaced, for now.

Artificial Intelligence to Improve the Precision of Mammograms – Imaging Technology News

March 7, 2020. The study is based on the results obtained in the Digital Mammography (DM) DREAM Challenge, an international competition led by IBM in which researchers from the Instituto de Física Corpuscular (IFIC, CSIC-UV) participated along with scientists from the UPV's Institute of Telecommunications and Multimedia Applications (iTEAM).

The team of researchers from IFIC and iTEAM UPV was the only Spanish group to reach the end of the challenge. To do so, they developed a prediction algorithm based on convolutional neural networks, an artificial intelligence technique that simulates the neurons of the visual cortex, allows images to be classified, and lets the system learn on its own. Principles related to interpreting X-rays, in which the group holds several patents, were also applied. The Valencian team's results, along with those of the rest of the finalists, are now published in the Journal of the American Medical Association (JAMA Network Open).
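The basic operation behind such a network, a two-dimensional convolution that slides a small filter across an image to pick out local patterns, can be sketched in a few lines of plain Python. This is a toy illustration of the general technique only, not the team's patented methods or actual model:

```python
# Toy 2-D convolution: the building block of a convolutional neural network.
# A small filter (kernel) slides over the image; each output value is the
# overlapped region weighted by the kernel and summed. Illustrative only.

def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A vertical-edge filter applied to an image whose right half is bright.
image = [[0, 0, 1, 1]] * 4
kernel = [[-1, 1]]            # responds where brightness jumps left-to-right
result = conv2d(image, kernel)
print(result[0])              # [0.0, 1.0, 0.0] -- the edge "lights up"
```

In a real network, many such filters are learned from labeled mammograms rather than hand-written, and their stacked outputs feed a classifier.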

"Participating in this challenge has allowed our group to collaborate in Artificial Intelligence projects with clinical groups of the Comunidad Valenciana," stated Alberto Albiol, tenured professor at UPV and member of the iTEAM group. "This has opened opportunities for us to apply the Machine Learning techniques, as they are proposed in the article," he added.

For example, the work carried out by the Valencian researchers is being continued on Artemisa, the new computing platform for artificial intelligence at the Instituto de Física Corpuscular, funded by the European Union and the Generalitat Valenciana within the FEDER operating program of the Comunitat Valenciana for 2014-2020 for the acquisition of R+D+i infrastructures and equipment.

"Designing strategies to reduce the operating costs of health care is one of the objectives of applying Artificial Intelligence sustainably," pointed out Francisco Albiol, researcher at the IFIC and participant in the study. "The challenges range from the algorithms themselves to jointly designing evidence-based strategies with the medical sector. Artificial Intelligence applied at a large scale is one of the most promising technologies to make health care sustainable," he noted.

The goal of the Digital Mammography (DM) DREAM Challenge is to involve a broad international scientific community (over 1,200 researchers from around the world) to evaluate whether or not Artificial Intelligence algorithms can equal or improve on the interpretations of mammograms carried out by radiologists.

"This DREAM Challenge allowed a rigorous and adequate evaluation of dozens of advanced deep learning algorithms across two independent databases," explained Justin Guinney, vice president of Computational Oncology at Sage Bionetworks and president of DREAM Challenges.

Led by IBM Research, Sage Bionetworks and the Kaiser Permanente Washington Research Institute, the Digital Mammography DREAM Challenge concluded that, while no algorithm by itself surpassed the radiologists, a combination of methods added to the evaluations of experts improved the accuracy of the exams. Kaiser Permanente Washington (KPW) and the Karolinska Institute (KI) of Sweden provided hundreds of thousands of de-identified mammograms and clinical data.

"Our study suggests that a combination of algorithms of Artificial Intelligence and the interpretations of the radiologists could result in a half million women per year not having to undergo unnecessary diagnostic tests in the United States alone," stated Gustavo Stolovitzky, the director of the IBM program dedicated to Translational Systems Biology and Nanotechnology in the Thomas J. Watson Research Center and founder of DREAM Challenges.

To guarantee the privacy of the data and prevent participants from downloading mammograms with sensitive data, the organizers of the study applied a model-to-data system: instead of receiving the data, participants sent their algorithms to the organizers, who applied them directly to the data.

"This focus on sharing data is particularly innovative and essential for preserving the privacy of the data," said Diana Buist, of the Kaiser Permanente Washington Health Research Institute. "In addition, the inclusion of data from different countries, with different practices for carrying out mammograms, indicates important translational differences in the way in which Artificial Intelligence can be used on different populations."

Mammograms are the most widely used diagnostic technique for the early detection of breast cancer. Though this detection tool is commonly effective, mammograms must be evaluated and interpreted by a radiologist, who uses their human visual perception to identify signs of cancer. Thus, it is estimated that there are 10% false positives among the 40 million women who undergo scheduled mammograms each year in the United States.
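The scale those figures imply is easy to check with back-of-envelope arithmetic; the 10% figure is the estimate quoted above, not an exact statistic:

```python
# Back-of-envelope check of the screening figures quoted in this article.
screened_per_year = 40_000_000   # US women receiving scheduled mammograms
false_positive_pct = 10          # article's estimated false-positive share

false_positives = screened_per_year * false_positive_pct // 100
print(f"{false_positives:,} estimated false positives per year")   # 4,000,000

# IBM's estimate of unnecessary diagnostic tests that AI-plus-radiologist
# reading could avoid, as a fraction of those false positives:
avoidable = 500_000
print(avoidable / false_positives)   # 0.125 -- roughly one in eight
```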

An effective artificial intelligence algorithm that can increase the radiologist's ability to reduce repeating unnecessary tests while also detecting clinically significant cancers would help increase mammograms' detection value.

For more information: www.upv.es

The intelligence community is developing its own AI ethics – C4ISRNet

The Pentagon made headlines last month when it adopted its five principles for the use of artificial intelligence, marking the end of a months-long effort with significant public debate over what guidelines the department should employ as it develops new AI tools and AI-enabled technologies.

Less well known is that the intelligence community is developing its own principles governing the use of AI.

The intelligence community has been doing its own work in this space as well. "We've been doing it for quite a bit of time," said Ben Huebner, chief of the Office of the Director of National Intelligence's Civil Liberties, Privacy, and Transparency Office, at an Intelligence and National Security Alliance event March 4.

According to Huebner, ODNI is making progress in developing its own principles, although he did not give a timeline for when they would be officially adopted. "They will be made public," he added, noting there likely wouldn't be any surprises.

"Fundamentally, there's a lot of consensus here," said Huebner, who noted that ODNI had worked closely with the Department of Defense's Joint Artificial Intelligence Center on the issue.

Key to the intelligence community's thinking is focusing on what is fundamentally new about AI.

"Bluntly, there's a bit of hype," said Huebner. "There's a lot of things that the intelligence community has been doing for quite a bit of time. Automation isn't new. We've been doing automation for decades. The amount of data that we're processing worldwide has grown exponentially, but having a process for handling data sets by the intelligence community is not new either."

What is new is the use of machine learning for AI analytics. Instead of being explicitly programmed to perform a task, machine learning tools are fed data to train them to identify patterns or make inferences before being unleashed on real world problems. Because of this, the AI is constantly adapting or learning from each new bit of data it processes.
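That distinction, a rule learned from examples rather than written by hand, can be illustrated with a toy learner. Everything here is hypothetical: the data, the single-threshold model and the exhaustive search are for illustration only, not an intelligence-community system:

```python
# Toy machine learning: instead of hard-coding a rule, fit one from examples.
# The "model" is a single threshold chosen to best separate the two labels.

def fit_threshold(examples):
    """Pick the threshold that best separates label-0 from label-1 values."""
    best_t, best_correct = None, -1
    for t in sorted(x for x, _ in examples):
        # Count how many examples the rule "x >= t means label 1" gets right.
        correct = sum((x >= t) == bool(label) for x, label in examples)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

# Hypothetical training data: (measurement, label) pairs.
train = [(1.0, 0), (2.0, 0), (2.5, 0), (4.0, 1), (4.5, 1), (5.0, 1)]
t = fit_threshold(train)
print(t)                               # learned boundary: 4.0
print(int(3.0 >= t), int(4.8 >= t))   # classify unseen points: 0 1
```

Feed the same code different data and it learns a different rule, which is exactly why such analytics are not static and keep changing as new data arrives.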

That is fundamentally different from other IC analytics, which are static.

"Why we need to sort of think about this from an ethical approach is that the government structures, the risk management approach that we have taken for our analytics, assumes one thing that is not true anymore. It generally assumes that the analytic is static," explained Huebner.

To account for that difference, AI requires the intelligence community to think more about explainability and interpretability. Explainability is the concept of understanding how the analytic works, while interpretability is being able to understand a particular result produced by an analytic.

"If we are providing intelligence to the president that is based on an AI analytic and he asks, as he does, how do we know this, that is a question we have to be able to answer," said Huebner. "We're going to need to ensure that we have transparency and accountability in these structures as we use them. They have to be secure and resilient."

ODNI is also building an ethical framework to help employees implement those principles in their daily work.

"The thing that we're doing that we just haven't found an analog to in either the public or the private sector is what we're referring to as our ethical framework," said Huebner. "The drive for that came from our own data science development community, who said, 'We care about these principles as much as you do. What do you actually want us to do?'"

In other words, how do computer programmers apply these principles when they're actually writing lines of code? The framework won't provide all of the answers, said Huebner, but it will make sure employees are asking the right questions about ethics and AI.

And because of the uniquely dynamic nature of AI analytics, the ethical framework needs to apply to the entire lifespan of these tools. That includes the training data being fed into them. After all, it's not hard to see how a data set with an underrepresented demographic could result in a higher error rate for that demographic than for the population as a whole.

"If you're going to use an analytic and it has a higher error rate for a particular population and you're going to be using it in a part of the world where that is the predominant population, we better know that," explained Huebner.

The IC wants to avoid those biases out of concern for privacy, civil liberties and, frankly, accuracy. And if biases are introduced into an analytic, intelligence briefers need to be able to explain that bias to policymakers so they can factor it into their decision-making. That's part of the explainability and interpretability Huebner emphasized in his presentation.
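The check Huebner describes can be sketched in a few lines of Python. The records and group labels below are entirely hypothetical; the point is only how a per-group error-rate measurement surfaces the kind of disparity he warns about:

```python
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted, actual) tuples.
    Returns the error rate of the analytic for each group."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy evaluation set in which group "B" is underrepresented.
records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0), ("A", 1, 0),  # 1 of 5 wrong
    ("B", 1, 0), ("B", 0, 0),                                          # 1 of 2 wrong
]
print(error_rates_by_group(records))  # {'A': 0.2, 'B': 0.5}
```

A single aggregate accuracy number would hide this gap entirely, which is why the per-population breakdown matters before deploying an analytic in a region where the disadvantaged group predominates.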

And because they are constantly changing, these analytics will require periodic review, as well as a way to catalog the various iterations of each tool. After all, an analytic that was reliable a few months ago could change significantly after being fed enough new data, and not always for the better. The intelligence community will need to continually check its analytics to understand how they're changing, and compensate.
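One minimal way to picture that catalog-and-review process is the sketch below. The version numbers, accuracy figures, and regression threshold are invented for illustration and do not describe actual IC practice:

```python
from datetime import date

registry = []  # catalog of reviewed iterations of one analytic

def record_review(version, accuracy, reviewed_on):
    """Log one periodic review of the analytic."""
    registry.append({"version": version, "accuracy": accuracy,
                     "reviewed_on": reviewed_on})

def flag_regressions(max_drop=0.05):
    """Return versions whose measured accuracy fell more than
    max_drop relative to the previous review."""
    flags = []
    for prev, curr in zip(registry, registry[1:]):
        if prev["accuracy"] - curr["accuracy"] > max_drop:
            flags.append(curr["version"])
    return flags

record_review("1.0", 0.92, date(2020, 1, 6))
record_review("1.1", 0.93, date(2020, 2, 3))  # retrained on new data: improved
record_review("1.2", 0.84, date(2020, 3, 2))  # drifted: needs investigation
print(flag_regressions())  # ['1.2']
```

The design choice here is that review results are append-only: every iteration stays in the catalog, so a briefer can later say which version of the analytic produced a given judgment and how reliable that version was at the time.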

"Does that mean that we don't do artificial intelligence? Clearly no. But it means that we need to think a little bit differently about how we're going to sort of manage the risk and ensure that we're providing the accuracy and objectivity that we need to," said Huebner. "There's a lot of concern about trust in AI, explainability, and the related concept of interpretability."


Google Uses DeepMind’s Artificial Intelligence to Fight Coronavirus: Is It Time to Trust AI? – Tech Times

Google is using its DeepMind artificial intelligence to combat the coronavirus, or COVID-19. With the virus still spreading slowly but surely and no cure yet in sight, is there any hope that this AI might help find one?


In a post published Thursday, DeepMind said it is now using its AlphaFold system to create "structure predictions of several under-studied proteins associated with SARS-CoV-2, the virus that causes COVID-19."

The predictions have not been experimentally verified, but DeepMind is confident the data will be useful to scientists working toward a better understanding of the novel coronavirus.

DeepMind noted that determining a protein's structure usually takes months or even longer, and typically relies on prior knowledge of related protein structures. AlphaFold uses cutting-edge methods to produce "accurate predictions of the protein structure" with no prior knowledge of the strain.

In the fight against the coronavirus, assistance is helpful no matter where it comes from. But as DeepMind itself notes, the AI has no prior knowledge of these proteins, which raises questions: what if it finds nothing worthwhile? Does it even know what to look for? Can its output realistically be applied in the real world? Still, there is reason for optimism: any finding, however minuscule, could be a piece of the puzzle in the search for a cure that would benefit us all.


If DeepMind's artificial intelligence does produce findings, scientists can verify and record them and, if possible, use them to develop a cure or a vaccine for the deadly virus. Identifying the key components of a cure is difficult science, and even with researchers working around the clock it will take a gigantic effort.

"Artificial intelligence could be one of humanity's most useful inventions. We research and build safe AI systems that learn how to solve problems and advance scientific discovery for all," DeepMind advertises on its website, pointing to its advances across many fields. So far its systems have shown companies how to save energy, helped identify eye diseases and accelerated science, and through its partnership with Google they improve products used all over the world.

So DeepMind shows promise, and humanity is counting on its AI to help solve a problem the whole world faces every day. The cure needs to be found sooner rather than later, before the virus mutates again.




Airlines take no chances with our safety. And neither should artificial intelligence – The Conversation AU

You'd think flying in a plane would be more dangerous than driving a car. In reality it's much safer, partly because the aviation industry is heavily regulated.

Airlines must stick to strict standards for safety, testing, training, policies and procedures, auditing and oversight. And when things do go wrong, we investigate and attempt to rectify the issue to improve safety in the future.

It's not just airlines, either. Other industries where things can go very badly wrong, such as pharmaceuticals and medical devices, are also heavily regulated.

Artificial intelligence is a relatively new industry, but it's growing fast and has great capacity to do harm. Like aviation and pharmaceuticals, it needs to be regulated.

A wide range of technologies and applications that fit under the rubric of artificial intelligence have begun to play a significant role in our lives and social institutions. But they can be used in ways that are harmful, which we are already starting to see.

In the robodebt affair, for example, the Australian government welfare agency Centrelink used data-matching and automated decision-making to issue (often incorrect) debt notices to welfare recipients. What's more, the burden of proof was reversed: individuals were required to prove they did not owe the claimed debt.

The New South Wales government has also started using AI to spot drivers with mobile phones. This involves expanded public surveillance via mobile phone detection cameras that use AI to automatically detect a rectangular object in the driver's hands and classify it as a phone.

Read more: Caught red-handed: automatic cameras will spot mobile-using motorists, but at what cost?

Facial recognition is another AI application under intense scrutiny around the world. This is due to its potential to undermine human rights: it can be used for widespread surveillance and suppression of public protest, and programmed bias can lead to inaccuracy and racial discrimination. Some have even called for a moratorium or outright ban because it is so dangerous.

In several countries, including Australia, AI is being used to predict how likely a person is to commit a crime. Such predictive methods have been shown to impact Indigenous youth disproportionately and lead to oppressive policing practices.

AI that assists train drivers is also coming into use, and in future we can expect to see self-driving cars and other autonomous vehicles on our roads. Lives will depend on this software.

Once weve decided that AI needs to be regulated, there is still the question of how to do it. Authorities in the European Union have recently made a set of proposals for how to regulate AI.

The first step, they argue, is to assess the risks AI poses in different sectors such as transport, healthcare, and government applications such as migration, criminal justice and social security. They also look at AI applications that pose a risk of death or injury, or have an impact on human rights such as the rights to privacy, equality, liberty and security, freedom of movement and assembly, social security and standard of living, and the presumption of innocence.

The greater the risk an AI application was deemed to pose, the more regulation it would face. The regulations would cover everything from the data used to train the AI and how records are kept, to how transparent the creators and operators of the system must be, testing for robustness and accuracy, and requirements for human oversight. This would include certification and assurances that the use of AI systems is safe, and does not lead to discriminatory or dangerous outcomes.

While the EU's approach has strong points, even apparently low-risk AI applications can do real harm. For example, recommendation algorithms in search engines can be discriminatory too. The EU proposal has also been criticised for seeking to regulate facial recognition technology rather than banning it outright.

The EU has led the world on data protection regulation. If the same happens with AI, these proposals are likely to serve as a model for other countries and apply to anyone doing business with the EU or even EU citizens.

In Australia there are some applicable laws and regulations, but there are numerous gaps, and they are not always enforced. The situation is made more difficult by the lack of human rights protections at the federal level.

One prominent attempt at drawing up some rules for AI came last year from Data61, the data and digital arm of CSIRO. They developed an AI ethics framework built around eight ethical principles for AI.

These ethical principles aren't entirely irrelevant (number two is "do no harm", for example), but they are unenforceable and therefore largely meaningless. Ethics frameworks like this one for AI have been criticised as "ethics washing", and as a ploy for industry to avoid hard law and regulation.

Read more: How big tech designs its own rules of ethics to avoid scrutiny and accountability

Another attempt is the Human Rights and Technology project of the Australian Human Rights Commission. It aims to protect and promote human rights in the face of new technology.

We are likely to see some changes following the Australian Competition and Consumer Commission's recent inquiry into digital platforms. And a long overdue review of the Privacy Act 1988 (Cth) is slated for later this year.

These initiatives will hopefully strengthen Australian protections in the digital age, but there is still much work to be done. Stronger human rights protections would be an important step in this direction, to provide a foundation for regulation.

Before AI is adopted even more widely, we need to understand its impacts and put protections in place. To realise the potential benefits of AI, we must ensure that it is governed appropriately. Otherwise, we risk paying a heavy price as individuals and as a society.


Skeptical of Bitcoin Believes to be the Most Powerful CEO – BTC Wires

Mar 8, 2020 09:48 UTC | Updated: Mar 8, 2020 at 09:48 UTC

By Rajat Gaur

Bitcoin was the first cryptocurrency, and it still leads the entire crypto market. Created in 2008 by the pseudonymous Satoshi Nakamoto, it is a decentralized, disruptive currency designed to solve problems that plagued the digital currencies before it, including double-spending, and to remove the need for a central authority.

Despite its ambitions to replace fiat currencies and claim the title of a global currency, the underlying technology is now more than ten years old and requires second-layer solutions to stay ahead of the competition.

Bitcoin is slow compared with much of the crypto space, and mostly serves as a store of value or a means of value transfer.

Ethereum, the next-largest crypto asset, offers features that are in many ways more advanced than what Bitcoin provides. There are now thousands of altcoins in the cryptocurrency market, each offering something different. This emerging asset class has led one of the most influential CEOs in the crypto market, Coinbase's Brian Armstrong, to be skeptical of Bitcoin's future.


In the tweet thread, Armstrong described how the early internet developed and how it was made better with more advanced protocols, advances that improved the internet for its users.

In the next tweet, Armstrong concluded that many more such advances in blockchain will be needed to take crypto adoption from 50 million users to 5 billion. The Coinbase CEO also believes blockchain must improve in privacy, developer tooling, scalability, and decentralized identity.

Bitcoin enjoys a first-mover advantage and strong brand recognition, and while such features can be layered on top of the Bitcoin protocol, the technology behind the first-ever cryptocurrency lags behind many altcoins in the crypto space.


F2Pool: an Introduction to the Renowned Mining Pool – CryptoNewsZ

F2Pool is one of the world's leading cryptocurrency mining pools for several major cryptocurrencies, including Bitcoin, Ethereum, and Litecoin. Based in China, it is the oldest mining pool and presently the largest multi-currency mining pool in the world, mining around 17% of all blocks. F2Pool serves more than 100 countries and is ranked as a top-three pool operator by more than 20 networks. It has played an important part in securing blockchain infrastructure and educating the global community about cryptocurrency mining.

Handshake, a $10.2 million decentralized domain-name project, is backed by prominent investors including Sequoia Capital, SV Angel, and Andreessen Horowitz. Handshake is a permissionless, decentralized naming protocol in which every peer validates and helps manage the root DNS naming zone, with the end goal of developing an alternative to existing naming systems and certificate authorities.

On 5th February, KDA (Kadena) was officially launched on F2Pool. KDA is a public blockchain with a high-performance proof-of-work (PoW) system that draws on the benefits of Chainweb technology. The Kadena platform integrates private blockchains, public apps, and several other compatible blockchains in one place. KDA is also the token used to compensate miners: it is the fee a user pays to have transactions added to a block.

F2Pool operates as an association of miners, each contributing computing power to find blocks. More than 2 million users are active in the pool, around half of them Chinese. Users pay a fee of around 3% on mining rewards for using F2Pool, and withdrawal fees do not exceed 4%. Succinctly, it is the exclusive pool operating on P2P payments, and it pays out daily to users' F2Pool wallets.


EARN IT Act ignites Section 230 tug-o-war – Politico

With help from Cristiano Lima, John Hendel and Leah Nylen

Editor's Note: Morning Tech is a free version of POLITICO Pro Technology's morning newsletter, which is delivered to our subscribers each morning at 6 a.m. The POLITICO Pro platform combines the news you need with tools you can use to take action on the day's biggest stories. Act on the news with POLITICO Pro.

Section 230 latest: The battle over the bipartisan EARN IT Act, which could threaten tech giants' legal liability protections, will continue next week when the Senate Judiciary Committee holds a hearing on the bill, probably with testimony from law enforcement officials and leaders from the tech sector.

(Another) TikTok bill: The day after Republican Sen. Josh Hawley announced plans to introduce a bill banning federal employees from using TikTok on their work devices, the House passed similar legislation from Democratic Rep. Abigail Spanberger.

Coronavirus, contd: The FCC is under pressure from Congress to use the same authority and resources it deploys for disaster response to address the threat of the coronavirus.

A message from Business Roundtable:

American consumers, their devices and data constantly travel across state lines. Without a national privacy law, consumers will have inconsistent privacy protections from state to state. Learn more at privacy.brt.org.

HELLO FRIDAY! AND WELCOME TO MORNING TECH. I'm your host, Alexandra Levine. On today's coronavirus misinformation monitor: Tito's lays down the law that no, pouring vodka on yourself will not protect you from COVID-19. ("Per the CDC," the company wrote on Twitter, "hand sanitizer needs to contain at least 60% alcohol. Tito's Handmade Vodka is 40% alcohol.")

Got a news tip? Write Alex at alevine@politico.com or @Ali_Lev. An event for our calendar? Send details to techcalendar@politicopro.com. Anything else? Full team info below. And don't forget: add @MorningTech and @PoliticoPro on Twitter.

WHAT'S NEXT FOR THE HOTLY CONTESTED EARN IT ACT: The bipartisan rollout of the EARN IT Act on Thursday sparked widespread pushback from tech industry leaders, civil liberties groups and others, while garnering plaudits from child abuse prevention advocates, and the battle over the bill is just getting started.

Next up: The Senate Judiciary Committee will hold a Wednesday hearing on the bill, which would require companies to prove they are doing enough to curb child abuse online to keep their Section 230 protections. "This hearing is only the beginning," Sen. Richard Blumenthal (D-Conn.) said on Thursday. "We're eager to listen to critics or anyone else who has suggestions for improvement. We take them seriously."

On deck: Chairman Lindsey Graham (R-S.C.) told Cristiano he's planning to bring in trade groups that represent the tech industry to testify at next week's hearing, but not Attorney General William Barr, who this week separately unveiled new voluntary guidelines on combating child exploitation. Blumenthal said he hopes to hear testimony from law enforcement officials and child abuse prevention advocates, in addition to leaders from the tech sector.

But will it pick up steam in the Senate? The bill already has the backing of 10 senators (four Republicans and six Democrats, including the top two officials on Senate Judiciary), but a number of key lawmakers said they're still weighing its merits. "I'm open to talking to them about it," Senate Commerce Chairman Roger Wicker (R-Miss.) said Wednesday. Sen. Rob Portman (R-Ohio), who helped lead the last major push to amend Section 230, is reviewing the EARN IT Act to see if it "builds upon the passage of SESTA," spokeswoman Emily Benavides said.

TIKTOK'S TOUGH WEEK, CONTINUED: The House passed legislation on Thursday that, in a move aimed at protecting Americans from Chinese surveillance, would ban some airport workers' use of TikTok on their government-issued phones. After the TSA last month banned employees from using the Chinese-owned video app for work, Rep. Abigail Spanberger (D-Va.) included an amendment in a bipartisan bill she co-sponsored, the Rights for Transportation Security Officers Act, codifying that TSA policy.

"TikTok, like other Chinese companies, is required under Chinese law to share information with the government and its institutions," Spanberger said Thursday. "Because it could become a tool for surveilling U.S. citizens or federal personnel, TikTok has no business being on U.S. government-issued devices," she added. The legislation passed the day after Hawley, a tech critic and China hawk, announced plans for a similar measure to ban the use of TikTok by all federal employees on all federal government devices.

CANTWELL TO FCC: STEP UP ON CORONAVIRUS: Senate Commerce ranking member Maria Cantwell (D-Wash.) is urging the FCC to respond to some of the challenges posed by COVID-19 just as it has in the past with disaster response, and to consider "how the FCC's existing authority and programs, as well as temporary policies or rule waivers, may be used to secure the nation's safety and continued well-being."

Examples she offers: Perhaps adopting temporary rules to let Red Cross shelters tap telemedicine subsidies; helping facilitate remote monitoring of patients, especially low-income ones; and finding ways to help spur at-home learning for students in areas where schools may be closing.

VAN HOLLEN TO PUSH FOR QUANTUM COMPUTING CASH: Sen. Chris Van Hollen (D-Md.) expressed frustration Thursday over what he sees as a dearth of proposed Commerce Department funding for the National Institute of Standards and Technology's quantum computing efforts. "The good news I see in the NIST budget is you've increased the funding for AI," he told Commerce Secretary Wilbur Ross during an appropriations hearing. "When it comes to quantum computing in the NIST budget, it's flatlined." He also pressed Ross on Huawei, as John reported for Pros.

AIRWAVES BATTLE OVER 6 GHz HEATS UP: Lobbying continues apace over the FCC's forthcoming decision about what to do with the 6 GHz band (now occupied by utilities that fear disruption). This week saw new pushback to the wireless giants' attempt to get the FCC to auction off part of this prime mid-band spectrum for exclusive licensed use: California Democratic Reps. Anna Eshoo and Tony Cárdenas, along with Rep. G.K. Butterfield (D-N.C.), asked the FCC to reserve the whole swath of airwaves for unlicensed uses like Wi-Fi, as did hundreds of smaller wireless ISPs on Thursday in a letter to lawmakers.

Sen. Ted Cruz (R-Texas) may also wade in and side with the wireless industry, per a draft letter to the FCC circulating now. In this tentative draft, Cruz said making the whole band available for unlicensed use is "in stark contrast" with European countries that are divvying up the band for both licensed and unlicensed uses. A similar strategy "could create a win-win scenario for both licensed and unlicensed users," the draft said.

And globally, speaking of 6 GHz: Grace Koh, who helped lead last year's U.S. delegation to the World Radiocommunication Conference, recently said on a podcast that China had made a big push whenever she met with its officials bilaterally to see about using the 6 GHz band for 5G, largely due to interest involving Huawei. "What did happen was that Huawei and Ericsson were not successful and China were not successful in getting the entire 6 GHz band studied for 5G," she added.

DO CONSUMERS UNDERSTAND GOOGLE RESULTS? A federal appeals court grappled Thursday with whether average consumers know the difference between the ads and the organic search results that appear on Google. Arguing before the U.S. Court of Appeals for the 2nd Circuit, 1-800 Contacts, which is seeking to reverse an FTC decision that its trademark agreements violated antitrust law, contended that they don't understand.

Federal law allows a company to protect its trademark if use of the trademarked term could confuse consumers. The online contact lens retailer argued that consumers would be confused if they search for 1-800 Contacts on Google or Bing but instead see ads for other companies.

But two of the three judges on the panel weren't so sure. "Even an old guy who is old enough to remember Kodak and film knows the first four things you get on Google, which are labeled 'ad,' you should disregard and move down to the next thing," said Circuit Judge Peter W. Hall, a George W. Bush appointee. Circuit Judge Gerard Lynch was also skeptical. "Is that the standard, everyone has to know that? Twenty years from now when our kids are up there and doing this stuff, we're not even going to be having this conversation," he said.

FTC attorney Imad Dean Abyad told the appeals court that 1-800 Contacts' agreements with rivals were the same as an offline agreement to divide up a market. "1-800 is claiming that digital [ad] space as its own exclusive territory and has agreed with its rivals that they would not advertise in that territory," he said. Abyad also said that 1-800 Contacts' agreements were overly broad because they barred rivals from using the company's name in any kind of ad, even a comparative one. Courts have consistently found that comparative ads aren't trademark violations. "This is not about protecting trademarks," he said. "This is about 1-800 protecting its much higher price."


Barr named Will Levi as his new chief of staff, POLITICO reports. Frances Marshall, former senior counsel for intellectual property at the Justice Department's antitrust division, has joined Apple as senior standards counsel.

ICYMI: In a rare move to take down content posted by President Donald Trump, Facebook said it would remove ads that invoke the Census when directing people to the website of his reelection campaign, POLITICO reports.

Like Shazam, but for faces: Want to know the name of that stranger you ran into at a party? Or that person you see from across the restaurant? There's an app for that, NYT reports; that's precisely how some have used the controversial facial recognition app Clearview AI.

Coercion up close: A factory making computer screens, cameras and other gadgets for a supplier to tech companies including Apple and Lenovo relies on forced labor by Muslim ethnic Uighurs, the AP reports.

Broke: Anthony Levandowski, the self-driving engineer accused by Google of breaching his employment contract and misusing confidential information, filed for bankruptcy, citing a $179 million legal judgment, WSJ reports.

Kremlin watch: How Russia Is Trying To Boost Bernie Sanders' Campaign, via NPR.

Andrew Yang's next move: A political nonprofit called Humanity Forward, POLITICO reports. The core issues: "a universal basic income for all Americans provided by the government, a human-centered economy and data as a property right," Yang said.

First Amazon, now Facebook: Facebook confirmed Thursday that a contractor at its Seattle office had been diagnosed with the coronavirus, Reuters reports.

Droppin' like flies: LinkedIn joined the host of other tech companies, including Facebook, Twitter, Apple and Netflix, that have backed out of SXSW over coronavirus concerns, AdWeek reports. (Also scrubbed: the Red Hat Summit.)

Stars, they're just like us: Twitter CEO Jack Dorsey may cancel his up-to-half-a-year sojourn in Africa over coronavirus concerns, Reuters reports. (But then again, as MT reported, there's a push right now to oust him from the helm of the company.)

Also on Twitter: The platform said it's expanding its rules against hateful conduct to include "language that dehumanizes on the basis of age, disability or disease," CNET reports.

Not the Winklevoss twins: A start-up founded by two MIT researchers is suing Facebook, Reuters reports, alleging the social media giant has stolen and made public technology that could revolutionize the field of artificial intelligence.

Tips, comments, suggestions? Send them along via email to our team: Bob King (bking@politico.com, @bkingdc), Mike Farrell (mfarrell@politico.com, @mikebfarrell), Nancy Scola (nscola@politico.com, @nancyscola), Steven Overly (soverly@politico.com, @stevenoverly), John Hendel (jhendel@politico.com, @JohnHendel), Cristiano Lima (clima@politico.com, @viaCristiano), Alexandra S. Levine (alevine@politico.com, @Ali_Lev), and Leah Nylen (lnylen@politico.com, @leah_nylen).

TTYL.
