American pop icon Akon is creating his own cryptocurrency and a city as well for its use – The Financial Express

American singer and entrepreneur Akon is launching his own cryptocurrency, Akoin, and building Akon City, a 2,000-acre development in Senegal (a country in West Africa) that will have Akoin as its local currency, Bloomberg reported. Akon, who is of Senegalese descent, was struck with the cryptocurrency idea a few years back when he couldn't convert Senegal's currency to euros during a trip from Dakar (the capital of Senegal) to Paris. "That really catapulted the energy to say, 'We have to have our own currency,'" he told Bloomberg News. The currency is likely to be launched in early July, according to Jon Karas, president and co-founder of Akoin.

Akon, whose full name is Aliaume Damala Badara Akon Thiam, had invested in Bitcoin in 2014 and had announced potential plans for Akon City last year. He also reached a land agreement with the Senegalese government earlier this year. However, the government is neither funding the city nor does it hold a stake in the coin, Karas said. Ten per cent of the total float of the cryptocurrency Akoin will be issued via a public sale in the beginning. This amount may change with demand, while another 10 per cent is to be held by executives, advisers and directors of the company, as per a published white paper. Apart from Akon and Karas, the third founder of Akoin is Lynn Liss, who also serves as chief operating officer. The founders are subject to a six-month lockup period. "We're in this for the long run," Karas said.


Akon has sold over 35 million albums globally and spent his early childhood in Senegal before moving to New Jersey in the US. He rose to fame in the early 2000s when his debut album Trouble was released. Akon has had 27 songs on the Billboard Hot 100 and has worked with the likes of Lady Gaga, Eminem and Gwen Stefani. His cryptocurrency Akoin is being created as a utility token (having a specific use) instead of an investment tool, Karas said.




Physicists Just Quantum Teleported Information Between Particles of Matter – ScienceAlert

By making use of the 'spooky' laws behind quantum entanglement, physicists think they have found a way to make information leap between a pair of electrons separated by distance.

Teleporting fundamental states between photons (massless particles of light) is quickly becoming old news, a trick we are still learning to exploit in computing and encrypted communications technology.

But what the latest research has achieved is quantum teleportation between particles of matter (electrons), something that could help connect quantum computing with the more traditional electronic kind.

"We provide evidence for 'entanglement swapping,' in which we create entanglement between two electrons even though the particles never interact, and 'quantum gate teleportation,' a potentially useful technique for quantum computing using teleportation," says physicist John Nichol from the University of Rochester in New York.

"Our work shows that this can be done even without photons."

Entanglement is physics jargon for what seems like a pretty straightforward concept.

If you buy a pair of shoes from a shop and leave one behind, you'll automatically know which foot it belongs to the moment you get home. The shoes are, in a manner of speaking, entangled.

If the shopkeeper randomly pulls out its matching partner when you return, you'll think they either remembered your sale, made a lucky guess, or were perhaps a little 'spooky' in their prediction.

The real weirdness arises when we imagine your lonely shoe as being both left and right at the same time, at least until you look at it. At that very moment, the shoe's partner back at the shop also snaps into shape, as if your sneaky peek teleported across that distance.
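The shoe picture can even be run as a toy simulation. The sketch below (plain Python, purely illustrative) models only the correlation: each pair gives individually random but perfectly anti-correlated outcomes. What it cannot capture is the genuinely quantum part, which shows up only when the two sides measure along different axes, which is why real experiments need Bell tests rather than this classical "hidden variable" picture.

```python
import random

def measure_entangled_pair():
    """Toy 'shoe' model of an entangled pair: the first outcome is random,
    and the partner's outcome snaps to the opposite value instantly."""
    first = random.choice(["left", "right"])
    second = "right" if first == "left" else "left"
    return first, second

outcomes = [measure_entangled_pair() for _ in range(1000)]
lefts = sum(1 for first, _ in outcomes if first == "left")
print(f"'left' seen {lefts}/1000 times; pairs always disagree:",
      all(a != b for a, b in outcomes))
```

Each individual result is a coin flip, yet the pair never disagrees about who got which shoe: that is the part of entanglement a classical story can reproduce.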

It's a kind of serendipitous exchange that Einstein felt was a little too spooky for comfort. Nearly a century after physicists raised the possibility, we now know teleportation between entangled particles is how the Universe works on a fundamental level.

While it's not exactly a Star Trek-type teleportation that could beam whole objects across space, the mathematics describing this information jump are mighty useful in carrying out special kinds of calculations in computing.

Typical computer logic is made up of a binary language of bits, labelled either 1 or 0. Quantum computing is built with qubits that can occupy both states at once, providing far greater possibilities that classical technology can't touch.
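As a rough numerical sketch (plain Python, no quantum library, names our own), a qubit can be written as a pair of complex amplitudes, and a gate such as the Hadamard puts it into an equal superposition of 0 and 1:

```python
import math

# A qubit is a pair of complex amplitudes (a, b) with |a|^2 + |b|^2 = 1.
ZERO = (1 + 0j, 0 + 0j)  # the state |0>, analogous to a classical bit 0

def hadamard(q):
    """Hadamard gate: sends |0> to an equal superposition of |0> and |1>."""
    a, b = q
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def probabilities(q):
    """Born rule: measurement probabilities are the squared amplitudes."""
    a, b = q
    return (abs(a) ** 2, abs(b) ** 2)

superposed = hadamard(ZERO)
p0, p1 = probabilities(superposed)
print(round(p0, 3), round(p1, 3))  # 0.5 0.5: equal chance of reading 0 or 1
```

Until it is measured, the qubit carries both amplitudes at once; measurement forces one of the two classical answers, with the probabilities above.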

The problem is the Universe is like a big jumble of shoes, all threatening to turn your delicate game of 'guess which foot' into a nightmare gamble the moment any qubit interacts with its environment.

Manipulating photons to transmit their entangled states is made easier thanks to the fact they can be quickly separated at light speed over huge distances through a vacuum or down an optical fibre.

But separating entangled masses such as pairs of electrons is more of a challenge, given their clunky interactions as they bounce along are almost certain to ruin their mathematically pure quantum state.

It's a challenge well worth the effort, though.

"Individual electrons are promising qubits because they interact very easily with each other, and individual electron qubits in semiconductors are also scalable," says Nichol.

"Reliably creating long-distance interactions between electrons is essential for quantum computing."

To achieve it, the team of physicists and engineers took advantage of some strange fine print in the laws that govern the ways the fundamental particles making up atoms and molecules hold their place.

Any two electrons that share the same quantum spin state can't occupy the same spot in space. But there is a bit of a loophole that says nearby electrons can swap their spins, almost as if your feet could swap shoes if you bring them close enough.
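In quantum-computing terms, this spin exchange acts like a SWAP operation on the two electrons' states. A minimal sketch (states as dictionaries mapping two-spin labels to amplitudes, 'u' for up and 'd' for down; our own illustrative notation, not the paper's):

```python
def swap(state):
    """Full spin exchange: the amplitude of |ab> moves to |ba>."""
    return {label[::-1]: amp for label, amp in state.items()}

state = {"ud": 1.0}   # electron 1 spin-up, electron 2 spin-down
print(swap(state))    # {'du': 1.0} -- the spins have traded places
print(swap(swap(state)) == state)  # True: two exchanges undo each other
```

In real devices the interesting case is a partial exchange: applying "half" a swap (the square-root-of-SWAP gate) leaves the pair in an entangled superposition of up-down and down-up, which is one way exchange interactions generate entanglement between electrons.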

The researchers had previously shown that this exchange can be manipulated without needing to move the electrons at all, presenting a potential method for teleportation.

This latest advance helps bring the process closer to technological reality, overcoming hurdles to connecting quantum weirdness with existing computing technology.


Of course, we're still some way off replacing photons with electrons for this kind of quantum information transfer. The researchers haven't gone as far as measuring the states of electrons themselves, meaning there could still be all kinds of interference to iron out.

But having strong evidence of the possibility of teleportation between electrons is an encouraging sign of the possibilities open to future engineers.

This research was published in Nature Communications.


This Week’s Awesome Tech Stories From Around the Web (Through June 27) – Singularity Hub

AUTOMATION

Amazon Shakes Up the Race for Self-Driving and Ride-Hailing
Aarian Marshall | Wired
"Uber CEO Dara Khosrowshahi says his company wants to be the Amazon for transportation. Friday, Amazon made clear that it intends to be the Amazon for transportation. The ecommerce giant said it had agreed to acquire Bay Area-based autonomous vehicle company Zoox, a deal reportedly worth more than $1 billion."

Wrongfully Accused by an Algorithm
Kashmir Hill | The New York Times
"Mr. Williams knew that he had not committed the crime in question. What he could not have known, as he sat in the interrogation room, is that his case may be the first known account of an American being wrongfully arrested based on a flawed match from a facial recognition algorithm, according to experts on technology and the law."

Meet Silq: The First Intuitive Programming Language for Quantum Computers
Luke Dormehl | Digital Trends
"The creation of the C programming language was a massive milestone for classical computing. It was easy, intuitive, and helped open up computer programming to an entirely new audience. Now, nearly 50 years after C was created, computer scientists have reached a similar milestone: a new programming language that brings the same level of coding simplicity to quantum computing."

How Green Sand Could Capture Billions of Tons of Carbon Dioxide
James Temple | MIT Technology Review
"This process, along with other forms of what's known as enhanced mineral weathering, could potentially store hundreds of trillions of tons of carbon dioxide, according to a National Academies report last year. That's far more carbon dioxide than humans have pumped out since the start of the Industrial Revolution."

Scientists Made a List of Every Place Aliens Could Be Hiding
George Dvorsky | Gizmodo
"The Exotica Catalog further signifies the ongoing shift away from traditional SETI strategies, in which scientists search for familiar alien signatures (such as radio emissions), and the shift toward Dysonian SETI, in which scientists look for extraterrestrial technosignatures, that is, signs of alien technology: stuff like Dyson shells (a star surrounded by solar panels), industrial waste, gigantic space habitats, beacons, and things we can't even imagine."

The Rocket Motor of the Future Breathes Air Like a Jet Engine
Daniel Oberhaus | Wired
"While a conventional rocket engine must carry giant tanks of fuel and oxidizer on its journey to space, an air-breathing rocket motor pulls most of its oxidizer directly from the atmosphere. This means that an air-breathing rocket can lift more stuff with less propellant and drastically lower the cost of space access, at least in theory."

$100 Billion Universal Fiber Plan Proposed by Democrats in Congress
Jon Brodkin | Ars Technica
"[Electronic Frontier Foundation Senior Legislative Counsel Ernesto Falcon] argues that a plan like Clyburn's is needed for the US to deploy fiber throughout the country within a few years instead of decades. 'Such an ambitious program would have the United States match China's efforts to build universal fiber with the US completing its transition just a few short years after China,' Falcon wrote. 'Without this law, the transition would take decades.'"

Does Dark Matter Exist?
Ramin Skibba | Aeon
"Over the past half century, no one has ever directly detected a single particle of dark matter. Over and over again, dark matter has resisted being pinned down, like a fleeting shadow in the woods. And as long as it's not found, it's still possible that there is no dark matter at all. An alternative remains: instead of huge amounts of hidden matter, some mysterious aspect of gravity could be warping the cosmos instead."

Image credit: twk tt /Unsplash


Quantum entanglement demonstrated on orbiting CubeSat – University of Strathclyde

25 June 2020

In a critical step toward creating a global quantum communications network, researchers have generated and detected quantum entanglement onboard a CubeSat nanosatellite weighing less than 2.6 kg and orbiting the Earth.

The University of Strathclyde is involved in an international team which has demonstrated that their miniaturised source of quantum entanglement can operate successfully in space aboard a low-resource, cost-effective CubeSat that is smaller than a shoebox. CubeSats are a standard type of nanosatellite made of multiples of 10 cm × 10 cm × 10 cm cubic units.

The quantum mechanical phenomenon known as entanglement is essential to many quantum communications applications. However, creating a global network for entanglement distribution is not possible with optical fibers because of the optical losses that occur over long distances. Equipping small, standardised satellites in space with quantum instrumentation is one way to tackle this challenge in a cost-effective manner.
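The scale of the fiber problem is easy to put numbers on. Attenuation is exponential in distance; taking a typical telecom-fiber figure of about 0.2 dB per kilometre (an assumed illustrative value, not from the article), the fraction of photons surviving a link of length L km is 10^(-0.2·L/10):

```python
def transmission(distance_km, loss_db_per_km=0.2):
    """Fraction of photons surviving a fiber link with the given attenuation."""
    return 10 ** (-loss_db_per_km * distance_km / 10)

for d in (50, 500, 1000):
    print(f"{d:4d} km: {transmission(d):.1e}")
# At 1000 km only about 1 photon in 10^20 survives, and entangled photons
# cannot simply be amplified en route -- hence the appeal of satellites,
# which face loss mainly in a few tens of km of atmosphere.
```

This exponential falloff, combined with the no-cloning restriction on amplifying quantum signals, is why satellite links (or future quantum repeaters) are the leading candidates for global entanglement distribution.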

The research, led by the National University of Singapore, has been published in the journal Optica.

Dr Daniel Oi, a Senior Lecturer in Strathclyde's Department of Physics, is the University's lead on the research. He said: "This research has tested next generation quantum communication technologies for use in space. With the results confirmed, its success bodes well for forthcoming missions, for which we are developing the next enhanced version of these instruments."

As a first step, the researchers needed to demonstrate that a miniaturised photon source for quantum entanglement could stay intact through the stresses of launch and operate successfully in the harsh environment of space within a satellite that can provide minimal power. To accomplish this, they exhaustively examined every component of the photon-pair source used to generate quantum entanglement to see if it could be made smaller or more rugged.

The new miniaturised photon-pair source consists of a blue laser diode that shines on nonlinear crystals to create pairs of photons. Achieving high-quality entanglement required a complete redesign of the mounts that align the nonlinear crystals with high precision and stability.

The researchers qualified their new instrument for space by testing its ability to withstand the vibration and thermal changes experienced during a rocket launch and in-space operation. The photon-pair source maintained very high-quality entanglement throughout the testing and crystal alignment was preserved, even after repeated temperature cycling from -10 °C to 40 °C.

The researchers incorporated their new instrument into SpooQy-1, a CubeSat that was deployed into orbit from the International Space Station on 17 June 2019. The instrument successfully generated entangled photon-pairs over temperatures from 16 °C to 21.5 °C.

The researchers are now working with RAL Space in the UK to design and build a quantum nanosatellite similar to SpooQy-1 with the capabilities needed to beam entangled photons from space to a ground receiver. This is slated for demonstration aboard a 2022 mission. They are also collaborating with other teams to improve the ability of CubeSats to support quantum networks.

Strathclyde is the only academic institution that has been a partner in all four EPSRC-funded Quantum Technology Hubs in both phases of funding. The Hubs are in Sensing and Timing, Quantum Enhanced Imaging, Quantum Computing and Simulation, and Quantum Communications Technologies. Dr Oi is Strathclyde's lead on a forthcoming CubeSat mission being developed by the Quantum Communications Technologies Hub.

Dr Oi is also Chief Scientific Officer with Craft Prospect, a space engineering practice that delivers mission-enabling products and develops novel mission applications for small space missions. The company is based in the Tontine Building in the Glasgow City Innovation District, which is transforming the way academia, business and industry collaborate to bring competitive advantage to Scotland.


THE REVELATIONS OF WIKILEAKS: No. 7 – Crimes Revealed at Guantánamo Bay – Consortium News

"Gitmo Files" lifted the Pentagon's lid on the prison, describing a corrupt system of military detention resting on torture, coerced testimony and intelligence manipulated to justify abuses at the base, writes Patrick Lawrence.

Today we continue our series "The Revelations of WikiLeaks" less than three months before the extradition hearing for imprisoned WikiLeaks publisher Julian Assange resumes in Britain. This is the seventh in a series of articles looking back on the major works of the publication that has altered the world since its founding in 2006. The series is an effort to counter mainstream media coverage, which these days largely ignores WikiLeaks' work and instead focuses on Assange's personality. It is WikiLeaks' uncovering of governments' crimes and corruption that set the U.S. after Assange, ultimately leading to his arrest on April 11 last year and his indictment under the U.S. Espionage Act.

The Anatomy of a Colossal Crime Perpetrated by the U.S. Government

By Patrick Lawrence
Special to Consortium News

WikiLeaks released a cache of classified documents on April 25, 2011, that it called "Gitmo Files." They consist of reports the Joint Task Force at Guantánamo Bay sent to the Southern Command in Miami, under which JTF-Gitmo had imprisoned and interrogated suspected terrorists since January 2002, four months after the Sept. 11 attacks in New York and Washington.

These memoranda, known as Detainee Assessment Briefs, or DABs, were written from 2002 to 2008. They contain JTF-Gitmo's detailed judgments as to whether a prisoner should remain in prison or be released, either to his home government or to a third country. Of the 779 prisoners detained at Guantánamo at its post-Sept. 11 peak, "Gitmo Files" comprises DABs on 765 of them. None had previously been made public. As was WikiLeaks' practice, it gave numerous news organizations access to "Gitmo Files" at the time of publication.

Prior to the WikiLeaks release, very little was known about the prison operation at the U.S. naval base on the southeastern coast of Cuba. In 2006, in response to a Freedom of Information suit filed by The Associated Press four years earlier, the Pentagon made public transcripts of military court hearings held at Guantánamo Bay. While these revealed the identities of some detainees for the first time, they contained little detail of how those imprisoned were treated, interrogated, and then judged.

"Gitmo Files" thus lifted the lid on a Defense Department operation that had been shrouded in secrecy for the previous nine years. The documents describe a profoundly corrupt system of military detention and interrogation that rested on torture, coerced testimony, and intelligence manipulated to justify the military's practices at the Guantánamo base.

"Most of these documents reveal accounts of incompetence familiar to those who have studied Guantánamo closely," wrote Andy Worthington, a WikiLeaks associate who managed the publisher's analysis of the documents, "with innocent men detained by mistake (or because the U.S. was offering substantial bounties to its allies for al-Qaeda or Taliban suspects), and numerous insignificant Taliban conscripts from Afghanistan and Pakistan." Worthington called the 765 documents WikiLeaks published "the anatomy of a colossal crime perpetrated by the U.S. government."

Obama's First Term

President Barack Obama and First Lady Michelle Obama during the inaugural parade, Washington, D.C., Jan. 20, 2009. (DoD, Chad J. McNeeley)

Barack Obama had begun his first term as president slightly more than two years before WikiLeaks published "Gitmo Files." During his political campaign he had promised to close the facility within a year of assuming office; at that time 241 prisoners were still in detention. An interagency Guantánamo Review Task Force Obama appointed to review these cases concluded that only 36 could be prosecuted.

But Obama "succumbed to the politics of fear in Congress," as Worthington puts it. There were still 171 prisoners when "Gitmo Files" was published; 40 now remain: some cleared and awaiting release, some charged and awaiting military trial, some convicted, and others, 26 of the total, under indefinite detention.

The Documents

The memoranda collected in "Gitmo Files" shine a revealing light on the U.S. military's system of arrest, detention, and interrogation of terror suspects after the Sept. 11 tragedies. The files include the DABs covering the first 201 prisoners released from Guantánamo between 2002 and 2004. Nothing had previously been known about these detainees. The military briefs on these cases recount the histories of innocent Afghans, Pakistanis, and others (a baker, a mechanic, former students, kitchen workers) who should never have been detained in the first place.

Exercise area in Guantánamo Bay, Cuba, December 2002. (U.S. government, Wikimedia Commons)

These early-release detainees were among the easiest to identify as posing low or no security risks. Their stories reflect the indiscriminate method of arrests U.S. forces used immediately after the Sept. 11 attacks. "Gitmo Files" terms these detainees "The Unknown Prisoners of Guantánamo" because no record of their presence at Gitmo had been made public prior to the April 2011 release.

They were effectively "disappeared," unacknowledged detainees, apparently because their patent innocence was an embarrassment for the Pentagon and, especially, those operating the Guantánamo prison.

Azizullah Asekzai was one of these early-release detainees. He was a family farmer in his early twenties when the Taliban conscripted him to fight its cause in Afghanistan. After one day of training on an AK-47, Asekzai attempted to escape to Kabul, but a local militia ambushed the vehicle he was traveling in and Asekzai was captured. He was subsequently turned over to U.S. forces and was transferred to Guantánamo in June 2002.

Asekzai's DAB explains his transfer thus:

"The detainee was arrested and transported to Bamian, where he was imprisoned for almost five months before being transferred to U.S. forces. Detainee was subsequently transported to Guantánamo Bay Naval Base because of his knowledge of a Taliban draftee holding area in Konduz and of Mullah Mir Hamza, a Taliban official, in Gereshk District of Helmand Province. Joint Task Force Guantánamo considers the information obtained from him and about him as neither valuable nor tactically exploitable." [Italics added.]

Asekzai's DAB is dated March 2003, and he was released the following July. While his time at Guantánamo was relatively brief, his story is important because of the light it sheds on how those writing DABs manipulated the facts in case after case to mask what amounted to a dragnet method of arrests in Afghanistan. In Asekzai's case, as in many others, this meant making up the military's motives to obscure the groundless basis for his detention and transfer to Guantánamo.

Here is an explanatory comment WikiLeaks included with its "Unknown Prisoners" files:

"The 'Reasons for Transfer' included in the documents, which have been repeatedly cited by media outlets as an explanation of why the prisoners were transferred to Guantánamo, are, in fact, lies that were grafted onto the prisoners' files after their arrival at Guantánamo. This is because, contrary to the impression given in the files, no significant screening process took place before the prisoners' transfer[s]. Every prisoner who ended up in U.S. custody had to be sent to Guantánamo, even though the majority were not even seized by U.S. forces, but were seized by their Afghan and Pakistani allies at a time when substantial bounty payments for al-Qaeda and Taliban suspects were widespread."

These bounty payments were not limited to small-time Afghan or Pakistani bounty hunters. In his 2006 memoir, In the Line of Fire, Pervez Musharraf, Pakistan's former president, acknowledges that in handing over 369 terror suspects to the U.S., the Pakistani government earned bounty payments totaling millions of dollars.

"Gitmo Files" also includes a section on the 22 children also detained at Guantánamo after it opened. Three were still in detention at the time of the WikiLeaks release. In addition, the documents detail the cases of the 399 prisoners released from 2004 to the day "Gitmo Files" was published. They also give the background of the seven men who had died at Guantánamo by April 2011.

Some of the original detainees jailed at the Guantanamo Bay prison, Jan. 11, 2002. (Defense Department, Shane T. McCoy, U.S. Navy)

Each DAB is signed by the Guantánamo commander at the time of the report. While the briefs included JTF-Gitmo's assessment and recommendation for each prisoner, the disposition of each case was determined at a higher level. In addition to the judgments of JTF-Gitmo, the DABs also reflect the work of the Criminal Investigation Task Force, the post-Sept. 11 Pentagon agency created to conduct interrogations, and the behavior science teams, or BSCTs.

These were the now-infamous psychologists who participated in the exploitation of prisoners during interrogations, in many cases condoning the use of waterboarding and other forms of torture.

JTF-Gitmo's standard practice was to present each DAB in nine sections. These begin with a detainee's identity and personal background and run to his health, the detainee's account of events, an evaluation of this account, and the JTF-Gitmo assessment and recommendation of each case. Worthington has scrutinized each of these sections in the DABs to unearth information that might otherwise remain obscured. On the section covering the health of detainees, for instance, he writes, "Many are judged to be in good health, but there are some shocking examples of prisoners with severe mental and/or physical problems."

Capture Information

Joint Task Force Guantanamo seal. (Wikimedia Commons)

In the sections labeled "capture information," the DABs report how and where each prisoner was apprehended, the date of his transfer to Guantánamo, and the above-noted reasons for transfer. Worthington terms these last accounts "spurious," offering this explanation: "The reason that this is unconvincing is because the U.S. high command, based in Camp Doha, Kuwait, stipulated that every prisoner who ended up in U.S. custody had to be transferred to Guantánamo, and that there were no exceptions."

This is why those writing DABs found it necessary to doctor the reasons for transfer, "as an attempt to justify the largely random rounding-up of prisoners," as Worthington puts it.

The last section of a DAB is called "EC status" and explains whether or not a detainee is still considered an enemy combatant. These judgments are based on military tribunals held at Guantánamo in 2004-05. Worthington writes, "Out of 558 cases, just 38 prisoners were assessed as being no longer enemy combatants, and in some cases, when the result went in the prisoners' favor, the military convened new panels until it got the desired result."

Worthington's work on "Gitmo Files" is key to an adequate understanding of the 765 DABs covered in the WikiLeaks release. Read on their own, the military's briefs appear to be routine bureaucratic accounts of the processing of each prisoner. But as Worthington explains, these documents are essentially whitewashes that often obscure more than they reveal. As noted, explanations of the intelligence used to justify the prisoners' detention were often concocted and inserted into a prisoner's record after he was arrested and sent to Guantánamo.

Ghost Prisoners

Detainees being moved to new living quarters, February 2003. (U.S. Navy, John F. Williams)

Another significant flaw Worthington identifies is JTF-Gitmo's repeated use of the same witnesses to testify against numerous prisoners (in the case of one witness, 60 of them). Worthington identifies many of these repeat witnesses as high-value detainees, or "ghost prisoners" in Guantánamo parlance, and details their histories in confinement.

As he explains,

"The documents draw on the testimony of witnesses (in most cases, the prisoners' fellow prisoners) whose words are unreliable, either because they were subjected to torture or other forms of coercion (sometimes not in Guantánamo, but in secret prisons run by the CIA), or because they provided false statements to secure better treatment in Guantánamo."

Equally important, in many of the DABs (perhaps most of them) it is difficult to detect the prisoners' true histories, which in the majority of cases reveal their innocence and the injustice of their imprisonment. This is why Worthington's work on "Gitmo Files" was an essential part of WikiLeaks' method. He spent long months analyzing the documents; in some cases, Worthington found and interviewed released detainees to get their accurate accounts of events on the record. He then wrote a lengthy series of articles explaining his findings.

These voluminous writings are featured prominently on the "Gitmo Files" website. They are effectively a gateway into the inventory of the DABs that comprise "Gitmo Files." Worthington's "Unknown Prisoners" report comprises a 10-part series of articles. Worthington's work, including his book, The Guantánamo Files, is noted in his introductory essays for each of the categories he uses to classify Guantánamo detainees.

Another of these categories, titled "Abandoned in Guantánamo," concerns the 89 Yemenis still in detention at Guantánamo when "Gitmo Files" was published, more than half of those remaining. President Obama's Guantánamo Review Task Force, named in 2009, recommended that 36 Yemenis be released immediately and 30 others be held in "conditional detention" until Yemen's security situation improved.

As Worthington notes, most of the Yemenis remained in prison at the time he wrote. Of those Yemenis still in detention, 28 had already been cleared for release. Of them, six had been "approved for transfer," as the task force put it, as early as 2004, three more in 2006, and 10 in 2007.

"Gitmo Files" details the cases of 19 Yemenis still detained in 2011. Most of these were assessed as low-ranking Taliban or Al Qaeda infantry soldiers of no intelligence value. Saeed Hatim (known in his DAB as Said Muhammad Salih Hatim) was among these 19. Born in 1976, Hatim began studying law in Sanaa in 1998. After two years he dropped out to care for his ailing father. Here is a portion of Hatim's own account as written into his DAB:

"Detainee was concerned by Russia's war in Chechnya after he witnessed the oppression [of the Muslims] on television. Detainee was outraged about what the Russians were doing to the Chechens, and decided to travel to Chechnya to fight jihad alongside his Muslim brothers. Detainee informed his family of his decision to travel to Chechnya and they refused to provide financial assistance. Detainee then spoke with several of his friends and members of his mosque, who agreed to help detainee raise money for the trip. Detainee left for Afghanistan in approximately March 2001."

Hatim's DAB says he admitted that Al Qaeda recruited him after his time in Chechnya. He purportedly fought U.S. forces in a major battle in the Afghan mountains at the end of 2001. JTF-Gitmo assessed Hatim as a "medium risk," but it classified him as a "low threat from a detention perspective" and of "low intelligence value."

Hatim was first recommended for release in January 2007. He was similarly recommended a year later; a habeas corpus petition his attorney subsequently filed was granted in 2009. That judgment was vacated shortly before Gitmo Files was released in 2011.

Here is the relevant portion of Worthington's report and analysis of the Hatim case:

In Saeed Hatims case Judge Ricardo Urbina ruled out self-incriminating statements made by Hatim himself, accepting that he made them while being mistreated and threatened with torture in Kandahar after his capture, and also that he repeated them at Guantnamo because he feared that he would be punished if he changed his story.

Judge Urbina also ruled out the governments major claim against Hatim that he had taken part in a showdown between Al Qaeda and U.S. forces in Afghanistans Tora Bora mountains in December 2001 because the only source for that claim was one of the notoriously unreliable witnesses identified in the WikiLeaks documents, who, in Judge Urbinas words, has exhibited an ongoing pattern of severe psychological problems while detained at Gitmo.

Quoting an interrogator, the judge also noted that hospital records at Guantánamo said the witness against Hatim had "vague auditory hallucinations" and that his symptoms were "consistent with a depressive disorder, psychosis, post-traumatic stress, and a severe personality disorder." The interrogator concluded by "refus[ing] to credit what is arguably the government's most serious allegation in this case based solely on one statement, made years after the events in question, by an individual whose grasp on reality appears to have been tenuous at best."

US Officials React

Pentagon Press Secretary Geoff Morrell in 2005. (Cherie Cullen, U.S. Armed Forces, Wikimedia Commons)

Official reactions to the release of Gitmo Files were by and large predictable. The Obama administration's statement, released by Geoff Morrell, the Pentagon press secretary, and Daniel Fried, Obama's special envoy on detainee issues, asserted, "It is unfortunate that several news organizations have made the decision to publish numerous documents obtained illegally by WikiLeaks concerning the Guantánamo detention facility."

Referring to Obama and George W. Bush, his predecessor, Morrell and Fried also said, "Both administrations have made the protection of American citizens the top priority and we are concerned that the disclosure of these documents could be damaging to those efforts."

Significantly, there is no record of the president's response to the release.

The Pentagon came under special criticism with the Gitmo release's revelation of the detention of 22 children at Guantánamo. As Worthington explains, in May 2008 the Pentagon had reported to the U.N. Committee on the Rights of the Child that it had held only eight juveniles (those under 18 when their alleged transgressions took place) since Guantánamo began receiving detainees in 2002.

Worthington took the occasion to elaborate on the Gitmo Files disclosure. In his commentary he wrote: "My new research coincides with a new report by the UC Davis Center for the Study of Human Rights in the Americas, Guantánamo's Children: The WikiLeaked Testimonies, drawing on the release, by WikiLeaks, of classified military documents shedding new light on the prisoners, identifying 15 juveniles, and suggesting that six others, born in 1984 or 1985, and arriving at Guantánamo in 2002 or 2003, may have been under 18, depending on when exactly they were born (which is unknown, as it is in the cases of numerous Guantánamo prisoners)."

In total, Worthington asserted, the number of children imprisoned at Guantánamo may have been as many as 28.

Like the president, the Pentagon remained silent on this question after Gitmo Files was published. There is no record of a Defense Department response to the WikiLeaks disclosures concerning children and Worthingtons analysis of them.

In April 2019, eight years after Gitmo Files was published, military courts continued to grapple with the record of events, specifically the use of torture, during the post-Sept. 11 war on terror.

In a report datelined April 5, 2019, The New York Times explained:

Seventeen-and-a-half years after the Sept. 11, 2001, terror attacks, and a decade after President Barack Obama ordered the C.I.A. to dismantle any remnants of its global prison network, the military commission system is still wrestling with how to handle evidence of what the United States did to the Qaeda suspects it held at C.I.A. black sites. While the topic of torture can now be discussed in open court, there is still a dispute about how evidence of it can be gathered and used in the proceedings at Guantánamo Bay, Cuba.

This week the Justice Department filed a new indictment against Assange, superseding that filed in May 2019 and broadening the charges lodged against him last year. This is the most recent official reaction to Gitmo Files. This latest indictment, presented in the Eastern Virginia District Court and dated June 24, alleges that Chelsea Manning produced Gitmo Files at Assange's urging between November 2009 and May 2010. In keeping with WikiLeaks' most fundamental principle, it has never disclosed the source of Gitmo Files. Neither has Manning stated that she was the source, although this has been widely considered likely.

Proving that Assange actively solicited the documents Manning passed to WikiLeaks (Collateral Murder, Afghan War Diary, Iraq War Logs and now, allegedly, Gitmo Files) is key to the U.S. case against Assange under the Espionage Act.

The June 24 court document indicates that the Justice Department has no hard evidence of this charge. Manning continues to assert, as she has since her arrest in May 2010, that she acted of her own volition in gathering and dispatching the documents WikiLeaks published. The indictment alleges only that Manning, in assembling what became Gitmo Files, used certain search phrases ("detainee+abuse," for example) that the indictment identifies with WikiLeaks' categorization of documents, an allegation far short of accepted standards of proof.

Press Reaction

On the Gitmo Files home page, WikiLeaks names 10 partners with which it worked in making the documents public. Worthington is listed as one, though his work puts him in a category of his own. The others include The Washington Post, The Telegraph, La Repubblica, Le Monde, and Der Spiegel. These news outlets were given copies of Gitmo Files in advance to allow them time to review and analyze the documents and plan their coverage prior to the April 25, 2011, release.

Conspicuously missing from this WikiLeaks list, and reflecting a prior dispute they had with Julian Assange, are The New York Times and The Guardian. Both newspapers obtained the documents from a source other than WikiLeaks, presumably one of the news outlets on the WikiLeaks list of partners. To its credit, The Times now maintains a web site, The Guantánamo Docket, giving the name and legal status of each detainee still in custody at Guantánamo.

The noteworthy aspect of the media coverage of the Gitmo Files release was the marked difference in the way U.S. and non-American news outlets shaped their stories: U.S. media tended to emphasize the dangers and threats presented by those in captivity at Guantánamo; other media reported correctly that among the important revelations in Gitmo Files was the innocence of most of those seized and detained.

Noting this pattern, WikiLeaks urged readers and viewers to compare the lead paragraphs in the main BBC and CNN stories:

The BBC, under the headline "WikiLeaks: Many at Guantánamo not dangerous," reported, "Files obtained by the website WikiLeaks have revealed that the U.S. believed many of those held at Guantánamo Bay were innocent or only low-level operatives."

CNN's report appeared under the headline "Military documents reveal details about Guantánamo detainees, al-Qaeda," and began, "Nearly 800 classified U.S. military documents obtained by WikiLeaks reveal extraordinary details about the alleged terrorist activities of al-Qaeda operatives captured and housed at the U.S. Navy's detention facility in Guantanamo Bay, Cuba."

Among the others to note this disparity were Glenn Greenwald, then the foreign affairs columnist at Salon, and Laura Flanders at The Nation. Greenwald's piece on the news coverage of Gitmo Files appeared under the headline "Newly Leaked Documents Show the Ongoing Travesty of Guantánamo" but is no longer available in the Salon archives.

Flanders detected the same bias in the coverage published by The Washington Post, National Public Radio, and the Times. The latter two use the cop-out term "harsh interrogation techniques," she noted, to avoid mention of the word torture.

"So the takeaway in the United States," Flanders wrote, "will remain 'dangerous terrorists!' and Guantánamo will most likely remain open three years after the president vowed to close it, while overseas the rest of the world will continue to wonder why the country that claims to love freedom so much is continuing to imprison and torture innocent people."

In one of the essays WikiLeaks published with Gitmo Files, Worthington analyzed the broader significance of the tilt in American coverage. He wrote:

The release of the documents prompted international interest for a week, until it was arranged by President Obama (whether coincidentally or not) for U.S. Special Forces to fly into Pakistan to assassinate Osama bin Laden. At this point an unprincipled narrative emerged in the mainstream media in the U.S., in which, for sales and ratings if nothing else, unindicted criminals from the Bush administration and their vociferous supporters in Congress, in newspaper columns, and on the airwaves were allowed to suggest that the use of torture had led to locating bin Laden (it hadn't, although some information had apparently come from high-value detainees held in secret CIA prisons, but not as a result of torture), and that the existence of Guantánamo had also proved invaluable in tracking down the al-Qaeda chief.

Patrick Lawrence, a correspondent abroad for many years, chiefly for the International Herald Tribune, is a columnist, essayist, author and lecturer. His most recent book is Time No Longer: Americans After the American Century (Yale). Follow him on Twitter @thefloutist. His web site is Patrick Lawrence. Support his work via his Patreon site.


Read the original:
THE REVELATIONS OF WIKILEAKS: No. 7 Crimes Revealed at Guantánamo Bay - Consortium News

How conspiracy theories emerge — and how their storylines fall apart – Newswise

Newswise: A new study by UCLA professors offers a new way to understand how unfounded conspiracy theories emerge online. The research, which combines sophisticated artificial intelligence and a deep knowledge of how folklore is structured, explains how unrelated facts and false information can connect into a narrative framework that would quickly fall apart if some of those elements are taken out of the mix.

The authors, from the UCLA College and the UCLA Samueli School of Engineering, illustrated the difference in the storytelling elements of a debunked conspiracy theory and those that emerged when journalists covered an actual event in the news media. Their approach could help shed light on how and why other conspiracy theories, including those around COVID-19, spread -- even in the absence of facts.

The study, published in the journal PLOS ONE, analyzed the spread of news about the 2013 "Bridgegate" scandal in New Jersey -- an actual conspiracy -- and the spread of misinformation about the 2016 "Pizzagate" myth, the completely fabricated conspiracy theory that a Washington, D.C., pizza restaurant was the center of a child sex-trafficking ring that involved prominent Democratic Party officials, including Hillary Clinton.

The researchers used machine learning, a form of artificial intelligence, to analyze the information that spread online about the Pizzagate story. The AI automatically can tease out all of the people, places, things and organizations in a story spreading online -- whether the story is true or fabricated -- and identify how they are related to each other.

Finding the puzzle pieces

In either case -- whether for a conspiracy theory or an actual news story -- the narrative framework is established by the relationships among all of the elements of the storyline. And, it turns out, conspiracy theories tend to form around certain elements that act as the adhesive holding the facts and characters together.

"Finding narratives hidden in social media forums is like solving a huge jigsaw puzzle, with the added complication of noise, where many of the pieces are just irrelevant," said Vwani Roychowdhury, a UCLA professor of electrical and computer engineering and an expert in machine learning, and a lead author of the paper.

In recent years, researchers have made great strides in developing artificial intelligence tools that can analyze batches of text and identify the pieces to those puzzles. As the AI learns to identify patterns, identities and interactions that are embedded in words and phrases, the narratives begin to make "sense." Drawing from the massive amount of data available on social media, and because of improving technology, the systems are increasingly able to teach themselves to "read" narratives, almost as if they were human.

The visual representations of those story frameworks showed the researchers how false conspiracy theory narratives are held together by threads that connect multiple characters, places and things. But they found that if even one of those threads is cut, the other elements often can't form a coherent story without it.

"One of the characteristics of a conspiracy theory narrative framework is that it is easily 'disconnected,'" said Timothy Tangherlini, one of the paper's lead authors, a professor in the UCLA Scandinavian section whose scholarship focuses on folklore, legend and popular culture. "If you take out one of the characters or story elements of a conspiracy theory, the connections between the other elements of the story fall apart."

Which elements stick?

In contrast, he said, the stories around actual conspiracies -- because they're true -- tend to stand up even if any given element of the story is removed from the framework. Consider Bridgegate, for example, in which New Jersey officials closed several lanes of the George Washington Bridge for politically motivated reasons. Even if any number of threads were removed from the news coverage of the scandal, the story would have held together: All of the characters involved had multiple points of connection by way of their roles in New Jersey politics.

"They are all within the same domain, in this case New Jersey politics, which will continue to exist irrespective of the deletions," Tangherlini said. "Those connections don't require the same 'glue' that a conspiracy theory does."

Tangherlini calls himself a "computational folklorist." Over the past several years, he has collaborated regularly with Roychowdhury to better understand the spread of information around hot-button issues like the anti-vaccination movement.

To analyze Pizzagate, in which the conspiracy theory arose from a creative interpretation of hacked emails released in 2016 by Wikileaks, the researchers analyzed nearly 18,000 posts from April 2016 through February 2018 from discussion boards on the websites Reddit and Voat.

"When we looked at the layers and structure of the narrative about Pizzagate, we found that if you take out Wikileaks as one of the elements in the story, the rest of the connections don't hold up," Tangherlini said. "In this conspiracy, the Wikileaks email dump and how theorists creatively interpreted the content of what was in the emails are the only glue holding the conspiracy together."
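The "easily disconnected" property the researchers describe can be sketched as a graph-connectivity check. This is a hypothetical toy illustration, not the study's actual pipeline (which extracts entities and relationships automatically with machine learning); the node names are invented for the example.

```python
from collections import defaultdict, deque

def is_connected(edges, removed=frozenset()):
    """Check whether the narrative graph stays in one piece after
    deleting the nodes in `removed` (BFS over an undirected edge list)."""
    nodes = {n for e in edges for n in e if n not in removed}
    adj = defaultdict(set)
    for a, b in edges:
        if a not in removed and b not in removed:
            adj[a].add(b)
            adj[b].add(a)
    if not nodes:
        return True
    start = next(iter(nodes))
    seen, queue = {start}, deque([start])
    while queue:
        for nbr in adj[queue.popleft()]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen == nodes

# Toy narrative graph: story elements linked by co-occurrence in posts.
edges = [("Wikileaks", "emails"), ("Wikileaks", "pizzeria"),
         ("Wikileaks", "officials"), ("emails", "officials")]

print(is_connected(edges))                         # the full story coheres
print(is_connected(edges, removed={"Wikileaks"}))  # cut the "glue" node and it fragments
```

In this sketch, removing the single hub node strands "pizzeria" from the rest of the story, mirroring the paper's observation that a conspiracy theory's framework collapses when its glue element is cut, while a real conspiracy's elements stay connected through multiple independent ties.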

The data generated by the AI analysis enabled the researchers to produce a graphic representation of narratives, with layers for major subplots of each story, and lines connecting the key people, places and institutions within and among those layers.

Quick build versus slow burn

Another difference that emerged between real and false narratives concerned the time they take to build. Narrative structures around conspiracy theories tend to build and become stable quickly, while narrative frameworks around actual conspiracies can take years to emerge, Tangherlini said. For example, the narrative framework of Pizzagate stabilized within a month after the Wikileaks dump, and it stayed relatively consistent over the next three years.

"The fact that additional information related to an actual conspiracy emerged over a prolonged period of time (here five and half years) might be one of the telltale signs of distinguishing a conspiracy from a conspiracy theory," the authors wrote in the study.

Tangherlini said it's becoming increasingly important to understand how conspiracy theories abound, in part because stories like Pizzagate have inspired some to take actions that endanger other people.

"The threat narratives found in conspiracy theories can imply or present strategies that encourage people to take real-world action," he said. "Edgar Welch went to that Washington pizzeria with a gun looking for supposed caves hiding victims of sex trafficking."

The UCLA researchers have also written another paper examining the narrative frameworks surrounding conspiracy theories related to COVID-19. In that study, which has been published on an open-source forum, they track how the conspiracy theories are being layered on to previously circulated conspiracy theories such as those about the perceived danger of vaccines, and, in other cases how the pandemic has given rise to completely new ones, like the idea that 5G cellular networks spread the coronavirus.

"We're using the same pipeline on COVID-19 discussions as we did for Pizzagate," Tangherlini said. "In Pizzagate, the targets were more limited, and the conspiracy theory stabilized rapidly. With COVID-19, there are many competing conspiracy theories, and we are tracing the alignment of multiple, smaller conspiracy theories into larger ones. But the underlying theory is identical for all conspiracy theories."

###


Machine Learning as a Service (MLaaS) Market Overview, Cost Structure Analysis, Growth Opportunities and Forecast to 2026 – 3rd Watch News

The Machine Learning as a Service (MLaaS) Market (2020) report provides an in-depth summary of the Machine Learning as a Service (MLaaS) market status, as well as product specification, technology development, and key manufacturers. The report gives detailed analysis of market concerns such as Machine Learning as a Service (MLaaS) market share, CAGR status, market demand and up-to-date market trends with key market segments.



Get Sample Copy Of This Report @ https://www.coherentmarketinsights.com/insight/request-sample/3718

Analysis tools such as SWOT analysis and Porter's five forces model have been employed to present in-depth knowledge of the Machine Learning as a Service (MLaaS) market. Tables and charts are included to help provide an accurate understanding of the market. The Machine Learning as a Service (MLaaS) market has also been analyzed in terms of value chain analysis and regulatory analysis.

Key players in the global Machine Learning as a Service (MLaaS) market include: H2O.ai, Google Inc., Predictron Labs Ltd, IBM Corporation, Ersatz Labs Inc., Microsoft Corporation, Yottamine Analytics, Amazon Web Services Inc., FICO, and BigML Inc.

Geographical Analysis:

The study details country-level aspects based on each segment and gives estimates in terms of market size. The key regional trends beneficial to the growth of the Machine Learning as a Service (MLaaS) market are discussed. Further, it analyzes the market potential for every nation. Geographic segmentation covered in the market report:

The study is a source of reliable data on:

What insights can readers gather from the Machine Learning as a Service (MLaaS) market report?




The Machine Learning as a Service (MLaaS) market report answers the following queries:

In this study, the years considered to estimate the market size of Machine Learning as a Service (MLaaS) Market are as follows:



If AI is going to help us in a crisis, we need a new kind of ethics – MIT Technology Review

What opportunities have we missed by not having these procedures in place?

It's easy to overhype what's possible, and AI was probably never going to play a huge role in this crisis. Machine-learning systems are not mature enough.

But there are a handful of cases in which AI is being tested for medical diagnosis or for resource allocation across hospitals. We might have been able to use those sorts of systems more widely, reducing some of the load on health care, had they been designed from the start with ethics in mind.

With resource allocation in particular, you are deciding which patients are highest priority. You need an ethical framework built in before you use AI to help with those kinds of decisions.

So is ethics for urgency simply a call to make existing AI ethics better?

That's part of it. The fact that we don't have robust, practical processes for AI ethics makes things more difficult in a crisis scenario. But in times like this you also have greater need for transparency. People talk a lot about the lack of transparency with machine-learning systems as "black boxes." But there is another kind of transparency, concerning how the systems are used.

This is especially important in a crisis, when governments and organizations are making urgent decisions that involve trade-offs. Whose health do you prioritize? How do you save lives without destroying the economy? If an AI is being used in public decision-making, transparency is more important than ever.

What needs to change?

We need to think about ethics differently. It shouldn't be something that happens on the side or afterwards, something that slows you down. It should simply be part of how we build these systems in the first place: ethics by design.

I sometimes feel "ethics" is the wrong word. What we're saying is that machine-learning researchers and engineers need to be trained to think through the implications of what they're building, whether they're doing fundamental research like designing a new reinforcement-learning algorithm or something more practical like developing a health-care application. If their work finds its way into real-world products and services, what might that look like? What kinds of issues might it raise?

Some of this has started already. We are working with some early-career AI researchers, talking to them about how to bring this way of thinking to their work. It's a bit of an experiment, to see what happens. But even NeurIPS [a leading AI conference] now asks researchers to include a statement at the end of their papers outlining potential societal impacts of their work.

You've said that we need people with technical expertise at all levels of AI design and use. Why is that?

I'm not saying that technical expertise is the be-all and end-all of ethics, but it's a perspective that needs to be represented. And I don't want to sound like I'm saying all the responsibility is on researchers, because a lot of the important decisions about how AI gets used are made further up the chain, by industry or by governments.

But I worry that the people who are making those decisions don't always fully understand the ways it might go wrong. So you need to involve people with technical expertise. Our intuitions about what AI can and can't do are not very reliable.

What you need at all levels of AI development are people who really understand the details of machine learning to work with people who really understand ethics. Interdisciplinary collaboration is hard, however. People with different areas of expertise often talk about things in different ways. What a machine-learning researcher means by privacy may be very different from what a lawyer means by privacy, and you can end up with people talking past each other. That's why it's important for these different groups to get used to working together.

You're pushing for a pretty big institutional and cultural overhaul. What makes you think people will want to do this rather than set up ethics boards or oversight committees, which always make me sigh a bit because they tend to be toothless?

Yeah, I also sigh. But I think this crisis is forcing people to see the importance of practical solutions. Maybe instead of saying, "Oh, let's have this oversight board and that oversight board," people will be saying, "We need to get this done, and we need to get it done properly."


Information Security Forum explores the risks and challenges of open source software – Security Magazine

Information Security Forum explores the risks and challenges of open source software | 2020-06-25 | Security Magazine


New differential privacy platform co-developed with Harvard’s OpenDP unlocks data while safeguarding privacy – Microsoft on the Issues – Microsoft

Data not only drives our modern world; it also bears enormous potential. Data is necessary to shape creative solutions to critical challenges including climate change, terrorism, income and racial inequality, and COVID-19. The concern is that the deeper you dig into the data, the more likely that sensitive personal information will be revealed.

To overcome this, we have developed and released a first-of-its-kind open source platform for differential privacy. This technology, pioneered by researchers at Microsoft in a collaboration with the OpenDP Initiative led by Harvard, allows researchers to preserve privacy while fully analyzing datasets. As a part of this effort, we are granting a royalty-free license under Microsoft's differential privacy patents to the world through OpenDP, encouraging widespread use of the platform, and allowing anyone to begin utilizing the platform to make their datasets widely available to others around the world.

Cynthia Dwork, Gordon McKay Professor of Computer Science at Harvard and Distinguished Scientist at Microsoft, said, "Differential privacy, the heart of today's landmark milestone, was invented at Microsoft Research a mere 15 years ago. In the life cycle of transformative research, the field is still young. I am excited to see what this platform will make possible."

Differential privacy does this via a complex mathematical framework that utilizes two mechanisms to protect personally identifiable or confidential information within datasets:

Through these mechanisms, differential privacy protects personally identifiable information by preventing it from appearing in data analysis altogether. It further masks the contribution of an individual, essentially rendering it impossible to infer any information specific to any particular person, including whether the dataset utilized that individuals information at all. As a result, outputs from data computations, including analytics and machine learning, do not reveal private information from the underlying data, which opens the door for researchers to harness and share massive quantities of data in a manner and scale never seen before.
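The noise-addition idea behind these mechanisms can be sketched in a few lines. This is a minimal, hypothetical illustration of the Laplace mechanism that underlies differential privacy, not the actual API of the OpenDP platform; the function and dataset names are invented for the example.

```python
import math
import random

def dp_count(records, predicate, epsilon):
    """Differentially private count: the true count plus Laplace noise
    with scale = sensitivity / epsilon. For a counting query the
    sensitivity is 1, since any one person changes the count by at most 1."""
    true_count = sum(1 for r in records if predicate(r))
    scale = 1.0 / epsilon
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Hypothetical dataset: ages of survey respondents.
ages = [23, 35, 41, 29, 52, 44, 38]
print(dp_count(ages, lambda a: a > 30, epsilon=1.0))  # near the true count of 5, but randomized
```

Smaller epsilon values add more noise and give stronger privacy; because the analyst sees only the noisy result, the presence or absence of any single individual in the dataset cannot be confidently inferred from the output.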

"We need privacy-enhancing technologies to earn and maintain trust as we use data. Creating an open source platform for differential privacy, with contributions from developers and researchers from organizations around the world, will be essential in maturing this important technology and enabling its widespread use," said Julie Brill, Chief Privacy Officer, Corporate Vice President, and Deputy General Counsel of Global Privacy and Regulatory Affairs.

Over the past year, Microsoft and Harvard worked to build an open solution that utilizes differential privacy to keep data private while empowering researchers across disciplines to gain insights that possess the potential to rapidly advance human knowledge.

"Our partnership with Microsoft in developing open source software and in spanning the industry-academia divide has been tremendously productive. The software for differential privacy we are developing together will enable governments, private companies and other organizations to safely share data with academics seeking to create public good, protect individual privacy and ensure statistical validity," said Gary King, Weatherhead University Professor and Director of the Institute for Quantitative Social Science, Harvard University.

Because the platform is open source, experts can directly validate the implementation, while researchers and others working within an area can collaborate on projects and co-develop simultaneously. The result is that we will be able to iterate more rapidly to mature the technology. Only through collaboration at a massive scale will we be able to combine previously unconnected or even unrelated datasets into extensive inventories that can be analyzed by AI to further unlock the power of data.

Large and open datasets possess an unimaginable amount of potential. The differential privacy platform paves the way for us to contribute, collaborate and harness this data, and we need your help to grow and analyze the worlds collective data repositories. The resulting insights will have an enormous and lasting impact and will open new avenues of research that allow us to develop creative solutions for some of the most pressing problems we currently face.

The differential privacy platform and its algorithms are now available on GitHub for developers, researchers, academics and companies worldwide to use for testing, building and support. We welcome and look forward to the feedback in response to this historic project.

Tags: AI, artificial intelligence, data privacy, Data Protection, Open Data, Privacy
