Chinese nationals sanctioned and charged with laundering over $100 million in cryptocurrency from hacked exchange – Lexology

On March 2, the U.S. Treasury Department's Office of Foreign Assets Control (OFAC) announced sanctions pursuant to Executive Orders 13694, 13757, and 13722 against two Chinese nationals for allegedly laundering over $100 million in stolen cryptocurrency connected to a North Korean state-sponsored cyber group that hacked cryptocurrency exchanges in 2018. According to OFAC, the two individuals materially assisted, sponsored, or provided financial, material, or technological support for, or goods or services to or in support of, the malicious cyber-enabled activity of the North Korean cyber group, which was designated by OFAC last September (covered by InfoBytes here). OFAC stated that it closely coordinated its action with the U.S. Attorney's Office for the District of Columbia and the Internal Revenue Service's Criminal Investigation Division. As a result of the sanctions, all property and interests in property of these individuals that are in the United States or in the possession or control of U.S. persons must be blocked and reported to OFAC. OFAC further noted that its regulations generally prohibit all dealings by U.S. persons or within the United States (including transactions transiting the United States) that involve any property or interests in property of blocked or designated persons, and warned foreign financial institutions that knowingly facilitating significant transactions or providing significant financial services to the designated individuals may subject them to U.S. correspondent account or payable-through account sanctions.

On the same day, the DOJ unsealed a two-count indictment against the two individuals, charging them with money laundering conspiracy and operating an unlicensed money transmitting business. The indictment claims that the individuals converted virtual currency traceable to the hack of a cryptocurrency exchange into fiat currency or prepaid Apple iTunes gift cards through accounts at various exchanges linked to Chinese banks, and then transferred the currency or gift cards to customers for a fee. According to the indictment, neither individual was registered as a money transmitting business with the Financial Crimes Enforcement Network (FinCEN), which is a federal felony offense. The complaint seeks forfeiture of 113 virtual currency accounts belonging to the individuals.

Go here to see the original:
Chinese nationals sanctioned and charged with laundering over $100 million in cryptocurrency from hacked exchange - Lexology

Will We See Real Surveillance Reform This Week? – Reason

By the end of the week, Congress is supposed to decide whether it will renew some federal surveillance regulations, reform them, or let them expire. Many legislators would probably prefer either to kick the can down the road with another temporary renewal or to pass a modest set of reforms. But several members of Congress are opposed to letting the status quo continue, and there are enough of them that we may well see reductions in the feds' power to secretly collect data about Americans without our knowledge, as well as more oversight over the secretive Foreign Intelligence Surveillance Act (FISA) Court.

The USA Freedom Act expires on Sunday. Passed after Edward Snowden exposed the ways the National Security Agency (NSA) was secretly collecting telephone and internet metadata of millions of Americans, the act both retroactively authorized the data collection and added some stricter rules to the process. Privacy- and civil rights-focused lawmakers and activists have been trying since then to rein in domestic surveillance even further. Sen. Rand Paul (R-Ky.) has been using his positive relationship with President Donald Trump (and the president's anger at the surveillance of his campaign, which ultimately led to a failed impeachment attempt) to push for reforms.

The Hill reported on Sunday that Paul is, as he has been in the past, the loudest voice stopping Congress from quietly keeping things the way they are:

Paul says he won't support a short-term extension and appeared skeptical that he would back a larger deal that paired a USA Freedom extension with reforms to FISA, though he added that he could support some of the surveillance reforms if they get standalone votes, as amendments, for example.

He's also pushing for an amendment vote to prohibit FISA warrants from being used against American citizens and to prohibit information obtained in the FISA courts from being used against a U.S. citizen in domestic courts.

"I'm not for any extension. I'm for fixing it.I'll vote no on any extension," Paul said.

He's not alone among Republicans in the Senate, and he's got plenty of support from Democrats in the House as well, in demanding changes. Rep. Doug Collins (R-Ga.) went on Fox Business yesterday to say that there weren't enough votes in the Democratic-controlled House to reauthorize the USA Freedom Act unless there were reforms.

Reform-minded members of Congress aren't all focused on the same reforms. The Democrats want to make sure that the records collection program is officially dead. (The NSA has unofficially stopped doing it, but the authorization still exists.) Paul and some other Republicans are using the problems with the warrants used to wiretap former Trump aide Carter Page to call for more independent oversight to review and advise the FISA Court on warrants. Meanwhile, Attorney General William Barr and Senate Majority Leader Mitch McConnell (R-Ky.) prefer renewal without changes.

Nothing in these reforms would likely have prevented what happened with Page, since it's not the USA Freedom Act's authorities that were used to snoop on him. And based on the angry reaction of the FISA Court's judges when they found out the FBI had misled them in parts of the warrant application, and their decision to call for an independent reviewer, it's not clear additional oversight of the court itself would have stopped what happened with Page. The problems seem to have originated from within the FBI itself.

But this is probably the only way to get Trump to care about restraining the use of secret surveillance on the rest of us. That is surely why Paul is hammering on about what happened to Page and Trump.

Paul's proposed reforms are probably a bridge too far to actually pass, but it's an admirable effort. Paul seems unlikely to be able to convince Congress to eliminate domestic FISA warrants entirely. But just as the USA Freedom Act was a compromise reform forced in part due to Paul's stubborn refusal to shut up about Americans' rights after Snowden's reveal, his prominent status in Trump World will guarantee that at least the broadest reforms will be considered.

But will they actually be debated? That's not so clear. There was already an aborted effort to attach reauthorization to a coronavirus emergency bill last week. With a deadline looming, there's sure to be an effort to roll reforms of some sort into other must-pass legislation. It's just not clear as yet how far those reforms will go.

Read more:
Will We See Real Surveillance Reform This Week? - Reason

Opinion: Berkeley reaches out to Puerto Rico, part of its tradition of helping others – Berkeleyside

Buffeted by hurricanes, earthquakes, a hostile White House and decades of exploitation and ecological destruction, Puerto Rico strikes many as an ongoing disaster. Some, however, see Puerto Rico's plight as an opportunity. Tired of waiting for solutions from the top down, a nascent Puerto Rican grassroots movement supported by widespread social unrest is attempting to change its society and economy from the bottom up, and Berkeley is lending a helping hand.

Instead of waiting for disaster assistance that never comes, this new generation of Puerto Ricans is looking for ways to work together and build their own independent, sustainable future. Hurricane Maria was the catalyst that ignited the cooperative grassroots movement. "Maria changed everything," island residents constantly recount.

The devastating storm showed clearly that the emperor had no clothes, and that Puerto Ricans were on their own when it came to extreme vulnerability to climate change, food insecurity (90% of the island's food is imported) and economic recovery.

Realizing they could not wait for government action, Puerto Ricans reacted to events by creating food kitchens, emergency search groups and shelters. These actions helped raise consciousness about the strength of, and need for, grassroots community action. Many of these operations became ongoing activities, and many people have chosen to stay on the island and contribute to the back-to-the-land movement to produce more healthy food locally.

Berkeley residents have been involved in supporting these grassroots community groups, just as they have been throughout the decades on trips to Cuba, South Africa, and Vietnam. Green Cities Fund, established in 2005 by me and my journalist wife, TT Nhu (known as Nhu), is supporting projects in the new Puerto Rico.

Soon (when the coronavirus subsides) a team of chef graduates of Chez Panisse, including Dominica Saloman and Melissa Fernandez, and restaurateur, chef, winemaker and author Narsai David will join organic farmer Al Courchesne, owner of Frog Hollow Farm (a major Chez Panisse supplier), and others on a solidarity tour of Puerto Rico in support of its local organic sustainable food movement. Local street artist Anthony Holdsworth, whose son is building microgrids in Puerto Rico to enable farms and villages to become energy independent, plans to join the group to record the journey. A similar tour to Cuba in 2012 led to innovative improvements in its cuisine.

These people-to-people efforts have long been part of the Republic of Berkeley's DNA, as its citizens and City Council have a long tradition of taking political action on matters far beyond Berkeley's borders which affect the nation and the world. The Free Speech Movement began in Berkeley, and Berkeley resident Bob Baldock was one of the few U.S. citizens to participate in the Cuban Revolution as a combatant in Fidel Castro's unit based in the Sierra Maestra in 1958. The man who released the Pentagon Papers, Daniel Ellsberg, and filmmaker Judith Ehrlich, whose Oscar-nominated documentary The Most Dangerous Man in America inspired Edward Snowden, are also part of this Berkeley tradition.

In 1972, Nhu and I assisted UNICEF's expansion into Laos, Cambodia and North Vietnam and, in 1975, when thousands of Vietnamese orphans arrived in the U.S. at the end of the war, Nhu and her friends discovered that many were not orphans and had families searching for them. This resulted in a class-action lawsuit against the U.S. government and adoption agencies, which ultimately resulted in many children's return to their Vietnamese families in the United States. Daughter from Danang, a documentary on one of the orphans by Berkeley filmmaker Gail Dolgin, was voted best documentary at the 2002 Sundance Film Festival and nominated for an Oscar.

Our work has also extended to Afghanistan, where we established Parwaz, the first Afghan-run microlending organization. Much earlier, in 1961, I had the privilege of helping to train, at U.C. Berkeley, the first group of Peace Corps volunteers, and in 1966 I worked with plastic and reconstructive surgeon Arthur Barsky, a veteran of the Abraham Lincoln Brigade in Spain, to establish a hospital in Vietnam to treat war-injured children. Nhu and I recently visited the hospital on the 50th anniversary of its official founding (having operated for several years in a Saigon apartment house liberated from the American embassy). It was a journalist, Martha Gellhorn, who alerted me to the plight of children in Vietnam, and an article in the New York Times that brought the new movement in Puerto Rico to our attention.

Berkeley resident Eric Leenson, co-founder of Progressive Assets Management, the first socially responsible investment fund, is spearheading Green Cities Fund's work in Puerto Rico. In 2019, Eric helped organize the first national gathering of the new grassroots cooperative organizations. Over 240 people from 130 organizations, 25 of whom were non-island experts from seven countries in the region, attended in a truly multigenerational environment to address a variety of key topics, including agroecology, sustainable tourism, and the construction of a stronger cooperative movement.

The Christopher Reynolds Foundation funded the gathering and also donated an Oggun, a small, inexpensive, easy-to-fix tractor suitable for small organic farms. The design for the tractor is open source, most of the parts can be manufactured locally, and it can be assembled in a day by a mechanic. It will soon be manufactured in Puerto Rico.

California is also contributing to survival and sustainability in Puerto Rico through innovative earthquake-proof superadobe housing developed by the California Institute of Earth Architecture (CalEarth). Much of the housing in Puerto Rico is substandard and unable to withstand hurricanes and earthquakes. The well-insulated, earthquake-proof superadobe construction is easily built from earth and other local materials. Here's a local TV story on the project.

The Green Cities group soon visiting Puerto Rico will meet with innovative Puerto Rican chefs who are substituting local foods into the island's cuisine. The group will also visit the farms producing these foods. They will meet with the organizers of the grassroots movement and also visit Plenitud, an organization devoted to the grassroots rebuilding of communities and to making them sustainable through agriculture, home and infrastructure rebuilding, community organization and energy independence. Hopefully, this will be the beginning of an ongoing program of exchange and assistance.

Continue reading here:
Opinion: Berkeley reaches out to Puerto Rico, part of its tradition of helping others - Berkeleyside

Chips that pass in the night: How risky is RISC-V to Arm, Intel and the others? Very – The Register

Column How well does Intel sleep? It's just rounded off a record year with a record quarter, turning silicon into greenbacks more efficiently than ever, redeeming recent wobbles in the data centre market and missteps in fabrication with double-digit growth.

The company should be slumbering with all the unworried ease of Scrooge McDuck on a mattress stuffed with thousand-dollar bills. Yet the wake-up packet of unease should be pinging its management port with some insistence right now.

Intel remains a one-trick pony, entirely dependent on the x86 ISA. It has no game in GPUs, it is tuning out of its 5G interests, it has long handed over handsets to Arm. It has memory, it has Wi-Fi, it has wired networking, but compared to the cash cows of edge and central x86, these are barely cash coypu.

One barbarian is at the gates with a refurbished siege engine. AMD has finally got its architectural, process node and marketing acts together and is making up for lost time while Intel is still recalibrating from 10nm disappointment. Yet this is familiar turf for Intel, which remains a very formidable machine with enormous resources and flexibility. When AMD still had its own chip fabs a decade or so ago and was having its own process woes, it suffered: Intel is making record profits. It knows how to sell chips on its own turf. It'll have some bumps getting out of 10nm and the next couple of years may not be quite such record-breakers, but x86 remains its to lose.

The smaller, nimbler and more exciting competitor is going to be harder to defend against in the long term. As it prepares to celebrate the 10th year since its inception, RISC-V is showing the most dangerous trait in any competitor, the ability to redefine the ecosystem.

RISC-V has its conceptual roots in 1980s Berkeley, in part as a direct reaction to the trend towards increasing CPU complexity exemplified by Intel's development of the 8080 via the 8086 into the 80386 during the same epoch. That approach added instruction set features in silicon as Moore's Law made more transistors affordable; RISC went the other way, keeping the core features small and using Moore's Law to speed them up.

The RISC-V Foundation, a collaboration of semiconductor companies, was formed in 2015.

As an architecture, it came into being in 2010, again at Berkeley, in the Parallel Computing Laboratory, funded (oh, the irony) by Microsoft and Intel. It absorbed all the lessons of the previous 30 years, not just architecturally but in how the industry itself worked. The RISC idea saw some success among traditional processor companies, but the big winner was the British upstart Arm: technically clever and with what proved a killer processing-power-per-watt advantage, but which really shone because it was licensed, not made. Manufacturers bought the design, not the chip, and mixed it in with their own circuitry. Couldn't do that with Intel.

RISC-V takes that further. The obvious advantage over Arm is that RISC-V's instruction set architecture is open source; you can just use it as you wish without paying royalties. But as with open-source software, the fact it's free is misleading. You can buy a feature phone with an Arm-based chip in it for a tenner: whatever pennies of that go in CPU licensing don't matter. What RISC-V has that Arm doesn't is extensibility. If you need to add features in the instruction set, go ahead. If you need to tune for very low power or very high throughput, you can.
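
To make that concrete: the base spec reserves opcode space for vendor extensions (the "custom-0" major opcode, 0b0001011), so adding an instruction is, at the encoding level, a matter of picking fields within that space. Here is a minimal Python sketch that packs a hypothetical "dotprod" accelerator instruction into a 32-bit R-type word; the field layout and the custom-0 opcode come from the published RISC-V spec, while the instruction itself is invented for illustration.

```python
def encode_rtype(opcode, rd, funct3, rs1, rs2, funct7):
    """Pack a 32-bit RISC-V R-type instruction word from its bit fields."""
    return ((funct7 & 0x7F) << 25) | ((rs2 & 0x1F) << 20) | \
           ((rs1 & 0x1F) << 15) | ((funct3 & 0x07) << 12) | \
           ((rd & 0x1F) << 7) | (opcode & 0x7F)

CUSTOM0 = 0b0001011  # major opcode the spec reserves for vendor extensions

# Hypothetical "dotprod x10, x11, x12" instruction for a vector accelerator.
word = encode_rtype(CUSTOM0, rd=10, funct3=0b000, rs1=11, rs2=12, funct7=0b0000001)
print(f"dotprod encodes as {word:#010x}")
```

Because the opcode sits in reserved space, a standard RISC-V core simply traps on it, while a core extended with the accelerator executes it. That is exactly the freedom an Arm licensee doesn't get.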

Even that wouldn't be much of an advantage by itself. Designing architectural innovations in silicon is like architecture in brick: easy enough on paper, but until you build the bugger you can't be sure it won't fall down. The process of checking a design for reliable operation is called verification, and when you have a billion-transistor-class CPU with near-infinite permutations of state, you only get to verify as much as you can afford: nowhere close to the whole thing. Arm, Intel, AMD, IBM et al spend a lot of time and money on verification so they can sell a complete design with a plausible "Trust us" stamp on it. If you're building your own RISC-V design and can't afford equivalent verification, how do you trust yourself?

The good news for the RISC-V ecosystem is that verification tools are appearing that automate the process as far as possible. Open source means the majority of your CPU design has been very well tested; your innovations live in an understood and exercised environment, just as open-source software has produced an exceptionally stable yet extensible environment. Conversely, the "Trust us" stamp is looking quite tarnished. Heartbleed, Spectre and the very latest Intel Management Engine vulnerability are all either signs of verification failure or, even worse, problems that came out during verification but were too expensive to fix and too dangerous to admit. That's why buildings fall down.

So, at the same time as the monolithic approach to CPU design is looking the most vulnerable, the RISC-V approach is getting the same momentum as open source software did in the Noughties. It's in supercomputers. It's in IoT. Samsung is making it. The tools are appearing, the people are learning it, it's becoming the right answer to a lot of questions.

To be fair, Intel shouldn't be losing as much sleep over RISC-V as Arm, which now runs the risk of being another of SoftBank's brilliantly timed investments. Yet the openness and expanding ecosystem of RISC-V have the potential, as no other competitor does, of restricting Intel's home-turf advantage, much as Microsoft lost the web and mobile to open-source software based on common architectural ideas.

It doesn't matter how good a dinosaur you are if your environment changes. That's what RISC-V represents.


More here:
Chips that pass in the night: How risky is RISC-V to Arm, Intel and the others? Very - The Register

GitHub’s Plan to Freeze Your Code for Thousands of Years – thenewstack.io

Recently I discovered some computer code I'd written will outlive me by many centuries. A copy of it has been stored in a chilly cave in the Arctic Circle.

It's part of a fascinating project by GitHub, the 2020 Arctic Vault program, which brings modern technologies into a surprisingly primitive environment to deliver an unexpected honor for a wide swath of the 100 million code repositories currently hosted on GitHub's servers: archiving all this material in perpetuity in an exotic archipelago in Norway, near the northernmost town in the world.

GitHub's vice president of special projects, Thomas Dohmke, tells news.com.au that GitHub is uniquely positioned for the archival effort and has the responsibility to protect and preserve the collaborative work of millions of developers around the world. On its webpage for the project, GitHub strikes a similarly grand tone, calling open source software "a hidden cornerstone of modern civilization" and "the shared heritage of all humanity."

"We will protect this priceless knowledge by storing multiple copies, on an ongoing basis, across various data formats and locations," he said.

On a visit, GitHub's CEO Nat Friedman described the storage location, a decommissioned coal mine, as "more mine-y and rustic and raw-hole-in-the-rock than I thought it would be," according to a recent article in Bloomberg. The news service goes on to note that, to Friedman, it's a natural next step. Open source software, in his view, is one of the great achievements of our species, up there with the masterpieces of literature and fine art.

And it's not the only priceless knowledge being stored in this remote location. According to Bloomberg, the other shelves in the mine include Vatican archives, Italian movies, Brazilian land registry records, and the recipe for a certain burger chain's special sauce.

But what's the rationale for this massive effort? The project's page cites the threat of code being abandoned, forgotten, or lost. Worse yet, how would the code otherwise be saved in case of a global catastrophe?

"There exists a range of possible futures in which working modern computers exist, but their software has largely been lost to bit rot. The GitHub Archive Program will include much longer-term media to address the risk of data loss over time," the site notes.

Of course, the code repository service has also given some thought to how the future might use our code. "There is a long history of lost technologies from which the world would have benefited, as well as abandoned technologies which found unexpected new uses," explains the project web page. "It is easy to envision a future in which today's software is seen as a quaint and long-forgotten irrelevancy until an unexpected need for it arises."

Future historians might see the significance in our age of open source ubiquity, volunteer communities, and Moore's Law.

Which code blocks make the cut? According to GitHub: The archive will include every repo with any commits between the announcement at GitHub Universe on Nov. 13 and 02/02/2020, every repo with at least 1 star and any commits from the year before the snapshot (02/02/2019 to 02/02/2020), and every repo with at least 250 stars. Plus, gh-pages for any repository that meets the aforementioned criteria.
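
Those selection rules are mechanical enough to express directly. Here is a small sketch, assuming a hypothetical repo record with a star count and a list of commit dates (GitHub's actual snapshot tooling is not public):

```python
from datetime import date

ANNOUNCEMENT = date(2019, 11, 13)  # GitHub Universe announcement
SNAPSHOT = date(2020, 2, 2)        # snapshot date
YEAR_BEFORE = date(2019, 2, 2)

def qualifies(stars, commit_dates):
    """Apply the archive's published inclusion rules to one repo."""
    active_since_announcement = any(ANNOUNCEMENT <= d <= SNAPSHOT for d in commit_dates)
    active_in_prior_year = any(YEAR_BEFORE <= d <= SNAPSHOT for d in commit_dates)
    return (active_since_announcement
            or (stars >= 1 and active_in_prior_year)
            or stars >= 250)

# One star and a commit in the year before the snapshot: archived.
print(qualifies(stars=1, commit_dates=[date(2019, 6, 15)]))  # True
```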

The Norwegian data-storing company Piql, whose custom film and archiving technologies will allow the project to store terabytes of data for over 1,000 years, brags that the code is now headed into "the gold standard of long-term data storage."

But besides offering vault storage services, Piql also offers a unique form of data digitization. Piql is storing the code on hundreds of reels of film made from polyester and silver halide. Bloomberg points out they're coated with an iron oxide powder for added Armageddon-resistance. Each of its microfilm-like frames holds over 8.8 million pixels. Piql explains that its method involves converting 1s and 0s into QR codes. "No electricity or other human intervention is needed as the climatic conditions in the Arctic are ideal for long-term archival of film," explained a Piql web page.
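
Piql's frame format and density are proprietary, but the underlying idea, serializing bytes into optically readable codes, is easy to sketch with the open-source qrcode package. This toy version packs a tiny fraction of what a real 8.8-million-pixel frame holds:

```python
# pip install qrcode[pil]
import base64
import qrcode

CHUNK_SIZE = 1024  # bytes per toy "frame"; real Piql frames are far denser

def file_to_frames(path):
    """Split a file into chunks and render each chunk as one QR-code frame."""
    with open(path, "rb") as f:
        data = f.read()
    for i in range(0, len(data), CHUNK_SIZE):
        # Base64 keeps the payload as QR-friendly text.
        payload = base64.b64encode(data[i:i + CHUNK_SIZE]).decode("ascii")
        qrcode.make(payload).save(f"frame_{i // CHUNK_SIZE:06d}.png")

file_to_frames("my_repo_snapshot.tar")
```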

By using a self-contained and technology-independent storage medium, future generations will be able to read back the information, according to Piql. The project also includes instructions on how to unpack and read the code.

Bloomberg even notes that there's a treaty in place which keeps Svalbard neutral in times of war. Because it's all stored on offline film reels, GitHub doesn't have to worry about power outages. An added layer of security comes from its remote location. One GitHub video points out that the Svalbard archipelago is home to the northernmost town in the world as well as thousands of polar bears. The video's description notes that though it's called the GitHub Arctic Code Vault, it's actually closer to the North Pole than to the Arctic Circle.

It's been fun to watch the reactions to GitHub's video. "The future will be amazed by my JavaScript Calculator," joked one comment.

Others couldn't resist commenting on the Arctic location. ("Now my code can freeze before it even gets run.") Another naysayer even quipped, "When your code is so bad that you need to bury it under the permafrost..."

GitHub's FAQ says the company plans to re-evaluate the program (and its storage medium) every five years, at which point it'll decide whether to take another snapshot.

And if you're curious what it's like in a Svalbard mine, a nearby coal mine is offering tours. Most of Svalbard's old Norwegian and Russian coal mines have shut down, explains Bloomberg, so locals have rebranded their vast acres of permafrost as an attraction to scientists, doomsday preppers, and scientist doomsday preppers.

Link:
GitHub's Plan to Freeze Your Code for Thousands of Years - thenewstack.io

Washington increases pressure on Beijing over Chinese media – World Socialist Web Site

By Ben McGrath, 10 March 2020

The Trump administration has stepped up its punitive measures against Chinese media in the US after Beijing expelled three Wall Street Journal (WSJ) reporters last month, placing a limit on the number of Chinese citizens eligible to work at five of Beijing's news outlets. The State Department announced on March 2 that the five agencies will be required to reduce the total number of Chinese nationals they employ from 160 to 100 by March 13.

The five media outlets include China's official news agency Xinhua, China Radio International, China Global Television Network, China Daily Distribution Corporation and Hai Tian Development USA; the latter two print and distribute the newspapers China Daily and People's Daily respectively.

In response to the latest restrictions, Chinese Foreign Ministry spokeswoman Hua Chunying suggested Beijing will take further measures. She posted on Twitter: "Reciprocity? 29 US media agencies in China VS 9 Chinese ones in the US. Multiple-entry to China VS Single-entry to the US. 21 Chinese journalists denied visas since last year. Now the US kicked off the game, let's play."

Secretary of State Mike Pompeo justified the decision, saying: "For years, the government of the People's Republic of China (PRC) has imposed increasingly harsh surveillance, harassment, and intimidation against American and other foreign journalists operating in China. President Trump has made clear that Beijing's restrictions on foreign journalists are misguided. The US government has long welcomed foreign journalists, including PRC journalists, to work freely and without threat of reprisal."

Beijing announced on February 19 that it would expel the three journalists after accusing the WSJ of denigrating China's efforts to deal with the Covid-19 coronavirus outbreak. None of the three journalists had been involved in writing the opinion piece published February 3 that provided the impetus for the expulsions, but all had been involved in criticizing the treatment of Uighurs in China's Xinjiang Province.

The day before Beijing's decision, Washington had designated the five media outlets currently at the center of the conflict as foreign diplomatic missions. As a result, they are required to declare all of their property holdings and seek approval for acquisitions of new property. Their employees are forced to register with the State Department, and all five agencies are subject to greater monitoring by the US government.

Washington is no defender of a free press. Trump and his allies have regularly accused the media of being the "enemy of the people" while encouraging violence against journalists. Trump even praised the 2017 assault on Guardian reporter Ben Jacobs by then-candidate Greg Gianforte, a Republican who slammed Jacobs to the ground. The reporter was covering Gianforte's campaign for the US House of Representatives. Gianforte subsequently won the election but was also convicted of assault.

So volatile has the situation become for reporters covering the US presidential election that the Committee to Protect Journalists (CPJ) is issuing safety kits to journalists covering the election, a first in the CPJ's forty-year history. The kits offer basic safety information on physical, digital and psychological safety resources and tools.

The Trump administration's most vicious attack on freedom of the press is the persecution of Julian Assange and Chelsea Manning, the goal of which is to intimidate journalists and whistleblowers into remaining silent about Washington's crimes. This persecution began under President Barack Obama and the Democrats, who support the punitive measures against Assange and Manning no less than the Republicans do.

Washington is demanding Assange's extradition from the United Kingdom, where the Australian journalist is being subjected to psychological torture in Belmarsh prison. Assange, along with whistleblower Manning, exposed US war crimes and other offenses, and he could now face the death penalty if sent to the US. Manning has been vindictively held behind bars for refusing to give false testimony in Assange's case.

Washington's decision last week to further restrict Chinese media in the US is a continuation of an anti-China policy that has been prosecuted by both the Republicans and the Democrats. As with the Obama administration's "pivot to Asia," the Trump government is increasingly moving the US onto a war footing with China, applying military and economic pressure to Beijing in an attempt to force the Stalinist regime to bow to US demands.

However, such an agenda finds no mass support among American workers and youth after nearly 30 years of unending war. Therefore, Washington is using empty phrases about a free press and democracy in order to justify its war preparations. Publications like the WSJ and the New York Times have contributed to this by demonizing China in support of so-called human rights. They have even claimed that Chinese censorship contributed to the Covid-19 outbreak, stating that it never would have happened in the supposedly free and democratic West.

In a January 29 article in the New York Times, Nicholas Kristof, an ardent supporter of neocolonial campaigns waged in the name of human rights, denounced Chinese President Xi Jinping in a comment headlined "As Coronavirus Spreads, the World Pays for China's Dictatorship."

Criticizing Beijing's initial cover-up of the novel coronavirus outbreak, Kristof wrote: "One reason for the early cover-up is that Xi's China has systematically gutted institutions like journalism, social media, nongovernmental organizations, the legal profession and others that might provide accountability."

The claim that the US's free press would have somehow stopped the crisis is belied by the fact that the US media has engaged in countless cover-ups leading to complete disasters, including the Iraq War and the destruction of broad regions in the Middle East and North Africa. The US media is now aiding Washington in deflecting fears and anger over the virus onto China as it becomes increasingly clear that the US ruling class is not only totally unprepared but entirely indifferent to the fate of broad masses of people.


Read more:
Washington increases pressure on Beijing over Chinese media - World Socialist Web Site

Assange trial rehearsal? Hung jury results in mistrial for former CIA tech accused of handing Vault 7 docs to WikiLeaks – RT

Federal prosecutors were unable to convince a jury on any of the spying-related charges against an ex-CIA engineer accused of stealing reams of classified material, in what may be a dry run for the case against Julian Assange.

In a significant blow to prosecutors on Monday, jurors failed to come to a verdict on eight central counts against former CIA software engineer Joshua Schulte, who was charged with stealing thousands of pages of classified information on the agency's secret hacking tools and passing them to WikiLeaks, in what later became its Vault 7 release, the largest breach of classified material in CIA history.

While Schulte was found guilty of contempt of court and making false statements to investigators, a hung jury on the remaining eight charges, including illegal gathering and transmission of national defense information, prompted District Judge Paul Crotty to order a mistrial and dismiss the jurors on the case, who had deemed themselves "extremely deadlocked" in a note to the judge.

The split verdict came after nearly a full week of messy deliberations, which saw one juror removed for researching the facts of the case against Crotty's orders. She was never replaced, however, leaving a short-handed panel to deliver the final decision.

The former technician left his job at the CIA's Langley headquarters in 2016 and was charged some two years later for his alleged role in the Vault 7 leak. But prosecutors had difficulty tying Schulte to the disclosure throughout his four-week trial, with jurors often mystified by a complicated maze of technical evidence.

The case may offer parallels to that of WikiLeaks co-founder Julian Assange, who faces 17 charges under the World War I-era Espionage Act and up to 175 years in prison over his role in the publication of the Iraq and Afghan war logs in 2010. Assange is accused of helping leaker Chelsea Manning (then known as Bradley) hack into military computers to obtain classified material. If he is extradited from the UK to stand trial in an American courtroom, prosecutors would likely have to produce similar technical forensics to prove his involvement, precisely what the government was unable to do in Schulte's case.

Arguing that the CIA's computer network had widely known vulnerabilities, including poor password protections, Schulte's defense insisted prosecutors had failed to prove his role in the breach. They noted it was possible another actor gained access to his workstation, pointing to another CIA employee, identified only as "Michael," as a potential culprit.

The CIA later placed that employee on administrative leave for refusing to cooperate with the investigation, which suggested the government had doubt about the case against Mr. Schulte, defense attorney Sabrina Shroff said in her closing argument on Monday.

Prosecutors are likely to demand a retrial for Schulte, who also stands accused of possessing child pornography, allegedly stored on devices found during a search of his home. He will be tried separately on those charges, facing a total of 15 counts.


Here is the original post:
Assange trial rehearsal? Hung jury results in mistrial for former CIA tech accused of handing Vault 7 docs to WikiLeaks - RT

4 ways to fine-tune your AI and machine learning deployments – TechRepublic

Life cycle management of artificial intelligence and machine learning initiatives is vital in order to rapidly deploy projects with up-to-date and relevant data.


An institutional finance company wanted to improve time to market on the artificial intelligence (AI) and machine learning (ML) applications it was deploying. The goal was to reduce time to delivery on AI and ML applications, which had been taking 12 to 18 months to develop. The long lead times jeopardized the company's ability to meet its time-to-market goals in areas of operational efficiency, compliance, risk management, and business intelligence.


After adopting life-cycle management software for its AI and ML application development and deployment, the company was able to reduce its AI and ML application time to market to days, and in some cases, hours. The process improvement enabled corporate data scientists to spend 90% of their time on data model development, instead of 80% of their time on resolving technical challenges caused by unwieldy deployment processes.

This is important because the longer you extend your big data and AI and ML modeling, development, and delivery processes, the greater the risk that you end up with modeling, data, and applications that are already out of date by the time they are ready to be implemented. In the compliance area alone, this creates risk and exposure.

"Three big problems enterprises face as they roll out artificial intelligence and machine learning projects is the inability to rapidly deploy projects, data performance decay, and compliance-related liability and losses," said Stu Bailey, chief technical officer of ModelOP, which provides software that deploys, monitors, and governs data science AI and ML models.


Bailey believes that most problems arise out of a lack of ownership and collaboration between data science, IT, and business teams when it comes to getting data models into production in a timely manner. In turn, these delays adversely affect profitability and time to business insight.

"Another reason that organizations have difficulty managing the life cycle of their data models is that there are many different methods and tools today for producing data science and machine language models, but no standards for how they're deployed and managed," Bailey said.

The management of big data, AI, and ML life cycles can be a prodigious task that goes beyond having software and automation that does some of the "heavy lifting." Many organizations also lack policies and procedures for these tasks. In this environment, data can rapidly become dated, application logic and business conditions can change, and the new behaviors that humans must teach to machine learning applications can become neglected.


How can organizations ensure that the big data, AI, and ML applications they put time and talent into remain relevant?

Most organizations acknowledge that collaboration between data science, IT, and end users is important, but they don't necessarily follow through. Effective collaboration between departments depends on clearly articulated policies and procedures that everyone adheres to in the areas of data preparation, compliance, speed to market, and learning for ML.

Companies often fail to establish regular intervals for updating the logic and data of big data, AI, and ML applications in the field. The learning update cycle should be continuous; it's the only way you can assure concurrency between your algorithms and the world in which they operate.
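
In practice, a continuous update cycle means a scheduled check that compares live performance against the accuracy recorded at deployment and triggers retraining when the gap grows, catching the "data performance decay" Bailey describes. A minimal sketch, with a hypothetical drift threshold and scikit-learn metrics (not any particular vendor's tooling):

```python
from sklearn.metrics import accuracy_score

DRIFT_TOLERANCE = 0.05  # hypothetical threshold: retrain after a five-point drop

def needs_retraining(model, baseline_accuracy, X_recent, y_recent):
    """Compare live accuracy on fresh labeled data to the deployment baseline."""
    live_accuracy = accuracy_score(y_recent, model.predict(X_recent))
    return (baseline_accuracy - live_accuracy) > DRIFT_TOLERANCE

# Run this on a schedule (cron, Airflow, etc.); when it returns True,
# kick off the retraining pipeline so the model keeps pace with the world.
```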

Like their transaction system counterparts, some AI and ML applications will eventually have seen their day. This is the end of their life cycles, and the appropriate thing to do is retire them.

If you can automate some of your life cycle maintenance functions for big data, AI, and ML, do so. Automation software can handle handoffs between data science, IT, and production, and it makes the deployment process that much easier.


Originally posted here:
4 ways to fine-tune your AI and machine learning deployments - TechRepublic

AI-powered honeypots: Machine learning may help improve intrusion detection – The Daily Swig

John Leyden, 09 March 2020 at 15:50 UTC (Updated: 09 March 2020 at 16:04 UTC)

Forget crowdsourcing, here's crook-sourcing

Computer scientists in the US are working to apply machine learning techniques in order to develop more effective honeypot-style cyber defenses.

So-called deception technology refers to traps or decoy systems that are strategically placed around networks.

These decoy systems are designed to act as honeypots, so that once attackers have penetrated a network, they will attempt to attack them, setting off security alerts in the process.

Deception technology is not a new concept. Companies including Illusive Networks and Attivo have been working in the field for several years.

Now, however, researchers from the University of Texas at Dallas (UT Dallas) are aiming to take the concept one step further.

The DeepDig (DEcEPtion DIGging) technique plants traps and decoys onto real systems before applying machine learning techniques in order to gain a deeper understanding of attackers' behavior.

The technique is designed to use cyber-attacks as free sources of live training data for machine learning-based intrusion detection systems.

Somewhat ironically, the prototype technology enlists attackers as free penetration testers.

Dr Kevin Hamlen, endowed professor of computer science at UT Dallas, explained: "Companies like Illusive Networks, Attivo, and many others create network topologies intended to be confusing to adversaries, making it harder for them to find real assets to attack."

The shortcoming of existing approaches, Dr Hamlen told The Daily Swig, is that such deceptions do not learn from attacks.

"While the defense remains relatively static, the adversary learns over time how to distinguish honeypots from a real asset, leading to an asymmetric game that the adversary eventually wins with high probability," he said.

In contrast, DeepDig turns real assets into traps that learn from attacks using artificial intelligence and data mining.

Turning real assets into a form of honeypot has numerous advantages, according to Dr Hamlen.

"Even the most skilled adversary cannot avoid interacting with the trap, because the trap is within the real asset that is the adversary's target, not a separate machine or software process," he said.

This leads to a symmetric game in which the defense continually learns and gets better at stopping even the most stealthy adversaries.
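
The team's own algorithms and evaluation data have been released alongside the paper, but as a flavor of the crook-sourcing idea, here is a deliberately simplified sketch (not the DeepDig implementation): every session that touches an embedded decoy is treated as a free, labeled malicious sample and fed incrementally to a scikit-learn classifier. The feature set and field names are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# An incrementally trainable (linear) intrusion detector.
detector = SGDClassifier()

def featurize(session):
    """Hypothetical feature vector for one logged session:
    request rate, bytes transferred, count of decoy paths touched."""
    return np.array([[session["req_rate"], session["bytes"], session["decoys_hit"]]])

def on_session_logged(session):
    """Crook-sourcing in miniature: the traps label the data for us.
    Touching any decoy marks the session malicious (1); otherwise benign (0)."""
    label = 1 if session["decoys_hit"] > 0 else 0
    detector.partial_fit(featurize(session), np.array([label]),
                         classes=np.array([0, 1]))
```

Because the labels come from attacker behavior against real assets, the detector keeps improving as adversaries probe it, which is the symmetric game Hamlen describes.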

The research, which has applications in the field of web security, was presented in a paper (PDF) entitled Improving Intrusion Detectors by Crook-Sourcing at the recent Computer Security Applications Conference in Puerto Rico.

The research was funded by the US federal government. The algorithms and evaluation data developed so far have been publicly released to accompany the research paper.

It's hoped that the research might eventually find its way into commercially available products, but this is still some time off, as the technology is only at the prototype stage.

"In practice, companies typically partner with a university that conducted the research they're interested in to build a full product," a UT Dallas spokesman explained. "Dr Hamlen's project is not yet at that stage."


Read more from the original source:
AI-powered honeypots: Machine learning may help improve intrusion detection - The Daily Swig

If AI’s So Smart, Why Can’t It Grasp Cause and Effect? – WIRED

Here's a troubling fact. A self-driving car hurtling along the highway and weaving through traffic has less understanding of what might cause an accident than a child who's just learning to walk.

A new experiment shows how difficult it is for even the best artificial intelligence systems to grasp rudimentary physics and cause and effect. It also offers a path for building AI systems that can learn why things happen.

The experiment was designed "to push beyond just pattern recognition," says Josh Tenenbaum, a professor at MIT's Center for Brains, Minds & Machines, who worked on the project with Chuang Gan, a researcher at MIT, and Kexin Yi, a PhD student at Harvard. "Big tech companies would love to have systems that can do this kind of thing."

The most popular cutting-edge AI technique, deep learning, has delivered some stunning advances in recent years, fueling excitement about the potential of AI. It involves feeding a large approximation of a neural network copious amounts of training data. Deep-learning algorithms can often spot patterns in data beautifully, enabling impressive feats of image and voice recognition. But they lack other capabilities that are trivial for humans.

To demonstrate the shortcoming, Tenenbaum and his collaborators built a kind of intelligence test for AI systems. It involves showing an AI program a simple virtual world filled with a few moving objects, together with questions and answers about the scene and what's going on. The questions and answers are labeled, similar to how an AI system learns to recognize a cat by being shown hundreds of images labeled "cat."

Systems that use advanced machine learning exhibited a big blind spot. Asked a descriptive question such as "What color is this object?" a cutting-edge AI algorithm will get it right more than 90 percent of the time. But when posed more complex questions about the scene, such as "What caused the ball to collide with the cube?" or "What would have happened if the objects had not collided?" the same system answers correctly only about 10 percent of the time.
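
The benchmark itself (video scenes with labeled question-answer pairs) isn't reproduced here, but the scoring behind that 90-versus-10 split is simply accuracy tallied per question type. A toy sketch with invented items in the style the article describes:

```python
from collections import defaultdict

# Hypothetical labeled benchmark items with model predictions attached.
items = [
    {"type": "descriptive",    "gold": "red",      "pred": "red"},
    {"type": "causal",         "gold": "the push", "pred": "the cube"},
    {"type": "counterfactual", "gold": "at rest",  "pred": "falls over"},
]

correct, total = defaultdict(int), defaultdict(int)
for item in items:
    total[item["type"]] += 1
    correct[item["type"]] += int(item["pred"] == item["gold"])

for qtype in total:
    print(f"{qtype:>15}: {correct[qtype] / total[qtype]:.0%} correct")
```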


David Cox, IBM director of the MIT-IBM Watson AI Lab, which was involved with the work, says understanding causality is fundamentally important for AI. "We as humans have the ability to reason about cause and effect, and we need to have AI systems that can do the same."

A lack of causal understanding can have real consequences, too. Industrial robots can increasingly sense nearby objects in order to grasp or move them. But they don't know that hitting something will cause it to fall over or break unless they've been specifically programmed, and it's impossible to predict every possible scenario.

If a robot could reason causally, however, it might be able to avoid problems it hasn't been programmed to understand. The same is true for a self-driving car. It could instinctively know that if a truck were to swerve and hit a barrier, its load could spill onto the road.

Causal reasoning would be useful for just about any AI system. Systems trained on medical information rather than 3-D scenes need to understand the cause of disease and the likely result of possible interventions. Causal reasoning is of growing interest to many prominent figures in AI. "All of this is driving towards AI systems that can not only learn but also reason," Cox says.

The test devised by Tenenbaum is important, says Kun Zhang, an assistant professor who works on causal inference and machine learning at Carnegie Mellon University, because it provides a good way to measure causal understanding, albeit in a very limited setting. "The development of more-general-purpose AI systems will greatly benefit from methods for causal inference and representation learning," he says.

Excerpt from:
If AI's So Smart, Why Can't It Grasp Cause and Effect? - WIRED