Netherlands investigates innovative privacy technology SSI – ComputerWeekly.com

Dutch research organisation TNO is investigating concrete applications of self-sovereign identity (SSI) technology to make citizens' lives easier, and enable organisations to make considerable savings in administrative processes.

SSI offers new ways for citizens to manage their privacy, eliminates the need to log in with passwords, and speeds up transactions over the internet and in real life.

"We are investigating how SSI can be made suitable for applications," said Rieks Joosten, senior scientist in business information processes and information security at TNO. "Perhaps the most important application is the electronic filling of administrative forms. If you want to apply for a mortgage, you need to gather all sorts of information to submit to the lender. Not only do you often have to fill in the same data repeatedly, you also need authorised documents, from your employer and the bank, for example."

Midway through last year, the Netherlands' national ombudsman published a report, Keep it simple, which looked at the red tape that citizens face when doing business with government departments and businesses. The report showed that such processes are often time-consuming and frustrating for citizens.

"But it is also costly for the parties who have to validate these forms," said Joosten. "We estimate that Dutch organisations spend more than €1bn a year on validation."

Using SSI, this can be done more efficiently and effectively in the future. Behind it lie cryptographic technologies, for instance public-key cryptography, zero-knowledge proofs and often blockchain. These technologies give the user control over which personal data is shared with whom, while the recipient can quickly verify this data electronically.
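The verification step described here rests on digital signatures: the issuing organisation signs the data, and the recipient checks the signature against the issuer's public key. The sketch below is a rough illustration only, using deliberately tiny textbook-RSA numbers rather than production cryptography; real SSI systems rely on vetted libraries and standards such as W3C Verifiable Credentials.

```python
# Toy illustration (NOT production crypto) of checking provenance and
# integrity of a credential with a public-key signature.
import hashlib

# Textbook RSA with tiny primes -- for illustration only.
p, q = 61, 53
n = p * q                          # public modulus
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # issuer's private exponent

def sign(message: bytes) -> int:
    """Issuer signs a short digest of the credential."""
    digest = int.from_bytes(hashlib.sha256(message).digest()[:1], "big")
    return pow(digest, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Verifier recomputes the digest and checks it against the signature."""
    digest = int.from_bytes(hashlib.sha256(message).digest()[:1], "big")
    return pow(signature, e, n) == digest

credential = b"employer=Acme;salary_band=3"
sig = sign(credential)
assert verify(credential, sig)                 # authentic and unmodified
assert not verify(credential, (sig + 1) % n)   # a forged signature fails
```

Because only the issuer holds the private exponent, a valid signature establishes where the data came from, and any change to the signed bytes makes verification fail.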

"This enables secure and efficient exchange of digital information," said Joosten. "Parties can now get quality data that provably originates from organisations that they trust, and hasn't been changed in transit."

SSI can help companies to comply with European privacy legislation and save considerable costs on administrative processes. For citizens, the system saves a huge amount of time and frustration, and can prevent people from giving up in a complex administrative process and therefore not getting what they are entitled to. Also, they no longer have to log in with usernames and passwords.

Joosten added: "You fill in a form because you want to get something, say a parking permit or a mortgage. This form is designed so that the provider can get answers to three questions. One, what do I get from you and what do you get from me? Two, do I value what I get more highly than what I give? And three, is the risk I'm taking with this transaction acceptable to me?"

"This allows the provider to decide whether or not to provide what is requested," he said. SSI adds the ability to electronically annotate the form, allowing the provider to specify which organisations it trusts to provide what data.

The user's SSI app can read these annotations and, after obtaining the user's consent, gets that data from the user's digital wallet and sends it to the provider's web server, including electronically verifiable proofs of provenance and integrity. "So the provider obtains quality data from a source that he or she trusts," said Joosten.
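A minimal sketch of that annotate-and-fill flow follows, with entirely hypothetical field names, issuers, and data structures; real implementations exchange standardised verifiable credentials rather than plain dictionaries.

```python
# Hypothetical sketch: the provider annotates each form field with the
# issuers it trusts, and the user's SSI app fills the form from wallet
# credentials that match those annotations.

# Provider's annotated form: field -> issuers trusted for that field.
form_annotations = {
    "monthly_income": {"acme-corp", "tax-office"},
    "account_balance": {"example-bank"},
}

# User's wallet: attested attributes, each with its issuing organisation.
wallet = [
    {"attr": "monthly_income", "issuer": "acme-corp", "value": 3200},
    {"attr": "account_balance", "issuer": "example-bank", "value": 15000},
    {"attr": "monthly_income", "issuer": "self-declared", "value": 9999},
]

def fill_form(annotations, wallet, consent_given=True):
    """Pick, for each field, a credential from an issuer the provider trusts."""
    if not consent_given:
        return {}
    filled = {}
    for field, trusted_issuers in annotations.items():
        for cred in wallet:
            if cred["attr"] == field and cred["issuer"] in trusted_issuers:
                filled[field] = cred  # sent along with a cryptographic proof
                break
    return filled

submission = fill_form(form_annotations, wallet)
```

Note how the self-declared income credential is never selected: the provider only receives data whose issuer it has declared it trusts, which is the point of the annotation step.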

"Several local solutions already exist that do this," he added. "In the Netherlands, we have IRMA; in Belgium, Itsme; and similar initiatives exist in other countries. They support local SSI markets, have their own infrastructure, their own governance and their own forms of credentials."

"It resembles how data networks worked in the early days of the internet. We had local area networks [LANs], each using its own protocol. With the advent of IPv4, it became possible to send data across different LANs, all over the world. We are looking for an SSI network infrastructure that is not owned by a single party, and does for local solutions what IPv4 did for LANs."

Although the Netherlands, Germany and Belgium are leading the way in Europe with the development of SSI research and applications, Joosten sees the necessity of collaboration. "Individual parties, large and small, need to contribute to the bigger picture," he said. "We not only need technicians, but also visionaries and people with political and business knowledge."

"Some of them will contribute to the horizontal SSI infrastructure, others to vertical SSI markets, and still others to make it all work together, so that SSI can grow organically. We work with lots of parties in communities such as the Dutch Blockchain Coalition or Techruption, consortia such as uNLock, programmes such as EBSI/ESSIF, in events such as Odyssey.org or Rebooting Web of Trust, and others."

Within its SSI Lab in Groningen, TNO works on components that could become part of the SSI infrastructure, integrating where possible with components that others are developing. Also, applications are being developed to support SSI marketplaces and for demonstration purposes.

"The SSI Lab is not just for TNO," said Joosten. "It provides a safe environment for other organisations to experiment with several technologies, allowing them to experience the state of the art and build a business case for themselves. Also, the SSI Lab develops mental models and other stories for the purpose of aligning the currently different and non-interoperable ways in which people think about SSI."

However, many technological and organisational challenges remain to be resolved before citizens, businesses and public authorities can benefit from SSI. "We need to understand exactly how different individuals and organisations will use the same technology, and what needs have to be catered for," said Joosten. "We must provide assurances regarding the security and integrity of the various user and business apps that can be verified at the business level. To find answers, we work together with many other parties."

The eSSIF-Lab was launched in November last year, with European Union funding available for small enterprises and startups that want to build or improve SSI components. "The aim is to create multiple open source interoperable SSI components that are actually used," said Joosten. "In fact, the SSI Lab is entering Europe in this way."


Exclusive: Parler Rejects ‘Hate Speech’ Bans, Will Fix ‘Awkward’ ‘Fighting Words’ Rule – CNSNews.com

A woman opens the Parler app on her phone. (Photo credit: OLIVIER DOULIERY/AFP via Getty Images)

John Matze, CEO of social media company Parler, committed not to ban users for "hate speech," stated that his company would fix an "awkward" "fighting words" clause in its community guidelines, and called the decision by Big Tech companies to censor the America's Frontline Doctors video "ridiculous," in an exclusive interview with CNSNews.com.

"We refuse to ban people on something so arbitrary that it can't be defined," Matze said when asked whether Parler has banned or ever will ban users for "hate speech." "You see these sites trying to enforce these arbitrary rules and you notice that people are getting kicked off for the most random and arbitrary things like misgendering people. It's absurd. So no, we won't be pursuing that policy."

The Parler CEO also commented on the subjective nature of the "Fighting Words or Threats to Harm" portion of the company's community guidelines, which, as of press time, gives as an example "any direct and very personal insult with the intention of stirring and upsetting the recipient, i.e., words that would lead to violence if you were to speak in that fashion in person."

"We just hired a chief policy officer who's a real lawyer," Matze said. "She's actually overhauling that specific clause that you brought up because she said it's a really awkward clause to have online. ... Our goal here is to maximize free speech, maximize online discussion, while maintaining an actual community feel."

Finally, the head of the Twitter alternative addressed the censorship by Big Tech companies of a video by America's Frontline Doctors in which one doctor posited hydroxychloroquine as a cure for the coronavirus.

"We allow [the video] freely," Matze said. "This person's a doctor, they're making a statement, they're liable for the statement. They could get sued for malpractice, they could lose their job, but they want to say it anyway. That's their right."

"When you see these social media platforms cracking down, it just makes these people feel more disenfranchised. They feel like they have no freedoms, they can't talk about this. They're not even in control of their own health. And that's wrong."

The Parler CEO discussed with CNSNews a variety of other topics, including the platform's content moderation system, its recent growth from 1 million to 3.3 million users, its plan to implement a "groups" feature, the dropping of an indemnification clause in its user agreement, and the company's plans to combat other kinds of tech censorship.

Below is a transcript of the interview:

Rob Shimshock: Hello there, I'm Rob Shimshock, commentary editor for CNSNews.com, and today I'm joined by John Matze, CEO of the up-and-coming social media company, Parler. Thanks so much for coming on, John.

John Matze: Thank you.

Shimshock: Now, your company Parler has positioned itself as an alternative to Twitter by striving to embrace the culture of free speech that Twitter has left by the wayside, if not actively smothered. Is that a fair characterization?

Matze: Yeah, that's accurate. Basically, a lot of people have come over because there seems to be a lot of ambiguity with their terms of service, to say it lightly. And so, what we've done is we've created a platform where people are not judged by us. They are judged by a jury of their peers and our rules are transparent. They are involved -- you know, they basically are free speech-oriented. Anything that you can say on the street in New York, you can say on Parler, and the goal is to create conversation, not to dismantle conversation, to allow debate, conversation in general. And we're seeing that people love that concept. It's kind of old-fashioned, but it seems to be very popular.

Shimshock: Great, well I have a couple of questions about the actual terms of service and policies. But first, I'd like to know, we've seen Parler's user base explode recently, with site users soaring from one to 1.5 million. The platform does seem to have attracted more right-wing than left-wing folks. And I saw that Parler is offering $20,000 to a high-profile liberal pundit who joins the platform. But speaking more broadly, how will Parler ensure it becomes a true Twitter alternative, that is, a facilitator of debate from perspectives across the political spectrum instead of a conservative echo chamber?

Matze: Well, you've hit a few points. So, the numbers are looking really good. We've actually passed 3.3 million total users now, at this point. And so in less than a month, we've added 2.3 million people. Fun fact: in the last 24 hours, 50 percent of them have been from Brazil, actually. A lot of people in Brazil are being censored by their Supreme Court there, who's actually ordering journalists to be taken offline by Big Tech companies in the United States and they're complying. So it is crazy. And so to your other points that you had made: you had mentioned that we had offered a bounty for liberal journalists to come on. We did. We didn't have any takers. And it wasn't just liberals. We were specifically asking for progressives, so very self-described progressives. They didn't really take us up on the offer. We've kind of dropped it lately because we didn't have anybody coming in. We would have really liked it, though, had they gone for it. But what we have seen is a lot of people on the left, a small portion, right, about 10 percent of our audience is left-leaning, but they are coming in and you're seeing some debate and they're upset because the left-leaning individuals who are coming in are not... You know, they're a little bit uncouth sometimes and they like to be, they like to joke still. And they're actually being taken down off of Twitter as well, because they're joking around or saying things that are not politically correct and that seems to make Twitter angry, and Facebook, and these other companies. And to your point, how do you make it closer to being more Twitter-esque: we don't want to be Twitter-esque, right? We want to beat Twitter because they haven't innovated. They haven't monetized. Jack Dorsey just recently announced that they're going to be trying to go for a subscription-based model because they can't seem to be making enough money off their ads.
So we have an opportunity to not just build compatible features, but really take on the space of social media as a whole because, you know, people want to be able to reach out; they don't want censorship, but they also want neat tools that Twitter has never been able to provide like groups, like having, you know, basically having cordial conversations you can moderate on your own instead of just what I would call a social dumpster fire. So, you know, really, people need to have a better set of tools to moderate their own experience and not leave it to the platform. So there's a lot of things that we can do. I hope that answered all your questions.

Shimshock: Yeah, now one thing I've seen recently that's caught my attention are the boycotts of Facebook by major companies that take issue with supposed hate speech pervading the platform. And it's unclear how damaging this has been so far, but does Parler foresee its commitment to free speech conflicting with its attempts to fundraise? And if so, how do you plan to overcome that?

Matze: So, yeah, a few things. One is there's the Anti-Defamation League study that came out that said Twitter and Facebook are the two most hateful places on the Internet, and Twitter, by a long shot, is not even the number two social media platform online, which is shocking that they were rated so poorly. And so to counter that, that same list listed competitors of Parler that were far fewer in number and we were actually far better ranked. We actually weren't ranked at all as being a hateful place. And a lot of that comes down to spam and not having duplicate accounts. We enforce very strictly that you can have one account and that's it. That's your one account. And as a result, you don't see people coming in with 20 accounts, just attacking people like you do on Twitter. I don't know if you've ever been on Congressman Nunes' page, but if you've ever been on his page, it's just nasty, nasty stuff. The same with President Trump, too, it's just nasty comments. You don't want to be in a place like that; nobody does. And Facebook has this boycott going on right now. Now the boycott -- the corporate boycott -- amounts to something like $50 million a quarter in ad revenue, which to you and I may sound like a lot, but it's actually not. Proportionally, it's an extremely minor percent of their income. The boycott is not substantial at all. And part of me thinks that, you know, we don't know if this boycott is really just a virtue-signaling technique because these companies are having to cut ad revenue, like a lot of companies are doing right now, because of the pandemic and how it's affected their economics. They could be boycotting it because it's nice virtue-signaling and free advertising for them because they can't actually afford the ad slots. And no one's really talking about that point either. So there's a few different possibilities.
For us, we're actually doing really well on the monetization front because we allow political ads during an election year, which Twitter doesn't allow, which is why you're seeing, you know, Parler's actually becoming profitable, even in its infancy, which is unheard of for social media. Whereas these other sites who are, you know, not allowing political ads in an election year are suffering. So we're making the right decisions, and we're doing the right thing for the community and we believe in the American people and we believe in people's rights to discuss things on their own, and it seems to be paying off really well.

Shimshock: Gotcha. Now turning to the topic of censorship, I have a rather simple question for you. Has Parler ever banned, and will Parler ever ban, anyone for hate speech?

Matze: There is no definition of hate speech legally; there never has been. They've attempted to define it and never will. And therefore we cannot. We refuse to ban people on something so arbitrary that it can't be defined. Now, and the reason that I say that is nobody wants hateful content, right? Nobody wants nasty things at them, but everyone's definition of hate is different. You and I having a simple disagreement could be me viewing this disagreement as a debate or as hateful, whereas you may view it as normal. You may state a fact, saying, hey, I view this to be true. And someone may say that's hateful. So how do you define the undefinable? You can't. The government has tried; they couldn't. The only countries that have, have had very arbitrary rules that are rather weak and hard to enforce. And so you see these sites trying to enforce these arbitrary rules and you notice that people are getting kicked off for the most random and arbitrary things like misgendering people. It's absurd. So no, we won't be pursuing that policy.

Shimshock: So I went through Parler's community guidelines and I did note one section called "fighting words or threats to harm," defining the concept of fighting words as use of incitements to violence that produce a clear and present danger or a personal assault with the intention of inviting the other party to fisticuffs. But then, as an example, Parler gives "any direct and very personal insult with the intention of stirring and upsetting the recipient, i.e., words that would lead to violence if you were to speak in that fashion in person." Now, of course, there is some subjective language in here -- insult, for instance. Then, how do you determine someone's intention? And of course, people have different tolerances, as you mentioned, for language they perceive as hostile. So John, how does Parler hope to maintain a fair and balanced enforcement of these guidelines?

Matze: So to the point that you just brought up: this is excellent, thank you. So, one is we just hired a chief policy officer who's a real lawyer and not me writing the community guidelines, and she's actually overhauling that specific clause that you brought up because she said it's a really awkward clause to have online. Second, we're currently enforcing the clause through a community jury system. That means we have a quorum of five community jurors, juries of your peers, not Parler as a company, and they judge it independently of each other. They don't know what the others have said. They independently judge the situation and then they make a determination. Now, we've said that our community guidelines are a bit of a work in progress, because we're trying to make it fair for everybody. Our goal here is to maximize free speech, maximize online discussion, while maintaining an actual community feel. So, the goal is to allow people to say what they'd like, but also we don't want people breaking the law. We don't want people to get attacked, right? We don't want people threatening violence. There was a whole group of people photoshopping me getting shot through the head. So stuff like that, obviously, is not allowed.

You know, there's lines that we're trying to draw, but we also want people to have conversation. And naturally, as you know, online arguments typically get people angry and using what would be described as fighting words on the street, but not online. And so we're trying to clarify that to make sure that people don't end up in some kind of cyber jail over, you know, an online debate or dispute got heated, if that makes sense.

Shimshock: All right. Yeah. Would one other possible avenue perhaps be ideological diversity in hiring? And so like we've seen, I think, with a couple of these companies, a lack of that seems to be behind some of the Big Tech censorship. I doubt, for instance, Twitter has even one content moderator that voted for Trump.

Matze: We don't have mods. Like I said, we've got a community jury. We don't -- they're not employees. They're not hired. They're volunteers and they are members of the community. We picked them because they were able to pass a community guidelines test of previous rulings, kind of like a historical Supreme Court ruling test, but of Parler violations. And they were able to do really well. I didn't even get a hundred percent on it and I wrote the rules originally. So it's a very, very comprehensive test. We weeded out anybody who was ideologically far-right or far-left; we pick moderates. And we constantly moderate the moderators -- or the juries -- to make sure that they don't have anybody -- like, for example, most moderators say 80 percent of violations are not violations, right? And so if we notice somebody says 95 percent are, or we notice somebody says, you know, 60 percent are, or we notice that they don't line up with the other juries, then we kick them from the pool because we want to make sure that people are actually doing a good job and being legitimate with their moderation.
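The calibration Matze describes -- comparing each juror's rulings against the pool and dropping outliers -- could be sketched roughly as follows. The names, sample data, and tolerance threshold are illustrative assumptions, not Parler's actual system or code.

```python
# Hypothetical sketch of juror calibration: flag jurors whose rate of
# "violation" rulings strays too far from the pool's typical rate.
from statistics import mean, median

# 1 = juror ruled "violation", 0 = ruled "not a violation" (sample data).
juror_rulings = {
    "juror_a": [0, 0, 0, 0, 1, 0, 0, 0, 0, 0],
    "juror_b": [0, 0, 1, 0, 1, 0, 0, 0, 1, 0],
    "juror_c": [1, 1, 1, 1, 1, 1, 1, 0, 1, 1],  # far stricter than the pool
}

def out_of_line(rulings, tolerance=0.25):
    """Return jurors whose violation rate diverges from the pool's median."""
    rates = {juror: mean(r) for juror, r in rulings.items()}
    pool_rate = median(rates.values())  # robust to the outliers themselves
    return {j for j, rate in rates.items() if abs(rate - pool_rate) > tolerance}

flagged = out_of_line(juror_rulings)  # only juror_c diverges here
```

Using the median rather than the mean as the pool baseline keeps a single extreme juror from dragging the benchmark toward themselves, which matters with small juror pools.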

Shimshock: Great, now one other language-specific question, and this one about Parler's user agreement. I noticed last month that the company had a provision, number 14, that tasked users with defending and indemnifying Parler, including paying for legal expenses pertaining to their use of the platform. Now that clause, along with one preventing users from taking part in a class action lawsuit against Parler, appeared to have been removed from the agreement. Can you tell us why that is?

Matze: Yeah, we had, like I said, we hired a new chief policy officer and our goal with hiring her -- she's awesome, by the way -- is to take things that were basically templates, because the original community guidelines was a template that we got from our lawyers. They had put this together. They said it's very standard for social media. The indemnification clause really doesn't look very nice. It wasn't that bad of a clause but we said you know what, why don't we do something else? Let's take a look at Twitter's rules. Let's take a look at Facebook's and let's make sure that whatever we have for our rules, we are less strict and we give the user more rights than they do. And so we've actually updated those rules to do that. We've also tried to clarify a lot of the legalese to be more legible, because these things are nearly illegible, if you've ever looked at this stuff. It's a mess. And I'm kind of used to this kind of documentation, and even I'm bored to tears looking at it. So we tried to upgrade it, so it's a little bit more legible too. And so you'll find if you look at our community guidelines and terms of service, if you look at Twitter, if you look at Facebook, if you look at any of the tech tyrants, our rules are more in the favor of users than theirs. Actually, should be all of them. If they're not, bring it to our attention; we'll make sure that it is more in a user's favor than those sites have.

Shimshock: Great, now recently Twitter took down and even penalized the president's son for sharing a video pertaining to hydroxychloroquine. And this was a video in which a doctor posited that drug as a cure for the coronavirus. Now how has Parler handled that video on its platform?

Matze: We allow it freely. We have, we have a lot of people discussing this topic, including my father, who I adamantly disagree with, who probably would have gotten kicked off of Twitter for his views on -- but, we allow people to talk about it, right? And so I was actually on CNBC's morning show having a debate with them about whether or not we should do that. And I just adamantly said, look it, this person's a doctor, they're making a statement, they're liable for the statement. They could get sued for malpractice, they could lose their job, but they want to say it anyway. That's their right. That's their risk. They're taking it. If they give bad advice, they're going to get sued. Furthermore, if they give bad advice, and they prescribe something to somebody, that's even more of an issue, but this drug is not available over the counter, they have to get prescribed this drug. So a fair debate with the general public -- even if somebody were to get misled, like a lot of these social platforms are contesting -- social publications, I like to call them -- even if they got misled, they still have to go talk to a doctor and get recommended to take the drug and actually get approval to do it. So this is a ridiculous concept that we're censoring this topic. It's completely politically motivated because I think that there's a lot of people on the right who view this as positive hope that there is a solution out there and they want to see positive hope. And they want to see positivity in a time where there's so much negativity in general. And so when you see these social media platforms cracking down, it just makes these people feel more disenfranchised. They feel like they have no freedoms, they can't talk about this. They're not even in control of their own health. And that's wrong. They should be able to talk to people about this. And I feel very passionate, you know, about that, even if I might disagree with somebody on the topic. It's their right to have that debate.
It has nothing to do with me; it's their personal health. And as a company, that's what we stand for and believe in, which is people's rights to make these decisions on their own.

Shimshock: Great to hear. Now when discussing tech censorship, people typically only address practices employed by Big Tech giants themselves, such as suspension and shadow banning. And when I say Big Tech giants, I mean the social media companies. But that's only really one segment of the conversation. Over the past few years, we've seen numerous other censorship weapons in action, such as app stores banning apps with problematic points of view, domain registrars revoking website licenses, and payment processors nixing user accounts. I know for instance that Gab, another Twitter alternative, has experienced a couple of these problems and those doctors I mentioned earlier had their website taken down, as well. Now can you walk me through how Parler is prepared to combat these issues?

Matze: Sure, and you'd see at the congressional hearings, you saw that people were pointing out to companies like Apple, because Apple's Tim Cook was there, saying look it, you're giving preferential treatment to some apps. And he claimed that they treat all apps equally, which is obviously, in my opinion, not true. Apple's App Store clause 1.1.1 says any apps that might contain objectionable or harmful or hateful content -- by no legal definition, but an arbitrary definition -- are not allowed on the App Store. Now that is impossible to maintain on a social media. Impossible. Twitter violates that all the time. And yet Twitter, for example, is Editor's Choice. They are given a special status and treatment by Apple, which actually disproves his claim that they treat all apps equally. Meanwhile, as you had mentioned, that company was banned from the App Store, along with many other companies that have been banned from the App Store, including other apps that I have made have been banned from the App Store on purely ideological reasons. And Parler has so far kind of reached the threshold where I think we're too big to take off the store, at least right now. We have and we're working with them as much as we can to make sure that we don't run into any problems. They have said that we're okay, as long as we continue to moderate our rules that we've set up that are clear, and everybody can read and we don't publish content, we're fine. And so as long as we're not publishing content, which we're not, we don't curate it -- it's very chronological; there's no algorithms -- we're fine. But if you take a look at other apps, like Facebook, take a look at Twitter, and you look at their rules, they're not allowed to have, according to 1.1.1, any hateful or obscene or awful content. Yet Twitter has hundreds of thousands of tweets about hashtag kill all certain groups of people, and that's allowed.
So it's really a double standard, and I think it was a little bit misleading, the statements that were made at the congressional hearing, because there is a bias. But the real question is not are they misleading us about their bias? Because we know that they are. The question is: is it their right to have this bias as a private company? And should we do something about it? Personally, I think it's their right to do it. They built these companies; if they want to be biased, they can. I just don't think it's their right to lie or mislead people about their bias. And I also don't think, I don't think it's really morally acceptable to do it, though. I think it's wrong. So that's where I stand on the hearing.

Shimshock: All right, now turning to the larger political scene, I want to ask, we saw the hearing Wednesday with Big Tech executives, like you mentioned, and we have a very short window from now to November, but in your opinion, what can lawmakers do to combat Big Tech election interference?

Matze: They can keep raising awareness and marketing about it. But I don't think that they can do much of anything in that period of time, at all. The only thing they can do is promote competition, which is effectively working the best out of any of it. We have politicians raising more money on Parler than they are on Twitter with the same audiences, with the same numbers even. So you're seeing better conversions on a platform like Parler; you're seeing people come over and in large waves. They're getting better traction, they're getting more reach. Articles are being clicked on and read, which is unheard of right now on these other platforms. So the best thing they can do is promote competition. And I Parlered about that last night saying thank you to all the lawmakers, to all the congressmen and congresswomen, the senators that are on Parler because by supporting a competitive platform, they are effectively making the biggest impact they can, you know, on promoting competition and solving this problem.

Shimshock: All right. And lastly, what's next for Parler in the coming months and then going into 2021?

Matze: Next, we want groups. That is our big thing that we want to do. We want groups. Now, a timeline, I can't guarantee anything. But we would love to replace our Discover page with groups. And we're working really hard on doing that because people need a place to have conversations with one another to organize events and these keep getting shut down elsewhere. We need to have that.

Shimshock: Great. Well, thanks so much for your time, John, and best of luck with Parler.

Matze: Thank you. Take care.

Rob Shimshock is the commentary editor at CNSNews.com. He has covered education, culture, media, technology, and politics for a variety of national outlets, hosted the Campus Unmasked YouTube show, and was named to The Washington Examiner's "30 Under 30" list. Shimshock graduated from the University of Virginia with a Bachelor of Arts in English and Media Studies.


Artificial Intelligence in Business: The New Normal in Testing Times – Analytics Insight

The COVID-19 situation has put industry in an unprecedented position. Businesses across the globe are now drawing up new strategies to keep operations going and meet clients' demands.

Work-from-home is the new normal for both employees and employers. Twitter has even told its employees that they may work from home forever if they wish. This arrangement may prove effective for managing operations for a while, but it cannot be counted on as a long-term solution for satisfying customers and clients.

Companies need to employ ethically approved ideas and strategies that reassure employees, clients, and customers without breaching data privacy.

In the present situation, where social distancing is a must, classroom training is no longer a plausible way to train employees. That's where Virtual Reality comes into play.

Virtual Reality (VR), once dismissed as a technology for gaming, now has the potential to become the face of the industrial enterprise. A report by PwC states that VR and Augmented Reality could add US$1.5 trillion to the global economy by 2030. Another PwC report states that VR can train employees four times faster than classroom training. Individuals trained through VR are 2.5 times more confident than those trained through classroom programs or e-courses, and 2.3 times more emotionally connected to the content they are working on. Employees trained using VR are also 1.5 times more focused than those in classroom programs and e-courses.

The only drawback of VR training is its cost: it is 47 percent more expensive than classroom courses.

Ever since AI's emergence, one of the major concerns among clients, customers, and employees has been the breach of ethical AI practices. A report by Capgemini Research Institute states that 62% of customers surveyed would place more trust in an organization that practices AI ethically.

For any organization to keep its business and employees safe in a time of crisis, developing ethically viable AI is a must. This can only be achieved by practicing the ethical use of AI applications and by informing and educating customers about those practices.

A report by PwC states that planning out a new strategy for both data and technology, evaluating the ethical flaws in existing data, and collecting only the required amount of data would help maintain trust among both customers and employees.

Given the present situation, sales executives face the daunting task of maintaining their operations. However, AI can streamline this time-consuming and laborious work. With an AI algorithm, a sales executive or manager can identify which services a client is most likely to be interested in. The algorithm can also help in offering a new product tailored to the client's preferences.

In a time of crisis, new solutions must be found for repurposing business. PwC states that this can be achieved by repurposing business assets, forming new business partnerships, rapid innovation, and testing and learning.

This will not only help build trust among employees but also build resilience within the organization for future endeavors.

See more here:
Artificial Intelligence in Business: The New Normal in Testing Times - Analytics Insight

Turkey takes German’s hate speech law, and makes it much worse with its own censorship and data localization rules – Privacy News Online

Last month we wrote about France's hate speech law, and noted that it followed in the footsteps of the earlier German law known as NetzDG (short for Netzwerkdurchsetzungsgesetz, or network enforcement law). NetzDG was bad news not just for German freedom of speech, but for human rights around the world, because of its knock-on effects. Once Germany had set a precedent for censoring the Internet, it was much easier for other countries to do the same. When people complained, governments could say that if it was acceptable for a liberal democracy like Germany, it was good enough for them. A report from Justitia, a think tank in Denmark, shows just how pernicious the influence of the NetzDG has been:

at least 13 countries have adopted or proposed models similar to the NetzDG matrix. According to Freedom House's Freedom on the Net (2019), five of those countries are ranked not free (Honduras, Venezuela, Vietnam, Russia and Belarus), five are ranked partly free (Kenya, India, Singapore, Malaysia and Philippines), and only three ranked free (France, UK and Australia). Most of these countries have explicitly referred to the NetzDG as a justification for restricting online speech. Moreover, several of these countries, including Venezuela, Vietnam, India, Russia, Malaysia, and Kenya, require intermediaries to remove vague categories of content that include fake news, defamation of religions, anti-government propaganda and hate speech that can be abused to target political dissent.

One more can now be added to the list. Turkey has just passed what the Electronic Frontier Foundation calls the worst version of Germany's NetzDG yet. Although it's unfortunate that a regional leader like Turkey has brought in this law, it's hardly a surprise. Turkey has a terrible record on freedom of speech: it is ranked 154th out of 180 countries in the RSF 2020 World Press Freedom Index. In 2018, its courts blocked access to around 3,000 articles, including those on political corruption and human rights violations. Turkey has a track record of repeatedly blocking online companies like Facebook, YouTube and Twitter. Its government also brought in a VPN ban, and blocked the whole of Wikipedia.

One reason for these continuing attacks on freedom of speech is that Turkey's President, Recep Tayyip Erdogan, is notoriously thin-skinned. For example, a Turkish citizen who simply shared a meme comparing Erdogan's facial expressions with Gollum from Lord of the Rings was not only hit with a suspended sentence, but lost custody of his children. The new censorship law also seems to have been brought in partly for personal reasons, as Al Jazeera reports:

President Recep Tayyip Erdogan, who has greatly concentrated powers into his own hands during 17 years in office, pledged this month to bring social media platforms under control following a series of tweets that allegedly insulted his daughter and son-in-law after they announced the birth of their fourth child on Twitter. At least 11 people were detained for questioning over the tweets.

The new law was passed extremely quickly: barely a month passed from its announcement to its approval.

The EFF has provided a good summary of its main features. They include requiring social media platforms that have more than two million daily users to appoint a local representative in Turkey. This is similar to the approach taken by Brazil in its new fake news law, discussed by Privacy News Online a few weeks ago. The penalties for failing to do so can be steep: they include advertisement bans, heavy financial penalties, and bandwidth reductions. The legislation allows Turkish courts to order Internet providers to throttle social media platforms' bandwidth by up to 90%, in effect blocking access to those sites. Once local representatives are in place, they are responsible for blocking or taking down content when ordered to do so by the Turkish government.

Social media companies will also be required to remove content that allegedly violates personal rights and the privacy of personal life within 48 hours of receiving a court order, or face steep fines. Measures to protect privacy are generally to be welcomed; however, these sound dangerously vague. It's easy to imagine them being abused by the rich and powerful who want true but embarrassing material removed. Another requirement is for social media platforms to store user data locally. It is likely that the Turkish authorities will use this to demand details about people posting items that displease Erdogan, for example. To avoid that risk, many Turkish social media users will probably prefer to engage in self-censorship, which is doubtless the outcome the authorities want here.

Freedom of speech in Turkey has been under attack for years, and the new law is likely to exacerbate the existing problems. Given Erdogan's grip on power, there's not much that can be done about that for the moment. The worry has to be that if these new measures choke off online dissent in Turkey, as seems likely, it will encourage other repressive governments to adopt a similar approach elsewhere.

Featured image by Mstyslav Chernov.

Originally posted here:

Turkey takes German's hate speech law, and makes it much worse with its own censorship and data localization rules - Privacy News Online

Indian Government Actively Working Toward New Crypto Ban – Cointelegraph

An Indian government official has claimed that two ministries and the Reserve Bank of India are actively working on a legal framework to ban cryptocurrencies on the subcontinent.

According to an Aug. 4 report from Indian news website Moneycontrol, authorities in India are making preparations to pass a law banning cryptocurrency trading. The site quoted an anonymous official as saying that consultations between the Ministry of Electronics and Information Technology, the Ministry of Law and Justice, and the Reserve Bank of India had begun regarding the framework of such a law.

Once Parliament resumes for the session, we are hoping to get [the law] ratified, the official said. Parliament is expected to reconvene in late August or early September.

The official stated that the government was considering banning crypto through legislative change, rather than through measures such as the RBI's earlier blanket ban on banks dealing with crypto firms, because a law would be more binding. It will clearly define the illegality of the trade, the person said.

In March, the Supreme Court of India struck down the blanket ban on banks dealing with crypto businesses that the RBI had imposed in July 2018. The repeal led to a boom in new exchanges across the country.

However, government officials have been floating the idea of enacting a new law banning cryptocurrencies in India in place of the RBI ban.

Ashish Singhal, founder and CEO of Indian cryptocurrency exchange CoinSwitch, said that a blanket ban on digital currencies was more likely in 2019 than this year. He said there has been a change in the way crypto is perceived across India, hopefully for the better.

Though many parts of India still face some restrictions on movement due to the pandemic since a lockdown was ordered in March, crypto exchanges in the country reported strong growth as some investors moved away from traditional assets.

Cointelegraph reported in May that India-based exchange CoinDCX had ten times the average number of users sign up in the week after the RBI ban was lifted, as well as 47% growth for Q1 2020. Trading platform WazirX also recorded month-on-month growth of over 80% in both March and April. Additionally, United States-based crypto exchange Coinbase entered the Indian market, offering crypto-to-crypto conversions and trading services from April onward.

Moneycontrol said that millions of dollars worth of business in cryptocurrency is being done every week, with the lockdown pushing up the volumes.

A growing number of investors have found refuge in virtual currencies as traditional assets have taken a beating over worries about the health of the economy battered by the coronavirus outbreak.

See the original post here:
Indian Government Actively Working Toward New Crypto Ban - Cointelegraph

Ripple Reveals New XRP Investment in Push for Mass Adoption of Cryptocurrency Banking Platform – The Daily Hodl

Ripple's investment arm Xpring is throwing three more years of support into the development studio XRPL Labs and its crypto banking platform Xumm in a push for mass adoption.

Warren Paul Anderson, Xpring's head of developer relations, says the company views the Xumm platform as one of the best representations of the XRP Ledger. The banking app allows users to hold and spend Ripple's native token XRP, with the goal of letting people be their own bank. In the long haul, XRPL Labs says, Xumm plans to give people a way to spend dollars, euros, and XRP without the assistance of a financial institution.

First unveiled in 2019, Xumm has been in public beta since March. XRPL Labs says Ripples continued support will help drive the widespread adoption of its XRP-focused platform.

With Xpring supporting XRPL Labs with an additional investment to support the next three years of growth and development of its Xumm App & Platform, XRPL Labs will be able to focus on their road map, working towards adoption, XRP ledger accessibility & building the bridge between consumers, businesses and developers.

The startup's new mid-term to long-term roadmap for Xumm includes adding fiat on- and off-ramps, as well as introducing a new amendment to the XRP Ledger called Hooks. The proposed change will offer new business logic functionalities such as automatic saving and tipping, as well as the blocking of transactions related to scam activities.

Featured Image: Shutterstock/Willehard Korander

See the article here:
Ripple Reveals New XRP Investment in Push for Mass Adoption of Cryptocurrency Banking Platform - The Daily Hodl

Esports Sportsbook Rivalry Announces Roll Out of Cryptocurrency Payments in Partnership with CoinCorner – European Gaming Industry News


Isle of Man licensed betting platform, Rivalry, is the latest esports focused sportsbook to adapt to changing player preferences by supporting payments through Bitcoin. COVID-19 has, in many cases, served as an accelerant to slow-burning changes in consumer behaviour and preferences. Most notably: the meteoric rise of esports betting. While many operators were left scrambling to adapt to the overnight demand, a select few, such as Rivalry, were perfectly positioned to capitalize on the pandemic pivot. Much like cryptocurrencies, esports betting is proving to be anything but a fad.

Forward-thinking operators like Rivalry believe esports provide a glimpse into the future of betting. Their average esports bettor is in their twenties: an early adopter and digital native who is no stranger to Bitcoin, but perhaps less so to traditional betting mechanics and terminology. This brings new opportunities for operators to reshape and reimagine the betting experience.

Rivalry CEO Steven Salz adds: Our integration with CoinCorner has allowed us to offer Bitcoin as a payment option to our players and further simplify the payment experience. It's partnerships like these that help us evolve the betting experience for a new cohort of bettors who think and behave differently.

UK Bitcoin exchange CoinCorner has seen business appetite shift to accommodate the recent rise in demand for online banking and payment methods, with an uptick in businesses keen to introduce Bitcoin payments as a way to gain competitive advantage by opening up new markets, paying lower fees and avoiding chargebacks.

Sam Tipper, Business Development Manager at CoinCorner, said: Since I joined CoinCorner in 2019, we've been making huge strides in assisting businesses across multiple industries to accept Bitcoin, preparing them for the new reality of a changing payment landscape. This is particularly true for businesses regulated out of the Isle of Man, like Rivalry, who cater to a global market.

While the gambling industry debates the staying power of esports as a profitable betting market, operators like Rivalry are adapting with ease and transforming the betting experience to meet the needs of an evolving user base.


Continued here:
Esports Sportsbook Rivalry Announces Roll Out of Cryptocurrency Payments in Partnership with CoinCorner - European Gaming Industry News

Bitcoin Reached Its Highest In Close To A Year Last Month – Forbes

Bitcoin prices climbed in July, reaching their highest in close to one year. When will the world's most prominent digital currency reach a fresh record high? (Photo by Chesnot/Getty Images)

Bitcoin prices enjoyed some notable gains last month, climbing nearly 27% in roughly two weeks and hitting their highest since August 2019.

The worlds most prominent digital currency climbed to $11,467.95 on July 31, according to CoinDesk price data.

At this point, it was trading at its loftiest value since August 12, 2019, additional CoinDesk figures reveal.

The cryptocurrency experienced a relative malaise for the first two weeks of July, moving within a reasonably tight range between $8,900 and $9,600, before climbing 26.7% between July 16 and July 31.

[Ed note: Investing in cryptocoins or tokens is highly speculative and the market is largely unregulated. Anyone considering it should be prepared to lose their entire investment.]

After experiencing this run up, the digital asset fell back slightly, finishing the month up 24.4%.

Because of these gains, bitcoin experienced its most impressive July in eight years, according to CoinDesk.

Several Bullish Factors Drive Gains

When explaining the upside that bitcoin experienced in July, analysts pointed to several variables.

Tim Enneking, managing director of Digital Capital Management, previously noted that the correlation between bitcoin and U.S. equities was absurdly high for several months, and it was impossible for the digital currency to surpass the $10,000 level without that price correlation declining significantly.

However, once the price relationship between bitcoin and U.S. equities broke down, it triggered a rotation from alts into BTC, he noted.

Add to that the dollar at record lows and gold at record highs, and BTC has a *lot* of healthy tailwinds right now, stated Enneking.

Ethereum's Contribution

Joe DiPasquale, CEO of cryptocurrency hedge fund manager BitBull Capital, also weighed in, emphasizing that while the aforementioned variables helped drive bitcoin's price movements during July, we can't ignore the role played by Ethereum in resurrecting market enthusiasm.

The DeFi space kicked off this change in sentiment with projects like Compound and Balancer doing well, both in terms of adoption and the market, he noted.

Following increasing demand, ETH started looking alive and posted double digit gains, outperforming Bitcoin and other alts roughly two weeks ago, said DiPasquale.

These developments also resulted in positive sentiment returning to the market and helping Bitcoin rally past $10k, he stated.

Central Bank Influence

While Jack Tao, CEO of cryptocurrency derivatives trading platform Phemex, concurred with Enneking's assessment, he also stressed the impact that central bank policy is having on the digital currency markets.

There's always a pool of investment money that moves around, navigating through different assets and markets seeking the best opportunities to make profits.

Any time one area doesnt do well or is projected to decline, the money simply shifts into a more promising sector, he noted.

With the high levels of quantitative easing in the U.S., concerns about inflation compel investors to find alternatives such as gold or opportunities within the crypto field, said Tao.

He added that a new technology, project, or announcement creates excitement and drives investors into crypto, exactly like Ethereum is doing now.

However, ultimately much of this money almost always eventually ends up finding its way back into BTC.

Bitcoin A Macro Asset, Says Analyst

Jeff Dorman, chief investment officer of asset manager Arca, offered a different perspective when explaining the digital currency's gains in July.

I don't think correlation or resistance had anything to do with Bitcoin's move higher, he stated.

Bitcoin is now much less of a digital asset, and more of a macro asset, now that institutions are pouring in (as evidenced by large volume growth at CME, Bakkt and Fidelity), said Dorman.

And as a macro asset, the breakout was inevitable, as the inflation protection/store of value thesis was working everywhere else gold hit all-time highs, silver hit 8-year highs, the 2s/30s Treasury curve began to steepen, and tech stocks continue to soar.

Bitcoin was simply playing catch-up.

Disclosure: I own some bitcoin, bitcoin cash, litecoin, ether and EOS.

The rest is here:
Bitcoin Reached Its Highest In Close To A Year Last Month - Forbes

Bitcoin Cash Difficulty Algorithm Debate Heats Up With Fears of Another Chain Split – Bitcoin News

With a touch more than three months left until the next Bitcoin Cash upgrade, crypto proponents have been witnessing a new quarrel arise after last year's contentious Infrastructure Funding Proposal (IFP). This time around, the tensions stem from the discussion about replacing the network's current Difficulty Adjustment Algorithm (DAA).

Every six months the BCH community plans for an upgrade and this coming November, a number of users are concerned about another chain split. There is a lot of infighting within the community at present and among BCH developers as well. The story allegedly derives from the DAA discussion, but there has been tension ever since the last quarrel over the IFP.

A Difficulty Adjustment Algorithm (DAA) is basically an algorithm that adjusts the mining difficulty parameter. Bitcoin (BTC) adjusts the mining difficulty parameter every 2016 blocks, but on August 1, 2017, Bitcoin Cash (BCH) added an Emergency Difficulty Adjustment (EDA) algorithm that ran alongside the DAA. In November 2017, the DAA was changed on the BCH chain to adjust the mining difficulty parameter after every block, leveraging a moving window of the last 144 blocks to calculate difficulty.
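The moving-window mechanics described above can be illustrated with a rough sketch. This is not the actual BCH consensus code (which operates on compact-encoded targets and applies additional safeguards); it only shows the core idea: scale the average target over the last 144 blocks by how long those blocks actually took versus how long they should have taken.

```python
# Illustrative simplification of a 144-block moving-window difficulty
# adjustment; not the actual BCH consensus implementation.

IDEAL_BLOCK_TIME = 600   # seconds: one block every 10 minutes on average
WINDOW = 144             # roughly one day's worth of blocks

def next_target(timestamps, targets):
    """Compute the next block's target from the recent window.

    timestamps: block timestamps in seconds, oldest first (WINDOW + 1 of them)
    targets: per-block targets over the window (WINDOW of them)
    A higher target means lower difficulty.
    """
    elapsed = timestamps[-1] - timestamps[0]   # actual timespan of the window
    expected = WINDOW * IDEAL_BLOCK_TIME       # ideal timespan
    avg_target = sum(targets) // len(targets)
    # Blocks arriving too fast (elapsed < expected) shrink the target,
    # raising difficulty; blocks arriving too slowly grow it.
    return avg_target * elapsed // expected
```

For example, if the last 144 blocks arrived exactly on the 10-minute schedule, the target is unchanged; if they arrived twice as fast, the target halves and difficulty doubles.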

For the last year and a half, people have been complaining about the DAA because many believe it can be gamed. The subject has come up often, and recently the conversation has become more contentious. Software developer Jonathan Toomim introduced a DAA concept called Aserti3-2d, and the specification is available on Gitlab. The BCHN full node team has the code hosted on the Bitcoin Cash upgrade specifications page.

On July 23, 2020, Bitcoin ABC developer Amaury Séchet announced the DAA called Grasberg via the Bitcoin ABC blog. Following the release, Toomim published an article on the read.cash blog arguing against Grasberg. The engineer also described how members of the development teams have been squabbling in various online discussions. Toomim asserts that Grasberg is a big step on the path to corruption and was not properly simulated.

On August 3, Bitcoin Cash developers met for a DAA meeting, and BCHD developer Chris Pacia tweeted that it did not go well. Bitcoin Cash developer meeting blew up with multiple people walking out, Pacia tweeted afterward. Following Pacia's statement, Ethereum's Vitalik Buterin discussed the subject at length with BCH supporters from both sides of the argument.

I don't understand why BCH people care so much about difficulty adjustment minutiae. I would say just use ethereum's but honestly your algo is fine as is, Buterin tweeted. I will be honest; being optimistic that BCH development would improve once they got Craig out definitely is looking like one of my worst predictions, the Ethereum developer added.

Discussions about the quarrels between developers who work on the Bitcoin ABC implementation and the BCHN full node project are littered all over the Reddit forum r/btc. Additionally, there are lots of discussions on the read.cash blog, and BCH fans are discussing the issue on Twitter as well. Most of the arguments pit the BCHN developers against the ABC developers, alongside the pros and cons of both Jonathan Toomim's Asert DAA and the Grasberg DAA.

On August 5, 2020, a consortium of node implementations, infrastructure providers, services, engineers, and stakeholders published a post on the read.cash blog which explained that a number of actors will deploy the aserti3-2d difficulty adjustment algorithm (Asert DAA). We will deploy the aserti3-2d difficulty adjustment algorithm (Asert DAA) on Bitcoin Cash (BCH) on November 15th, 2020, as designed by Mark Lundeberg and implemented by Jonathan Toomim alongside other accredited contributors of the ecosystem, the consortium wrote. The announcement added:

The Aserti3-2d DAA is simple to implement, well-tested, and extensively simulated. It incentivizes consistent mining, achieves stability for transaction confirmations with low-variance 10-minute block targets, and is resistant to future drift.

The consortium announcement was digitally signed by Andrea Suisani (Bitcoin Unlimited), Andrew Stone (BU), Axel Gembe (Electron Cash), BCHD, Bitcoin Cash Node (BCHN), Calin A. Culianu (Electron Cash), Cashaddress.org, Cashfusion, Cashshuffle, Corentin Mercier (bitcash), Dagur Valberg Johannsson (BCHN, BU), Electron Cash, Fernando Pelliccioni (Knuth node), Freetrader (BCHN), Imaginary_username, James Cramer (SLP), John Nieri (General Protocols), Jonathan Silverblood (CashAccounts), Jonathan Toomim, Josh Green (Bitcoin Verde), Mark B. Lundeberg, Pokkst (bitcoincashj), Rosco Kalis (Cashscript), Tom Zander (Flowee), and Oscar Salas of Instabitcoin.net.
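For readers curious what the Asert DAA actually computes, the core of the "absolutely scheduled exponentially rising targets" idea can be sketched in a few lines. This is an illustrative floating-point simplification, not the deployed aserti3-2d consensus code: the real specification uses fixed-point integer arithmetic with a cubic approximation of 2^x (the "i3") and anchors the schedule to a specific activation block. The constants below follow the published description (600-second target spacing, two-day halflife).

```python
# Illustrative floating-point sketch of the ASERT schedule behind the
# aserti3-2d DAA; not the deployed fixed-point consensus code.

IDEAL_BLOCK_TIME = 600     # seconds between blocks, on average
HALFLIFE = 2 * 24 * 3600   # the "2d": two days of schedule drift
                           # halves (or doubles) the target

def asert_target(anchor_target, time_delta, height_delta):
    """Target for a block mined time_delta seconds and height_delta
    blocks after the anchor block.

    If the chain is exactly on schedule, the drift is zero and the
    target (hence difficulty) is unchanged. A chain running behind
    schedule (drift > 0) gets a higher target, i.e. lower difficulty;
    a chain running ahead of schedule gets a lower target.
    """
    drift = time_delta - IDEAL_BLOCK_TIME * height_delta
    return anchor_target * 2.0 ** (drift / HALFLIFE)
```

Because the schedule is absolute (measured from a fixed anchor rather than a recent window), sustained oscillation gains nothing and long-term drift is corrected automatically, which matches the stability and drift-resistance properties the consortium's announcement highlights.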

Many BCH supporters have said they don't want to see a split, while others believe that a split is inevitable. Bitcoin.com's CEO Dennis Jarvis discussed the situation on Twitter and said that it was sad to hear.

I hope everyone can come back together to work on the future roadmap. There are no good outcomes from forking/splitting for anyone who believes in the long-term value and usefulness of Bitcoin Cash, Jarvis tweeted. Bitcoin.com's CTO Emil Oldenburg also gave his opinion on Twitter.

A chain split would be terrible for BCH, Oldenburg said. We want BCH to win by being the easiest, most used, and most convenient payment option. Not win the crypto Darwin awards.

It's uncertain what will happen come November, when the upgrade is planned, if the signatories mentioned above go with the aserti3-2d DAA and ABC chooses to roll with Grasberg. Moreover, a code freeze is expected in ten days, on August 15, as usually happens before the official upgrade.

Additionally, Viabtc's founder Yang Haipo allegedly said on his Weibo account that Coinex and Viabtc will initiate a fork as well, leveraging the ticker BCC. On August 5, 2020, Bitcoin ABC developer Amaury Séchet tweeted about Yang Haipo's statements.

Viabtc's [Yang Haipo] announced a fork of Bitcoin Cash under the ticker BCC, Séchet tweeted on Wednesday. This is unfortunate, but also an amazing opportunity for those who have been unhappy with how things are going. Some will want to start a war. Those who want freedom must not let them.

What do you think about the arguments that are happening between Bitcoin Cash developers and community members? Let us know what you think in the comments section below.

Image Credits: Shutterstock, Pixabay, Wiki Commons

Disclaimer: This article is for informational purposes only. It is not a direct offer or solicitation of an offer to buy or sell, or a recommendation or endorsement of any products, services, or companies. Bitcoin.com does not provide investment, tax, legal, or accounting advice. Neither the company nor the author is responsible, directly or indirectly, for any damage or loss caused or alleged to be caused by or in connection with the use of or reliance on any content, goods or services mentioned in this article.

Continued here:
Bitcoin Cash Difficulty Algorithm Debate Heats Up With Fears of Another Chain Split - Bitcoin News

Evil AI: These are the 20 most dangerous crimes that artificial intelligence will create – ZDNet

From targeted phishing campaigns to new stalking methods: there are plenty of ways that artificial intelligence could be used to cause harm if it fell into the wrong hands. A team of researchers decided to rank the potential criminal applications that AI will have in the next 15 years, starting with those we should worry the most about. At the top of the list of most serious threats? Deepfakes.

By using fake audio and video to impersonate another person, the technology can cause various types of harm, said the researchers. The threats range from discrediting public figures in order to influence public opinion, to extorting funds by impersonating someone's child or relatives over a video call.

The ranking was put together after scientists from University College London (UCL) compiled a list of 20 AI-enabled crimes based on academic papers, news and popular culture, and got a few dozen experts to discuss the severity of each threat during a two-day seminar.

The participants were asked to rank the list in order of concern, based on four criteria: the harm it could cause, the potential for criminal profit or gain, how easily the crime could be carried out, and how difficult it would be to stop.

Although deepfakes might in principle sound less worrying than, say, killer robots, the technology is capable of causing a lot of harm very easily, and is hard to detect and stop. Relative to other AI-enabled tools, therefore, the experts established that deepfakes are the most serious threat out there.

There are already examples of fake content undermining democracy in some countries: in the US, for example, a doctored video of House Speaker Nancy Pelosi in which she appeared inebriated picked up more than 2.5 million views on Facebook last year.

UK organization Future Advocacy similarly used AI to create a fake video during the 2019 general election, which showed Boris Johnson and Jeremy Corbyn endorsing each other for prime minister. Although the video was not malicious, it stressed the potential of deepfakes to impact national politics.

The UCL researchers said that as deepfakes get more sophisticated and credible, they will only get harder to defeat. While some algorithms are already successfully identifying deepfakes online, there are many uncontrolled routes for modified material to spread. Eventually, warned the researchers, this will lead to widespread distrust of audio and visual content.

Five other applications of AI also made it to the "highly worrying" category. With autonomous cars just around the corner, driverless vehicles were identified as a realistic delivery mechanism for explosives, or even as weapons of terror in their own right. Equally achievable is the use of AI to author fake news: the technology already exists, stressed the report, and the societal impact of propaganda shouldn't be under-estimated.

Also keeping AI experts up at night are applications that will be so pervasive that defeating them will be near-impossible. This is the case with AI-infused phishing attacks, for example, which will be perpetrated via crafty messages impossible to distinguish from genuine ones. Another example is large-scale blackmail, enabled by AI's potential to harvest large personal datasets and information from social media.

Finally, participants pointed to the multiplication of AI systems used for key applications like public safety or financial transactions and to the many opportunities for attack they represent. Disrupting such AI-controlled systems, for criminal or terror motives, could result in widespread power failures, breakdown of food logistics, and overall country-wide chaos.

UCL's researchers labelled some of the other crimes that could be perpetrated with the help of AI as only "moderately concerning". Among them are the sale of fraudulent "snake-oil" AI for popular services like lie detection or security screening, and increasingly sophisticated learning-based cyber-attacks, in which AI could easily probe the weaknesses of many systems.

Several of the crimes cited could arguably be seen as a reason for high concern. For example, the misuse of military robots, or the deliberate manipulation of databases to introduce bias, were both cited as only moderately worrying.

The researchers argued, however, that such applications seem too difficult to push at scale in current times, or could be easily managed, and therefore do not represent as imminent a danger.

At the bottom of the threat hierarchy, the researchers listed some "low-concern" applications: the petty crime of AI, if you will. On top of fake reviews and fake art, the report also mentions burglar bots, small devices that could sneak into homes through letterboxes or cat flaps to relay information to a third party.

Burglar bots might sound creepy, but they could be easily defeated (in fact, they could pretty much be stopped by a letterbox cage) and they couldn't scale. As such, the researchers don't expect that they will cause huge trouble anytime soon; the real danger, according to the report, lies rather in criminal applications of AI that could be easily shared and repeated once they are developed.

UCL's Matthew Caldwell, first author of the report, said: "Unlike many traditional crimes, crimes in the digital realm can be easily shared, repeated, and even sold, allowing criminal techniques to be marketed and for crime to be provided as a service. This means criminals may be able to outsource the more challenging aspects of their AI-based crime."

The marketisation of AI-enabled crime, therefore, might be just around the corner. Caldwell and his team anticipate the advent of "Crime as a Service" (CaaS), which would work hand-in-hand with Denial of Service (DoS) attacks.

And some of these crimes will have deeper ramifications than others. Here is the complete ranking of AI-enabled crimes to look out for, as compiled by UCL's researchers:

AI-enabled crimes of high concern:

Deepfakes; driverless vehicles as a weapon; tailored phishing; disrupting AI-controlled systems; large-scale blackmail; AI-authored fake news.

AI-enabled crimes of moderate concern:

Misuse of military robots; snake oil; data poisoning; learning-based cyber-attacks; autonomous attack drones; denial of access to online activities; tricking face recognition; manipulating financial or stock markets.

AI-enabled crimes of low concern:

Burglar bots; evading AI detection; AI-authored fake reviews; AI-assisted stalking; forgery of content such as art or music.

Visit link:
Evil AI: These are the 20 most dangerous crimes that artificial intelligence will create - ZDNet