Artificial Intelligence and the Insurer – Lexology

No longer used solely by innovative technology companies, AI is now of strategic importance to more risk-averse sectors such as healthcare, retail banking, and even insurance. Built upon DAC Beachcroft's depth of experience in advising across the insurance market, this article explores a few ways in which artificial intelligence is changing the insurance industry.

How might AI change insurance?

Artificial intelligence (AI) is an increasingly pervasive aspect of modern life, thanks to its role in a wide variety of applications. The technological advancement and applicability of AI systems have exploded due to cheaper data storage, increased computing resources, and an ever-growing output of and demand for consumer data. As such, we expect to see change in several critical aspects of the insurance industry.

Of course, it is important to note that insurance is a large and complex industry. Even in light of the perceived advantages discussed above, insurers may not always find it easy to integrate AI within products or backend systems. A Capgemini survey revealed that as of 2018, only 2% of insurers worldwide had seen full-scale implementation of AI within their business, with a further 34% still in ideation stages. Furthermore, there are important ethical considerations which have yet to be addressed, with critics warning that AI could lead to detrimental outcomes, especially in relation to personal data privacy and hyper-personalised risk assessments. While more work needs to be done to understand the various implications of AI in insurance, it nevertheless remains an important and fascinating space to watch.


Eyenuk Successfully Fulfills Contract Awarded by Public Health England for Artificial Intelligence Grading of Retinal Images – BioSpace

60,000 Patient Image Sets from 6 Different Diabetic Eye Screening Programmes Analyzed Using EyeArt AI Eye Screening System

LOS ANGELES--(BUSINESS WIRE)-- Eyenuk, Inc., a global artificial intelligence (AI) medical technology and services company and the leader in real-world applications for AI Eye Screening, announced that it has successfully fulfilled the contract awarded by Public Health England (PHE) to use Eyenuk's EyeArt AI Eye Screening System to grade 60,000 patient image sets from 6 different National Health Service (NHS) Diabetic Eye Screening Programmes in England.

Diabetic retinopathy (DR) is a vision-threatening complication of diabetes and a leading cause of preventable vision loss globally.1 In England, an estimated 4.6 million people are living with diabetes, one-third of whom are at risk of developing DR. Diabetes has become a growing health concern as the number of people diagnosed with diabetes in the U.K. has more than doubled in the last 20 years.2

The U.K. has been leading the world in diabetic retinopathy screening, achieving patient uptake rates of over 80% (screening nearly 2.5 million diabetes patients annually),3 as compared with most parts of the world where typically less than half of diabetes patients receive annual eye screening.4 As a result, diabetic retinopathy is no longer the leading cause of blindness in the working age group in England.5 However, the growing diabetes population poses significant challenges ahead.

Public Health England (PHE) is an executive agency of the Department of Health and Social Care (DH) that oversees the NHS national health screening programmes. An independent Health Technology Assessment from the Moorfields Eye Hospital to determine the screening performance and cost-effectiveness of multiple DR detection AI solutions was conducted and published in 2016.6 Subsequently, PHE initiated a tender process seeking to commission an automated retinal image grading software to grade 60,000 patient image sets from multiple diabetic eye screening programmes.

At the end of the competitive tender process, the contract was awarded to Eyenuk.7 The National Diabetic Eye Screening Programme (NDESP) identified 6 local diabetic eye screening (DES) programmes to participate in the project with Eyenuk. The project aim was to compare the number of image sets categorised as having no disease, as determined by human graders (manual programme grading), with the number as determined by the EyeArt AI eye screening system. Results from this latest real-world analysis, together with results from previous assessments, have shown that the EyeArt system has excellent agreement, sensitivity, and specificity for detecting diabetic retinopathy.

"Eyenuk was honored to have been awarded the PHE contract for diabetic retinopathy grading, and we are gratified that our EyeArt AI system delivered excellent results when compared with six DES programmes in England," said Kaushal Solanki, Ph.D., founder and CEO of Eyenuk. "We look forward to expanding our work in the U.K. with hope to support all diabetic eye screening programmes in the future."

The independent Health Technology Assessment (HTA) from Moorfields Eye Hospital involving more than 20,000 patients was conducted to determine the screening performance and cost-effectiveness of multiple automated retinal image analysis systems. This study demonstrated that the EyeArt AI System delivered much higher sensitivity (i.e., patient safety) for DR screening than the other automated DR screening technologies investigated and that its use is a cost-effective alternative to the current, purely manual grading approach. The HTA also demonstrated that EyeArt's performance was not affected by ethnicity, gender, or camera type.

About the EyeArt AI Eye Screening System

The EyeArt AI Eye Screening System provides fully automated DR screening, including retinal imaging, DR grading against international standards, and the option of immediate reporting, during a diabetic patient's regular office visit. Once the patient's fundus images have been captured and submitted to the EyeArt AI System, the DR screening results are available in a PDF report in less than 60 seconds.

The EyeArt AI System was developed with funding from the U.S. National Institutes of Health (NIH) and is validated by the U.K. National Health Service (NHS). The EyeArt AI System has CE marking as a class IIa medical device in the European Union and a Health Canada license. In the U.S., the EyeArt AI System is limited by federal law to investigational use. It is designed to be General Data Protection Regulation (GDPR) and Health Insurance Portability and Accountability Act of 1996 (HIPAA) compliant.

VIDEO: Learn more about the EyeArt AI Eye Screening System for Diabetic Retinopathy

About Eyenuk, Inc.

Eyenuk, Inc. is a global artificial intelligence (AI) medical technology and services company and the leader in real-world AI Eye Screening for autonomous disease detection and AI Predictive Biomarkers for risk assessment and disease surveillance. Eyenuk's first product, the EyeArt AI Eye Screening System, is the most extensively validated AI technology for autonomous detection of DR. Eyenuk is on a mission to screen every eye in the world to ensure timely diagnosis of life- and vision-threatening diseases, including diabetic retinopathy, glaucoma, age-related macular degeneration, stroke risk, cardiovascular risk and Alzheimer's disease. Find Eyenuk online on its website, Twitter, Facebook, and LinkedIn.

http://www.eyenuk.com

1 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4657234/
2 https://www.diabetes.org.uk/about_us/news/diabetes-prevalence-statistics
3 https://www.gov.uk/government/publications/diabetic-eye-screening-2016-to-2017-data
4 K. Fitch, T. Weisman, T. Engel, A. Turpcu, H. Blumen, Y. Rajput, and P. Dave. Longitudinal commercial claims-based cost analysis of diabetic retinopathy screening patterns. Am Health Drug Benefits. 2015;8(6):300-308.
5 G. Liew, M. Michaelides, C. Bunce. A comparison of the causes of blindness certifications in England and Wales in working age adults (16-64 years), 1999-2000 with 2009-2010. BMJ Open, Vol. 4 (2014), No. 2.
6 Adnan Tufail, Venediktos V Kapetanakis, Sebastian Salas-Vega, Catherine Egan, Caroline Rudisill, Christopher G Owen, Aaron Lee, et al. An Observational Study to Assess If Automated Diabetic Retinopathy Image Assessment Software Can Replace One or More Steps of Manual Imaging Grading and to Determine Their Cost-Effectiveness. Health Technology Assessment 20, no. 92 (December 2016). https://doi.org/10.3310/hta20920
7 https://www.contractsfinder.service.gov.uk/Notice/13b069bd-97b4-40b6-ac66-337d1526d1e6



COVID-19 and privacy: artificial intelligence and contact tracing in combatting the pandemic – Lexology

COVID-19 is having a debilitating effect on people's health and their economic well-being. People are being forced by social distancing/isolating edicts and provincial emergency closure orders to stay home. As we appear to be slowly emerging from the first wave of this health and economic emergency, people are rightly asking how we can gradually start to re-open the economy and resume some semblance of normalcy without triggering substantial negative health rebounds or violating privacy norms or rights.

Governments, medical practitioners, researchers, policy-makers and others have been feverishly pursuing solutions to this challenge. Medical solutions such as vaccines and treatment methods, including the use of antibodies and experimental medications such as placenta-based cell therapy, are being pursued with understandable urgency. Testing for COVID-19 and for persons with COVID-19 antibodies, to identify lower-risk groups of individuals for whom the emergency measures could be relaxed, is an obvious strategy being debated. German researchers are planning to introduce immunity certificates which theoretically could be used to identify some of these individuals. So far, these conversations about testing have focused only on voluntary and not mandatory testing for the virus, thus not implicating privacy concerns, at least insofar as the testing results are used only for diagnosing and treating the individuals tested.

Artificial intelligence solutions

Artificial intelligence technologies are being used in varied ways to combat the pandemic. For example, AI has been used to identify and track the spread of the virus. A Canadian company, BlueDot, was among the first in the world to identify the emerging risk from COVID-19 in Hubei province and to publish a first scientific paper on COVID-19, accurately predicting its global spread using its proprietary models. AI technologies such as chatbots are being used as virtual assistants to provide information about the virus. AI is also being used to help diagnose the disease, including via the use of diagnostic robots, to predict which patients will likely develop severe symptoms requiring treatment, to develop drugs, and to find cures, including through literature searches for clues buried in heaps of scientific papers. Data-mining operations have been conducted on large datasets to build predictive computer models that provide real-time information about health services, showing where demand is rising and where critical equipment needs to be deployed. AI has also found uses in monitoring crowd formations to help enforce social distancing rules. Some of these uses raise privacy compliance issues as they involve, amongst other things, the collection, use, aggregation, analysis and disclosure to third parties of datasets that may or may not include de-identified or re-identifiable data.

Other uses of AI for tracking and public surveillance purposes also raise privacy compliance issues and, depending on who is conducting these activities and for what purposes, issues under the Canadian Charter of Rights and Freedoms. Examples include tracking using location data stored on or generated by smartphones, scanning public spaces for potentially affected people with fever-detecting infrared cameras, and facial recognition and other computer vision surveillance technologies.

Contact tracing solutions

A solution that is increasingly being relied upon is COVID-19 contact tracing. Public Health Ontario, in an online notice linking to a Government of Canada website portal soliciting volunteers for the National COVID-19 Volunteer Recruitment Campaign, defined contact tracing as "a process that is used to identify, educate and monitor individuals who have had close contact with someone who is infected with a virus. These individuals are at a higher risk of becoming infected and sharing the virus with others. Contact tracing can help the individuals understand their risk and limit further spread of the virus."

Contact tracing as an epidemic control measure is not new. It is infectious disease control 101, often deployed against other illnesses such as measles, SARS, typhoid, meningococcal disease and sexually transmitted infections like HIV/AIDS. The use of smartphone technologies and various other technologies to help identify and trace infected individuals has also been proposed in connection with other diseases such as Ebola.

Contact tracing using location tracking capabilities to combat COVID-19 has already been implemented in other countries such as South Korea and Taiwan. It has also been deployed in China using a plugin app to the ubiquitous WeChat and Alipay apps. Use of the plugin was not compulsory in general, but was required to move between certain areas and public spaces. A central database collected user data, which was analyzed using AI tools.

Singapore deployed its TraceTogether mobile application to enable community-driven contact tracing, where participating devices exchange proximity information whenever an app detects another device with the TraceTogether app installed. It uses Bluetooth Relative Signal Strength Indicator (RSSI) readings between devices across time to approximate the proximity and duration of an encounter between two users. This proximity and duration information is stored in an encrypted form on a person's phone for 21 days on a rolling basis. No location data is collected. If a person unfortunately falls ill with COVID-19, the Ministry of Health (MOH) would work with the individual to map out 14 days' worth of activity for contact tracing. And if the person has the TraceTogether app installed, he/she is required by law to assist in the activity mapping of his/her movements and interactions and may be asked to produce any document or record in his/her possession, including data stored by any apps on the person's phone.
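The core of this approach, approximating proximity and duration from Bluetooth signal strength, can be illustrated with a short sketch. The code below is an assumption-laden illustration, not TraceTogether's actual algorithm: the constants `TX_POWER` and `PATH_LOSS_EXPONENT`, and the 2-metre/15-minute thresholds, are hypothetical calibration values chosen for the example.

```python
import math
from dataclasses import dataclass

# Illustrative constants -- not TraceTogether's real calibration values.
TX_POWER = -59            # assumed RSSI (dBm) measured at 1 metre
PATH_LOSS_EXPONENT = 2.0  # assumed free-space propagation exponent

def rssi_to_distance(rssi: float) -> float:
    """Estimate distance in metres from an RSSI reading using the
    log-distance path loss model: rssi = TX_POWER - 10*n*log10(d)."""
    return 10 ** ((TX_POWER - rssi) / (10 * PATH_LOSS_EXPONENT))

@dataclass
class Encounter:
    peer_id: str
    readings: list  # (timestamp_seconds, rssi) tuples collected over time

def is_close_contact(enc: Encounter, max_metres=2.0, min_seconds=900) -> bool:
    """Flag an encounter if the peer stayed within max_metres for at
    least min_seconds (e.g. 15 minutes) -- both thresholds illustrative."""
    close = [(t, r) for t, r in enc.readings if rssi_to_distance(r) <= max_metres]
    if len(close) < 2:
        return False
    duration = close[-1][0] - close[0][0]
    return duration >= min_seconds

# Example: a peer observed at strong signal for 20 minutes.
enc = Encounter("peer-abc", [(0, -55), (600, -60), (1200, -58)])
print(is_close_contact(enc))  # True
```

In the real app, only records like these (peer identifier, proximity, duration) are stored encrypted on the phone; no location is ever derived or kept.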

The European Data Protection Supervisor (EDPS) has also called for a pan-European mobile app to track the spread of the virus in EU countries.

It may not be realistically possible to stem the COVID-19 virus and return to a semblance of normalcy without using a sophisticated contact tracing technology. It would take an army of coronavirus trackers to attempt to curb the spread of the disease using traditional contact tracing techniques. Further, even if contact tracing technologies would not replace humans, they could speed up the process of tracking down possibly infected contacts and play a vital role in controlling the epidemic. A research article published in Science concluded:

" that viral spread is too fast to be contained by manual contact tracing, but could be controlled if this process was faster, more efficient and happened at scale. A contact-tracing App which builds a memory of proximity contacts and immediately notifies contacts of positive cases can achieve epidemic control if used by enough people. By targeting recommendations to only those at risk, epidemics could be contained without need for mass quarantines (lock-downs) that are harmful to society. "

Organizations, recognizing the challenges in combatting the pandemic, have started to propose privacy-sensitive mobile phone based contact tracing solutions that could potentially be used in Canada. MIT researchers, for example, are developing a system that augments manual contact tracing by public health officials, while purporting to preserve the privacy of individuals. The system relies on short-range Bluetooth signals emitted from people's smartphones. These signals represent random strings of numbers, likened to "chirps" that other nearby smartphones can remember hearing. If a person tests positive, he/she can upload the list of chirps the person's phone has put out in the past 14 days to a database. Other people can then scan the database to see if any of those chirps match the ones picked up by their phones. If there's a match, a notification will inform that person that they may have been exposed to the virus, and will include information from public health authorities on next steps to take.
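The matching step described above can be sketched in a few lines. This is a simplified illustration of the general idea, not the MIT researchers' actual protocol; the chirp size and the `Phone` class are invented for the example.

```python
import secrets

CHIRP_BYTES = 16  # size of a random chirp; illustrative, not the real spec

def new_chirp() -> str:
    """A random identifier broadcast over Bluetooth, rotated frequently
    so it cannot be linked back to a person."""
    return secrets.token_hex(CHIRP_BYTES)

class Phone:
    def __init__(self):
        self.sent = []      # chirps this phone broadcast in the last 14 days
        self.heard = set()  # chirps overheard from nearby phones

    def broadcast(self) -> str:
        chirp = new_chirp()
        self.sent.append(chirp)
        return chirp

    def hear(self, chirp: str):
        self.heard.add(chirp)

    def check_exposure(self, published: list) -> bool:
        """Compare the database of chirps uploaded by positive cases
        against the chirps this phone overheard locally."""
        return any(c in self.heard for c in published)

# Two phones near each other; one person later tests positive.
alice, bob = Phone(), Phone()
bob.hear(alice.broadcast())          # Bob's phone remembers Alice's chirp
database = alice.sent                # Alice uploads the chirps she sent
print(bob.check_exposure(database))  # True: Bob learns of possible exposure
```

Note that the matching happens on each user's own phone, which is the privacy-preserving point: the database learns nothing about who heard which chirps.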

Last week Google and Apple announced they are jointly launching a comprehensive solution that includes application programming interfaces (APIs) and operating system-level technology to assist in enabling contact tracing while reportedly maintaining strong protections for user privacy. In May, both companies plan to release APIs that will enable interoperability between Android and iOS devices using apps from public health authorities. These official apps will be available for users to download via their respective app stores. Later, Apple and Google will work to enable a broader Bluetooth-based contact tracing platform by building this functionality into the underlying platforms, which would allow more individuals to participate, if they choose to opt in, as well as enable interaction with a broader ecosystem of apps and government health authorities. According to Apple and Google, "Privacy, transparency, and consent are of utmost importance in this effort, and we look forward to building this functionality in consultation with interested stakeholders. We will openly publish information about our work for others to analyze."


As part of the partnership, Google and Apple released draft technical documentation, including information on how user privacy will be maintained, in their Bluetooth and cryptography specifications and framework documentation. The privacy-enhancing features are described as follows: explicit user consent is required; the solution does not collect personally identifiable information or user location data; the list of people you have been in contact with never leaves your phone; people who test positive are not identified to other users, Google, or Apple; and the app will only be used for contact tracing by public health authorities for COVID-19 pandemic management.
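The general cryptographic idea behind such a scheme, deriving short-lived broadcast identifiers from a secret key that never leaves the phone, can be sketched as follows. This is a simplified stand-in, not the actual Apple/Google specification: the labels, key sizes, and HMAC-based derivation below are illustrative assumptions.

```python
import hashlib
import hmac
import os

def daily_key(tracing_key: bytes, day_number: int) -> bytes:
    """Derive a per-day key from the device's secret tracing key.
    Simplified stand-in for the spec's key-derivation step."""
    return hmac.new(tracing_key, b"CT-DTK" + day_number.to_bytes(4, "little"),
                    hashlib.sha256).digest()[:16]

def rolling_proximity_id(day_key: bytes, time_interval: int) -> bytes:
    """Derive the short-lived identifier actually broadcast over Bluetooth,
    rotated every interval so broadcasts cannot be linked to one device."""
    return hmac.new(day_key, b"CT-RPI" + time_interval.to_bytes(1, "little"),
                    hashlib.sha256).digest()[:16]

# The secret tracing key is generated on-device and never leaves the phone.
tracing_key = os.urandom(32)

# Identifiers rotate across time intervals within a day.
dk = daily_key(tracing_key, day_number=18367)
broadcasts = [rolling_proximity_id(dk, i) for i in range(3)]

# If the user tests positive, publishing only the day key lets other
# phones recompute these identifiers and match them against what they
# overheard -- without ever revealing the long-term tracing key.
recomputed = rolling_proximity_id(dk, 1)
print(recomputed == broadcasts[1])  # True
```

The design choice to publish only per-day keys, rather than the long-term key or raw contact lists, is what allows matching to happen locally on each phone while keeping both parties pseudonymous.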

The UK Government confirmed that the UK's National Health Service (NHS) is also working on a contact tracing system with two technology companies. NHSX, the technological branch of the NHS, has reportedly been working on the software alongside Apple and Google. Experts in clinical safety and digital ethics are also involved. Pre-release testing is scheduled for next week. Apple also launched COVID-19 screening tools built in collaboration with the U.S. Centers for Disease Control and Prevention (CDC), Federal Emergency Management Agency (FEMA), and the White House. It promises that the tools include strong privacy and security protections and that Apple will never sell the data it collects.

It is unclear what contact tracing technologies the governments of Canada, the provinces or organizations operating in Canada will deploy. However, as contact tracing solutions using mobile phone technologies all involve at least some collection, use, and disclosure of personal data, their adoption will necessarily be influenced by a variety of factors, including who implements the solutions (e.g. government health authorities and/or private organizations), whether the operators are subject to privacy laws, and whether they are given any special immunities from liability under emergency orders.

Privacy law issues

Canada has a myriad of federal and provincial laws across the country that could apply to any proposed contact tracing solution. Much would depend on the public or private entities, or combinations of organizations, that would be involved.

Federally, the Privacy Act applies to departments and ministries of the Government of Canada. This legislation includes provisions that regulate the uses and disclosures of personal information under the control of a government institution. The Privacy Act applies to Health Canada. (Health Canada also regulates medical devices under the Food and Drugs Act. Consideration may need to be given as to whether a contact tracing system, which can include software as a medical device (SaMD) and medical device data systems (MDDS), requires Health Canada approval.) Canada's comprehensive privacy legislation, PIPEDA, could also be implicated if, for example, personal information is collected, used or disclosed by an organization in the course of commercial activities.

There are also a myriad of provincial laws that could apply. There are comprehensive privacy regimes in Quebec, Alberta, and British Columbia, and health privacy laws such as those in the provinces of Ontario, New Brunswick, Newfoundland and Labrador, and Nova Scotia. There are also privacy statutes that apply to provincial institutions. For example, in Ontario the Personal Health Information Protection Act (PHIPA) applies to health information custodians, which include physicians, hospitals, and medical officers of health. The Municipal Freedom of Information and Protection of Privacy Act (MFIPPA) applies to various institutions including municipalities and boards of health. There are also statutory and common law invasion of privacy claims across the country.

While there are some similarities between privacy laws across the country, there are also key differences. This includes differences in the standards for obtaining consents from individuals and the types of exemptions federal and provincial authorities and private organizations might look for. There is not, for example, a common framework like there is in the European Union under the GDPR which contains specific exemptions for processing data including when processing is necessary for reasons of substantial public interest and specific exemptions for health data. (This is one area that may be ripe for reform in Canada.)

There are numerous privacy considerations that could be taken into account in evaluating the adoption of technologies to tackle the COVID-19 epidemic. As for contact tracing technologies, the factors may include the architecture and protocols used by the solution, who has access to any data (including public authorities) and for what purposes, whether the use of the solution is voluntary or mandatory, whether the data is encrypted, whether users are anonymous, what is revealed by infected users to individuals they come into contact with, whether the system can be exploited by external parties, and how reliable and secure the system is.

Concluding remarks

All Canadians must certainly share a common goal of overcoming this pandemic. Until a vaccine is publicly available, measures to resume at least some of the economic and other activities that have been shut down will need to be considered. It seems likely that innovative new technologies such as artificial intelligence and contact tracing technologies could be deployed to foster this.

Artificial intelligence and contact tracing tools will not be a panacea that alone will solve this crisis. Artificial intelligence can be helpful, but one has to be cautious in evaluating overhyped claims about what AI can achieve and whether AI firms have the data and expertise to deliver on their promises. Experience with contact tracing, such as in Singapore, has shown shortcomings, including the potential for not flagging cases where the virus has spread and for producing false positives. Moreover, we won't be able to re-open the country without much more, including widespread testing programs.

Privacy laws should not impede uses of technologies that can help ameliorate this emergency situation and which maintain an appropriate balance of privacy interests. Privacy laws in Canada have always recognized the need for balancing of interests. Privacy, as a moral or legal principle, does not trump all other laws or interests.

Ethical arguments for using mobile phone based contact tracing in privacy-sensitive ways were cogently expressed by the University of Oxford researchers in the Science research article referred to above:

" Successful and appropriate use of the App relies on it commanding well-founded public trust and confidence. This applies to the use of the App itself and of the data gathered. There are strong, well-established ethical arguments recognizing the importance of achieving health benefits and avoiding harm. These arguments are particularly strong in the context of an epidemic with the potential for loss of life on the scale possible with COVID-19. Requirements for the intervention to be ethical and capable of commanding the trust of the public are likely to comprise the following. i. Oversight by an inclusive and transparent advisory board, which includes members of the public. ii. The agreement and publication of ethical principles by which the intervention will be guided. iii. Guarantees of equity of access and treatment. iv. The use of a transparent and auditable algorithm. v. Integrating evaluation and research in the intervention to inform the effective management of future major outbreaks. vi. Careful oversight of and effective protections around the uses of data. vii. The sharing of knowledge with other countries, especially low- and middle-income countries. viii. Ensuring that the intervention involves the minimum imposition possible and that decisions in policy and practice are guided by three moral values: equal moral respect, fairness, and the importance of reducing suffering. "

Some have argued that abridgements of privacy and democratic rights, even in emergency situations, create risks that measures may become permanent or hard to reverse. However, in a thoughtful article recently published in the MIT Technology Review by Genevieve Bell, the director of the Autonomy, Agency, and Assurance Institute at the Australian National University and a senior fellow at Intel, the author concludes that the present circumstances justify a response to this pandemic that should be subject to a sunset clause.

" The speed of the virus and the response it demands shouldnt seduce us into thinking we need to build solutions that last forever. Theres a strong argument that much of what we build for this pandemic should have a sunset clausein particular when it comes to the private, intimate, and community data we might collect. The decisions we make to opt in to data collection and analysis now might not resemble the decisions we would make at other times. Creating frameworks that allow a change in values and trade-off calculations feels important too.There will be many answers and many solutions, and none will be easy. We will trial solutions here at the ANU, and I know others will do the same. We will need to work out technical arrangements, update regulations, and even modify some of our long-standing institutions and habits. And perhaps one day, not too long from now, we might be able to meet in public, in a large gathering, and share what we have learned, and what we still need to get rightfor treating this pandemic, but also for building just, equitable, and fair societies with no judas holes in sight. "

First published @ barrysookman.com.


Zoom will let paying customers pick which data center their calls are routed from – The Verge

Zoom will let paying customers pick which data centers their calls can be routed through starting April 18th, the company announced in a blog post today. The changes come after a report from the University of Toronto's Citizen Lab found that Zoom generated encryption keys for some calls from servers in China, even if none of the people on the call were physically located in the country.

Zoom says paying customers will be able to opt in or out of a specific data center region, though you won't be able to opt out of your default region. Zoom currently groups its data centers into these regions: Australia, Canada, China, Europe, India, Japan/Hong Kong, Latin America, and the US.

Users on the company's free tier can't change their default data center region, though any of those users outside of China won't have their data routed through China, according to Zoom.

On April 3rd, Citizen Lab published its report describing how Zoom's encryption scheme sometimes used keys generated by servers in China. That could mean, in theory, that Chinese officials could demand Zoom disclose those encryption keys to the government.

Zoom CEO Eric Yuan said that in the rush to add server capacity to meet the massive need for Zoom during the COVID-19 pandemic, "we failed to fully implement our usual geo-fencing best practices" and that it was possible certain meetings were allowed to connect to systems in China. This wasn't the intended behavior, and the company has since corrected the issue, according to Yuan.

Yuan announced in an April 1st blog post that Zoom would be implementing a 90-day feature freeze to focus on fixing privacy and security issues. He also said Zoom jumped from 10 million daily users in December all the way up to more than 200 million daily users in March as people flocked to the service while at home due to the pandemic.


How Working Remote And Protecting Encryption Is Natural For This Blockchain Company – Forbes

As most of us look to avoid Zoom Bombings, whether by some hacker in a hoodie on the Web or your dog or cat wanting your attention, the challenges of working from home are perhaps the greatest obstacles that the vast majority of Americans face as we navigate the COVID-19 pandemic. These disruptions raise the question of how safe we are on these electronic devices in terms of our privacy, both at a personal level and for corporations and their clients. As the U.S. Senate considers a new piece of legislation called the EARN IT Act, many are concerned the bill would kill end-to-end encryption, an element of technology that allows for private communication. This concern comes at a time when staying at home is the only option.


For one company in the blockchain industry, remote working is nothing new - prior to, during, and after COVID-19, all employees at this company have always worked remotely. In speaking with Corey Petty, Chief Security Officer of Status, a company offering an open-source Ethereum-based app that includes a private chat messenger, crypto-wallet and Web 3 browser, I learned some important lessons on how to work from home as an organization and as an individual. Additionally, I was able to understand the importance of end-to-end encryption and the backlash against new legislation in Congress that may force companies to stop using this type of cryptography.

In discussing the keys to success in remote working, Petty commented, "It starts with understanding communication within the organization and using the available tooling that is online today... Especially for a company like Status, where we are distributed across the globe, time zones become increasingly a part of that communication overhead, and dealing with asynchronous communication has to be something that you are used to. It's establishing a digital workplace."

It must be hard if you are used to simply walking over to a colleague to ask a quick question, and Petty notes that establishing a digital workplace is really hard to do depending on how a company is set up, and can be unique to the individual processes businesses go through. Leadership is key, and Petty notes, "Having a very good COO who knows what they are doing and how to communicate is pivotal... [a company] has to have the ability to adapt and change how they operate very quickly or they are not going to be able to survive."

He notes it is important to manage the work-life balance as well and to separate your work and living spaces. Additionally, organizational time management, such as setting up regular meetings with the groups you need to be talking to and using available videoconferencing applications, is critical so that as an individual you have a better idea of how to organize your time and get work done. Don't expect to find Petty in a Zoom chat, however. Based on his expertise in security, he notes, "I would not use Zoom." Petty also notes that for companies like Status the transition is relatively easy because they do not make physical products: "Most of what we do is software development or protocol development, so the digital aspect of our company is almost 100%, whereas a lot of companies who don't have that opportunity need to be creative on who they can send home and who they can't and organize those processes accordingly."

The Policy Push To End End-To-End Encryption In The United States

In terms of surviving, Status and other blockchain companies that see encryption as essential, not only to their business models but also to the principles of maintaining anonymity and privacy in a digital workplace, are concerned about new legislation in the Senate. The EARN IT Act, introduced by the Chair of the Senate Judiciary Committee, Senator Lindsey Graham (R-SC), would require internet platforms such as Facebook or Twitter to "earn" the immunity from lawsuits over what is posted on their platforms that they currently receive automatically.

The bill carves out an exception to Section 230 of the Communications Decency Act, which normally provides that immunity, for cases of child sexual abuse, and requires companies to follow a list of best practices developed under the oversight of a commission headed by the U.S. Attorney General.

Many organizations are not taking the proposal lightly and are pushing back. The Electronic Frontier Foundation has stated that the EARN IT Act is unconstitutional and violates First and Fourth Amendment rights, and it is urging people to call their Senators and ask them to vote no on the legislation.

Petty said he "...sees the exception to Section 230 as an enforcement tool for whatever leverage the EARN IT Act provides, and quite frankly, an underhanded one. It essentially turns a voluntary list of best practices into a mandatory one, for operating a tech company in the U.S. without the legal protections of Section 230 is infeasible."

Encryption probably faces its most challenging fight ever, and blockchain companies should take heed. With the Chair and Ranking Member of the Senate Judiciary Committee, along with 10 co-sponsors, recommending the bill's passage, and with both the previous President and the current one actually agreeing on the topic, this bill may be as strong in politics as end-to-end encryption is in technology. As former President Obama noted at a SXSW conference in 2016, if the government cannot crack encryption, it is like everyone walking around with a Swiss bank account in their pocket.


Petty notes encryption is "the last bastion of a strong defense, and weakening encryption usually comes at the expense of the defender, not the attacker... The process of introducing backdoors and selective access to encryption schemes is not one that should be rushed... There is an overwhelming consensus that this is the wrong move to take and it's moving in the wrong direction."

Although the verdict on end-to-end encryption is not yet in, one thing does appear certain: decentralized companies from the blockchain space have a lot to offer, both in protecting company security and in tips for working from home.

See the original post here:
How Working Remote And Protecting Encryption Is Natural For This Blockchain Company - Forbes

What is homomorphic encryption and how can it help in elections? | Microsoft On The Issues – Microsoft

Confidence in the electoral system is fundamental to a healthy democracy. But when a Gallup poll last year asked people if they had faith in the honesty of elections, 59% of Americans said they did not. The only five countries where confidence in elections is lower, according to Gallup, are Lithuania, Turkey, Latvia, Chile and Mexico.

Elections tend to be the point at which most people come into closest contact with their country's political processes: the moment when they cast their vote and have a say in who will represent them in local, regional or national bodies. The Gallup finding, that only 40% of Americans said they are confident in the honesty of elections in the country, relates to a number of factors, the poll says.

From fears of interference in the way an election is run, to failings in the way votes are counted, there is clearly an issue here waiting to be resolved. Data encryption could help to rebuild public trust in democracy by creating a greater sense of connection between the electorate and the results of the elections in which they take part.

[Read More: What is ElectionGuard?]

Using data without losing privacy

Encrypting data is commonplace. Emails, message platforms, e-commerce and online banking are just some of the everyday activities that are made safer and more secure because of it. There is also a role for encryption in helping foster greater trust in the democratic process.

Historically, however, encryption has not been used widely to protect voting data. That's because data that's been encrypted tends to be static; it isn't possible to do much with static data, other than keep it safe and secure.

But what if it was possible to take that data in its encrypted form and perform calculations and computations without first decrypting it? All the encrypted votes could then be added together, counted, tallied and verified while still in their safe and protected state.

This is one of the things that can be done using what is known as homomorphic encryption.

Josh Benaloh, Senior Cryptographer at Microsoft Research, explains how it works: "The key thing is that this can help address the confidence shortfall," he says. "With regular encrypted data, all you can do is decrypt it. It's a little like putting something in a safe for transport or safekeeping. Eventually, all you're going to do is take it out."

"But homomorphic encryption allows you to compute on encrypted data without the need to decrypt it first."

In a wider context, it would allow an organization to do more than just store encrypted data in the cloud. It would be possible to perform computational tasks on it while keeping it completely secure, getting an encrypted result as the output.

Adding value

Homomorphic encryption offers the ability to perform additions on encrypted data, which unlocks a number of potentially useful scenarios. It becomes possible to review salary data and calculate the average or the mean salary paid to an organization's employees, for example, all while keeping the privacy of individual employees and their rates of pay safe and secure.

"If you think about what an election is, it all starts with ones and zeros," Benaloh says. "One is 'I selected that option' and zero is 'I didn't select that option.' Tallying the election is just adding how many selected one option, how many selected a different option: adding all the ones and zeros."

[Read More: Another step in testing ElectionGuard]

"Thanks to the homomorphic property, you take all the individual encrypted votes and aggregate them into an encrypted tally, and then you can decrypt to get the separated-out tallies without compromising the privacy of individual votes."

This delivers a full record of how many votes were cast for each candidate while safeguarding the secrecy of the ballot. But it does something else. It makes it possible to offer voters end-to-end verifiability.
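The arithmetic Benaloh describes can be sketched with a toy additively homomorphic scheme. The code below implements a miniature Paillier-style cryptosystem with deliberately tiny, insecure parameters; it is an illustration of the homomorphic property only, and real systems such as ElectionGuard use far larger keys plus zero-knowledge proofs that each ballot really encodes a 0 or a 1.

```python
# Toy additively homomorphic (Paillier-style) vote tally.
# Illustration only: tiny parameters, no ballot-validity proofs.
from math import gcd
import secrets

p, q = 293, 433                 # small demo primes
n = p * q
n2 = n * n
g = n + 1                       # standard Paillier choice g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # modular inverse mod n (Python 3.8+)

def encrypt(m):
    while True:
        r = secrets.randbelow(n)
        if r > 0 and gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Each ballot encrypts 1 ("I selected that option") or 0.
ballots = [1, 0, 1, 1, 0, 1]
encrypted = [encrypt(b) for b in ballots]

# Homomorphic tally: multiplying ciphertexts adds the plaintexts,
# so the sum is computed without decrypting any individual vote.
tally_ct = 1
for c in encrypted:
    tally_ct = (tally_ct * c) % n2

print(decrypt(tally_ct))  # 4
```

Note that only the aggregate ciphertext is ever decrypted; each individual encrypted ballot stays sealed, which is exactly the property that protects voter privacy while still producing a verifiable count.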

All of this was put to the test during the Microsoft ElectionGuard pilot in Fulton, Wisconsin in February 2020. The ElectionGuard software encrypted each voter's choice before generating a ballot paper and tracking number for them. Voters received a unique code as part of their encrypted ballot, which enabled them to access a post-election verification platform. That platform would read the code and confirm that the encrypted vote associated with it had been included in the count, without revealing its contents.

Demonstrating to an individual voter that their vote is secure and their identity protected is clearly a necessary part of maintaining election confidence. If there were ever any doubts over those two factors, people would be forgiven for losing trust in the democratic process.

Homomorphic encryption now offers an undeniable way of verifying the accuracy of each vote cast, too. This may not be the silver bullet that restores faith in the electoral process, but it is an important part of demonstrating to people the robustness of the system to which they entrust their democratic right.

For more on Microsoft's Defending Democracy Program, visit On the Issues. And follow @MSFTIssues on Twitter.

Go here to see the original:
What is homomorphic encryption and how can it help in elections? | Microsoft On The Issues - Microsoft

The water utility mess continues in Jyväskylä: city accuses councillors of publishing classified information on the consultant's success fee - Yle

On Tuesday, the Jyväskylä city board decided that Centre Party councillors Aila Paloniemi and Kirsi Knuuttila had undermined confidentiality in the preparation of city matters. The decision relates to an opinion piece Paloniemi and Knuuttila published in January in the newspaper Keskisuomalainen, which disclosed previously unpublished information about the proposed sale of a stake in the city-owned utility Alva.

In the piece, Paloniemi and Knuuttila reported that the consulting firm KPMG, which prepared the report on the sale, would receive a success fee, i.e. additional compensation, if the city ended up selling a minority stake in Alva.

The contents of the agreement and the success fee had been disclosed only to city board members. The information was classified from councillors and residents.

Knuuttila, who sits on the city council, told Yle that she had received the information verbally from a reliable source in mid-January. She said she then contacted the city board and the preparing officials and asked for the confidentiality to be lifted so that the information could be made public.

The board members saw no problem with the success fee and did not declassify it. "I was told I could appeal to the administrative court and try to have it declassified that way, but such a decision would take half a year. We decided to publish the opinion piece because we felt we had no other options," Knuuttila said.

Legal grounds for sanctions not found

According to the city board's Tuesday decision, the success fee and the other contents of the commission agreement are confidential information that should not have been leaked to the public.

The decision is justified on the grounds that the contract details had been given to board members in confidence, and that the procedure safeguards open preparation between the preparing officials and city board members.

The city board based its decision on the Act on the Openness of Government Activities and stated that freedom of expression does not override data secrecy provisions.

No legal basis was found, however, for filing a police report over Paloniemi's and Knuuttila's actions. The councillors therefore face no sanctions.

Paloniemi finds the decision puzzling.

"This decision is based purely on the board's view that we have undermined trust. I would ask whether the board has considered how much concealing these matters undermines local people's trust in decision-making," she said.

Koivisto: trust in the consultants' professional pride

Mayor Timo Koivisto commented briefly to Yle. According to him, the city board's decision is unequivocal and based on the law.

"The evening session is a statutory procedure that is confidential. If someone decides to leak those matters to the public, trust suffers."

According to Kirsi Knuuttila, information is often confidential at the preparation stage, but once a decision has been made, the secrecy usually lapses.

She finds it peculiar and worrying that the city has wanted to conceal information about its use of money and about the impact analyses underlying the decision-making process.

"I am worried about the transparency of the city's decision-making. If a certain outcome of the study is financially advantageous to the firm conducting it, how can the study's findings be considered reliable?" Knuuttila asks.

Koivisto sees no problem. He said he trusts the consultants' skill and professional pride.

"It is a peculiar idea that a consultant could steer the city's decision-making, or that we would not notice if someone produced purpose-driven calculations. Besides, a success fee is standard practice," he said.

According to Koivisto, the decision-making process has been sufficiently transparent, since the subject has been covered in a coherent and organized way in two city council workshops.

The actual decision on selling a minority stake in Alva would be made in June 2020 at the earliest. By then, according to Koivisto, the contents of the commission agreement, including the success fees, should be disclosed to the council and possibly also to the public.

"So far we have not considered it topical," the mayor said.

Researcher: confidentiality may bind only those present at the evening session

In light of the legislation, it is not at all unambiguous that Paloniemi and Knuuttila acted wrongly in disclosing the information to the public. The confidentiality of the city board's evening session binds only those present at it, points out Riku Neuvonen, associate professor of public law.

"Unless the matter is one whose dissemination is otherwise criminalized, outsiders are not bound by the same confidentiality or criminal liability. So if person X attends a meeting and tells its contents to Y, Y is not necessarily bound by secrecy or confidentiality. This is, in practice, how the media get most of their leaks," Neuvonen says.

In Paloniemi's and Knuuttila's case, the councillors also did not receive the information by virtue of an official position. Strictly speaking, the information could therefore have been given to anyone.

The information belongs to councillors and residents

According to Neuvonen, the case is not one-sided from a freedom-of-expression point of view either.

"Here we are close to freedom of expression overriding the secrecy criterion. This is a locally important issue that attracts a great deal of local political and even national interest. The fact that consultants receive fees from the sale of city property is something that councillors and even residents are entitled to know," Neuvonen says.

In addition, the contract has already been concluded; this is not a situation where consultants are, for example, still being tendered, in which case publication could affect their business. Neuvonen finds it hard to see a justification for why the information could not be published.

Patient data, for example, is clear-cut in terms of its degree of secrecy, according to the researcher. Business secrets, by contrast, are a somewhat murkier area.

"Especially when the attempt is to keep secret a commission agreement that has already been concluded. It is worth remembering that an authority's own view of secrecy can be contested in court. A document can also be partially secret, meaning it may contain parts that could be made public."

The processing of the sale of a minority stake in Alva was suspended in February, when concerns about water privatization rose strongly in the public debate. As a consequence, the study by the consulting firm KPMG was also suspended.


Read more:
The water utility mess continues in Jyväskylä: city accuses councillors of publishing classified information on the consultant's success fee - Yle

Signal Speaks Out About The Evils Of The EARN IT Act – Techdirt

from the speak-out,-in-encrypted-fashion dept

Signal, the end-to-end encrypted app maker, doesn't really need Section 230 of the Communications Decency Act. It can't see what everyone's saying via its offering anyway, so there's little in the way of moderation to do. But, still, it's good to see it come out with a strong condemnation of the EARN IT Act, which has been put forth by Senators Lindsey Graham, Richard Blumenthal, Dianne Feinstein, and Josh Hawley as a way to undermine both Section 230 of the CDA and end-to-end encryption in the same bill. The idea is to effectively use one as a wedge against the other. Under the bill, companies will have to "earn" their 230 protections by putting in place a bunch of recommended "best practices" which can effectively be dictated by the US Attorney General -- the current holder of which, Bill Barr, has made clear that he hates end-to-end encryption and thinks it's a shame the DOJ can't spy on everyone. And this isn't just this administration: law enforcement officials, such as James Comey under Obama, were pushing this ridiculous line of thinking as well.

To be clear, the EARN IT Act might not have a huge direct impact on a company like Signal, since it doesn't rely much on 230 protections (though it might at the margins). But it's good to see Signal recognize what a terrible threat the EARN IT Act would be:

It is as though the Big Bad Wolf, after years of unsuccessfully trying to blow the brick house down, has instead introduced a legal framework that allows him to hold the three little pigs criminally responsible for being delicious and destroy the house anyway. When he is asked about this behavior, the Big Bad Wolf can credibly claim that nothing in the bill mentions huffing or puffing or the application of forceful breath to a brick-based domicile at all, but the end goal is still pretty clear to any outside observer.

However as Signal makes clear, getting rid of end-to-end encryption is much more likely to harm everyone, without providing much help to law enforcement in the first place:

Bad people will always be motivated to go the extra mile to do bad things. If easy-to-use software like Signal somehow became inaccessible, the security of millions of Americans (including elected officials and members of the armed forces) would be negatively affected. Meanwhile, criminals would just continue to use widely available (but less convenient) software to jump through hoops and keep having encrypted conversations.

There is still time to make your voice heard. We encourage US citizens to reach out to their elected officials and express their opposition to the EARN IT bill. You can find contact information for your representatives using The Electronic Frontier Foundations Action Center.

Stay safe. Stay inside. Stay encrypted.

Filed Under: communications, earn it, encryption, intermediary liability, secrecy, section 230
Companies: signal

View original post here:
Signal Speaks Out About The Evils Of The EARN IT Act - Techdirt

Aspects of cybersecurity not to overlook when working from home – Big Think

Due to the novel coronavirus situation, billions of people are currently working remotely, many for the first time in their lives. It could be out of personal fears of infection, in obedience of local social distancing regulations, or in accordance with company-wide policies, but the end result is an unexpected shift from the norm of working in the office to working from home (WFH).

Managing a workforce that has been suddenly transformed into a remote one is challenging on many levels, not least because of the need to maintain cybersecurity standards. Prior to the COVID-19 outbreak, many enterprises had yet to contemplate a mass work-from-home scenario, and they therefore lack the policies, devices, or processes to support it securely.

What's more, in recent weeks, companies have been scrambling to preserve their security profiles in the face of an uptick in malicious actors seizing the opportunity to hack corporate systems. That's the bad news. The good news is that you're not powerless. There are practical steps you can take to safeguard confidentiality and cybersecurity with a WFH workforce.

Here are a few of the basics.

Photo by Dan Nelson on Unsplash

A VPN (Virtual Private Network) is the first and most obvious way to secure your organization when employees are logging in from home. When people work from home, they use the public internet or weakly secured Wi-Fi connections to access confidential data in your central database. They also share sensitive files, offering a golden opportunity for hackers to intercept data mid-stream.

A VPN uses strong encryption to create a "tunnel" for any interactions between your employees, and between your employees and your secure corporate network.

Atlas VPN, one of the biggest VPN providers, reports that VPN use has surged in areas with high numbers of coronavirus cases, such as Italy and Spain.

Ignorance can be your biggest danger. If you're used to dealing with a secure internal network, you won't always know where your vulnerabilities and weaknesses lie when it comes to remote access.

This kind of blindness can lead quickly to data breaches that you might not even be aware of until months after the event.

To resolve this issue, use tools like Cymulate's breach and attack simulation platform, which runs simulated attacks across remote connections to assess your cybersecurity risk levels. This can help you determine the extent to which your settings, defenses, policies, and processes are effective, and where you need to make changes in order to maintain a secure organization.

Photo by Mimi Thian on Unsplash

Employees are vital to your success, but they can also cause your downfall. According to security experts at Kaspersky, 52 percent of businesses acknowledge that human error is their biggest security weakness. What's more, some 46 percent of cybersecurity incidents in 2019 were at least partially caused by careless employees.

Employees can cause data breaches in multiple ways, like failing to use a secure connection to download confidential data, forgetting to lock their screens when working in a public place, or falling for phishing emails that install malware on their devices. In addition, your employees might be the first to know about a security breach but choose to hide it out of fear of repercussions, making a bad situation worse.

It's vital to invest time and energy in employee training to ensure that everybody knows how to reduce the risk of successful hacking attacks and is not afraid to report security incidents as soon as they occur. Frequent reminders, online refresher courses, and pop-up prompts help employees take security seriously.

Access controls are a vital layer of security around your network. Losing track of who can access which platforms, data and tools means losing control of your security, and that can be disastrous.

Even in "normal" times, 70 percent of enterprises overlook issues surrounding privileged user accounts, which form unseen entrances to your organization. As the WFH situation drags on, it's even more likely that access controls will lag, opening up holes in your perimeter.

In response, use role-based access control (RBAC) to allow access to specific users based on their responsibilities and authority levels in the organization. By monitoring and strategically restricting access controls, you can further reduce the risk that human error might undermine your careful cybersecurity arrangements.
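At its core, RBAC is just a mapping from roles to permitted actions, with each user assigned a role. The sketch below shows the idea in miniature; the role, user, and permission names are hypothetical, and a production system would back this with a directory service and audit logging.

```python
# Minimal role-based access control (RBAC) sketch.
# Role, user, and permission names are illustrative only.
ROLE_PERMISSIONS = {
    "analyst": {"read_reports"},
    "engineer": {"read_reports", "deploy_code"},
    "admin": {"read_reports", "deploy_code", "manage_users"},
}

USER_ROLES = {
    "alice": "engineer",
    "bob": "analyst",
}

def is_allowed(user, action):
    """Deny by default: unknown users and unknown actions get False."""
    role = USER_ROLES.get(user)
    return role is not None and action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("alice", "deploy_code"))   # True
print(is_allowed("bob", "manage_users"))    # False
```

The deny-by-default structure is the important design choice: access holes come from forgotten grants, so anything not explicitly permitted is refused.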

Because most companies were not yet set up for remote work when the COVID-19 crisis hit, the lion's share of devices used to connect from new home offices are not owned or configured by employers.

And with employees more likely to use their own computers when working from home, endpoint attacks become even more serious. SentinelOne, an endpoint security platform, reported a 433 percent rise in endpoint attacks from late February to mid-March.

Although it can seem difficult to secure endpoints when employees are working remotely, it is possible. SentryBay's endpoint application encryption solution takes a different approach, securing apps in their own "wrappers," as opposed to working on a device security level.

Finally, weak passwords are a known gift for hackers. The problem only grows when employees work from home, as the contextual shift makes it easier for them to ignore reminders from your security team. They are also more likely to share or save credentials for faster remote access when it takes time to get a response from a newly remote security team.

If you don't already use a password manager to force employees to generate strong passwords and avoid sharing or saving credentials, now is the time to begin. CyberArk Enterprise Password Vault requires users to update passwords regularly, enforces multi-factor authentication (MFA) to reduce the chances of hackers entering your network through stolen passwords, and provides auditing and control features so you can track when someone uses or misuses an account.

Consumer password managers like LastPass and 1Password likewise offer business tiers with similar features.
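Whatever tool is chosen, the underlying requirement is that credentials come from a cryptographically secure random source rather than human invention. A minimal sketch using Python's standard secrets module:

```python
# Generate a strong random password from a CSPRNG.
# 20 characters over letters, digits, and punctuation gives
# well over 100 bits of entropy - infeasible to brute-force.
import secrets
import string

def generate_password(length=20):
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = generate_password()
print(len(pw))  # 20
```

Note the use of secrets rather than random: the latter is a statistical generator whose output can be predicted, which is exactly the property an attacker wants.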

With enterprises unprepared for mass remote working, industries worldwide could face a security nightmare. However, applying best security practices and using advanced tools to test for vulnerabilities, supervise access controls and password management, secure connections, and apply endpoint encryption can go a long way.

Making sure your employees know your security policies will help harden your attack surface, improve your cybersecurity posture, and prevent COVID-19 from causing a cybersecurity plague.


Read the rest here:
Aspects of cybersecurity not to overlook when working from home - Big Think

Covid and Crime: Upping the Fight against Global Financial Crime in the Time of Corona – PaymentsJournal

Crisis, and the uncertainty and panic that accompany it, often opens doors to criminality, inviting bad actors to prey upon our fears and anxieties. The global pandemic has unfortunately provided such an opportunity, unprecedented in modern times: allowing hackers and scammers to take advantage of distracted governments and law enforcement agencies, and of the disruption to increasingly anxious citizens' routines, to carry out financial theft and money-laundering schemes.

Interpol has even issued an official warning over fraud schemes linked to COVID-19, detailing some 30 fraud types ranging from phishing attempts to phony sales calls. To make matters worse, our disrupted routines pose a serious challenge to fraud detection tools utilized by banks that analyze patterns in payment and money movement, making it much harder to detect truly suspicious behavior within a sea of false positives.

Financial crime was already a major threat to the world's economy long before the current health crisis. The UN estimates that $1.7 trillion is laundered globally every year. Despite the vast sums that banks and financial authorities spend on tracking and combating money laundering, only 1% of laundered funds are actually identified and seized.

Financial experts and regulators agree that one of the main reasons why enormous sums of money are being stolen and laundered each year is the lack of information sharing amongst the relevant bodies, leaving each institution with blind spots. And with fraudsters emboldened by the current crisis, the need for global inter-bank cooperation to thwart such widespread financial crime is greater than ever.

However, as great as the need is for inter-bank cooperation, banks in different countries and under different jurisdictions cannot collaborate effectively if they lack the ability to exchange data. Tightening data privacy regulations like the EU's General Data Protection Regulation (GDPR), along with existing financial industry regulations on sharing pre-suspicious or suspicious information, have obstructed banks' efforts to run collaborative operations and leverage collective intelligence. Indeed, consumers, enterprises and governments justifiably fear the consequences of sharing individuals' account and transaction data, regardless of the legitimacy of banks' motivations.

The result: in the face of global networks of financial criminals and money launderers, financial institutions are effectively hamstrung, left to wage their fight on their own when information sharing could provide them a true upper hand.

Fuelled by recent advances in Privacy-Enhancing Technologies (PETs), financial crime experts and data scientists are leading groundbreaking research to devise solutions that can enable vital collaboration in the fight against financial crime while simultaneously adhering to growing data privacy regulations. Homomorphic encryption is one of these novel PETs, enabling organizations to collaborate on and analyze data while it remains encrypted, and thus protected from the third-party access that regulators and citizens alike so fear.

These innovative products, designed to help banks and financial authorities share data securely and efficiently, are becoming market-ready. For example, to prevent fraudulent payments, banks can deploy encrypted queries against each other's databases, asking questions about suspicious accounts and transactions without ever revealing the contents of these queries, as they remain encrypted throughout the investigative process. The outcome of these queries is actionable insights that enable banks to weed out false positives and to focus their efforts on highly suspicious actors, increasing the effectiveness of their investigations.
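The encrypted-query flow described above relies on full homomorphic or PET machinery. As a simplified stand-in, the sketch below shows a weaker but related idea: two banks compare account identifiers under a shared keyed hash, so matches can be found without either side exposing raw IDs in transit. The shared key and account IDs are hypothetical, and a real deployment would use homomorphic encryption or a proper private set intersection protocol rather than this simplification.

```python
# Simplified privacy-preserving matching between two banks.
# Both sides blind account IDs with HMAC under a pre-agreed key,
# then compare only the blinded values. Hypothetical key and IDs;
# real PET systems use homomorphic encryption or PSI protocols.
import hashlib
import hmac

SHARED_KEY = b"jointly-derived-demo-key"  # hypothetical pre-agreed key

def blind(account_id):
    return hmac.new(SHARED_KEY, account_id.encode(), hashlib.sha256).hexdigest()

# Bank A's watchlist and Bank B's account set, each blinded locally.
bank_a_suspects = {blind(a) for a in ["ACC-1001", "ACC-2002"]}
bank_b_accounts = {blind(a) for a in ["ACC-2002", "ACC-3003"]}

# Intersection reveals only that a match exists, not the other IDs.
matches = bank_a_suspects & bank_b_accounts
print(len(matches))  # 1
```

Each bank learns only which of its own identifiers matched; the non-matching entries of the other party stay hidden behind the keyed hash, which is the core privacy property the full-strength PET approaches provide with stronger guarantees.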

While manual information-sharing processes do currently exist, such as the one authorized under Section 314(b) of the USA PATRIOT Act, collaborative solutions based on PETs allow for more efficient, large-scale, automated information exchange, enabling effective joint investigations based on bilateral or multilateral collaborations. Such solutions also foster the establishment of consortiums between banks and law enforcement, such as the UK's Cyber Defence Alliance (CDA), an early adopter of collaborative investigation methods based on PETs.

Effective, regulation-compliant solutions for fighting widespread international financial crime are available now, and must be deployed in order to fight this unfortunate side effect of the current pandemic. In today's volatile economic climate, banks have an essential role to play in stemming the flow of this growing global financial scourge and preventing fraud and financial crime from further destabilizing global markets.

By Dr. Alon Kaufman

Read the original here:
Covid and Crime: Upping the Fight against Global Financial Crime in the Time of Corona - PaymentsJournal