The U.S. government’s been trying to stop encryption for 25 years. Will it win this time? – Tom’s Guide

SAN FRANCISCO: In the age of mass digital surveillance, how private should your data and communications be? That question lies at the heart of the encryption panel that kicked off the Enigma Conference here yesterday (Jan. 27).

Four cryptography experts discussed the origins of the first "Crypto Wars" in the 1990s; the state of the current Crypto Wars between the government and technology companies (two weeks ago, the U.S. attorney general called out Apple for not unlocking a terror suspect's iPhones); and what's at stake now for consumers, companies and governments.

"It is a basic human right for two people to talk confidentially no matter where they are. This is sacrosanct," said Jon Callas, senior technologist at the American Civil Liberties Union (ACLU) and a veteran of the fight between the U.S. government and tech companies over the use of cryptography to protect digital communications in the 1990s.

It may be a human right, but most countries have not enshrined confidential conversations in their legal codes. What started as a resurgent fight against government surveillance in the wake of the documents leaked by Edward Snowden in 2013 has now bloomed into a larger struggle over who gets to encrypt communications and data.

In Snowden's wake, end-to-end encrypted messaging has become far more accessible, while Apple and Google have introduced on-device encrypted data storage by default. But access to those services could soon depend on which country you are in and whose digital services you're using.
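The end-to-end principle at stake here can be sketched in a few lines. This is a toy illustration only (a one-time pad, not a real messaging protocol): the two endpoints share a key, and any relay in the middle sees only ciphertext it cannot read.

```python
import secrets

# Toy sketch of end-to-end encryption: only the endpoints hold the key,
# so the service relaying the message sees only ciphertext. A one-time
# pad is used purely for illustration; real apps use vetted protocols.

def e2e_encrypt(key: bytes, plaintext: bytes) -> bytes:
    assert len(key) >= len(plaintext)
    return bytes(k ^ p for k, p in zip(key, plaintext))

def e2e_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    return e2e_encrypt(key, ciphertext)  # XOR is its own inverse

message = b"meet at noon"
key = secrets.token_bytes(len(message))  # shared only by the two endpoints

ciphertext = e2e_encrypt(key, message)            # all the relay ever sees
assert e2e_decrypt(key, ciphertext) == message    # the recipient recovers it
```

A mandated backdoor, in these terms, is any arrangement that gives a third party a copy of `key` or an equivalent way to recover `message` from `ciphertext`.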

The 1990s Crypto Wars centered on the Clipper Chip, a hardware chip designed to protect phone users' calls from surveillance unless the government wanted to listen in. It was a "backdoor" that was going to be built into every cellphone.

But in 1994, cryptographer Matt Blaze, one of the panelists at yesterday's Enigma Conference talk, exposed security vulnerabilities in the Clipper Chip. Experts spent the next three years finding even more vulnerabilities in the Clipper Chip and fighting in court to prevent its inclusion in devices.

Since the commercial internet was in its infancy at the time, legal and computer security experts had to take on faith that the World Wide Web would eventually be important, Blaze said. With the publication in 1997 of a report on the risks of key recovery that Blaze co-authored, most U.S. federal agencies stopped fighting against the cryptographers.

"The FBI became the only organization arguing that computer security was too good," Blaze said.

Today, government access to encrypted communications through a mandated backdoor is not the law of the land in any single country. But laws requiring varying degrees of government access to encrypted communications are becoming more common, said panelist Riana Pfefferkorn, associate director of surveillance and cybersecurity at the Stanford Law School Center for Internet and Society.

Following the panel discussion, Pfefferkorn said she sees a growing trend, especially in the United States and India, to tie serious liability issues, in both criminal and civil law, to the encryption debate.

"In the U.S., it's child pornography. In India, it's the threat of mob violence," Pfefferkorn said. "They seem like two separate issues, but they're a way of encouraging the regulation of encryption without regulating encryption.

"They're going to induce providers to stop deploying end-to-end encryption lest they face ruinous litigation," she added. "It feels like a bait-and-switch."

Daniel Weitzner, the founding director of the Internet Policy Research Initiative at the Massachusetts Institute of Technology, noted during the panel that India's proposed changes to its intermediary liability law would make internet communications providers ("intermediaries") legally responsible for the actions and speech of their users.

He said India's proposals are similar to changes demanded by U.S. senators, including the EARN IT Act of 2019 authored by Senators Lindsey Graham (R-South Carolina) and Richard Blumenthal (D-Connecticut). Weitzner added that there are other countries with even tougher tech-liability laws on the books.

The United Kingdom passed the Investigatory Powers Act in 2016, also known as the Snoopers' Charter. It lets the British government issue statutorily vague Technical Capability Notices that let it mandate encryption backdoors or otherwise force companies to stop using end-to-end encryption. There's no requirement that the British government ever reveal the results of the evaluation process guiding the issuance of the notices.

Australia's Assistance and Access Bill from 2018 is similar, except that it specifically bans the introduction of systemic vulnerabilities into the product in question. What's not clear is another question raised by the legal mandate: What's the difference between a technical vulnerability and a legally mandated software backdoor?

As technology itself has grown more complicated and nuanced since the 1990s, so has the burden of responsibility facing its advocates. Proposals to change encryption should be tested "multiple times" strategically and technically, argued the Carnegie Encryption Working Group in September 2019.

And Susan Landau and Denis McDonough said in a column for The Hill that it would be wiser for the tech community to find common ground with governments over data at rest, such as data stored on a locked iPhone, instead of the more contentious data in transit embodied by end-to-end encrypted messaging apps.

Ultimately, the future of the consumer use of encryption is likely to depend heavily on the developers and companies that make it available.

They could split their products, offering different levels of encryption for different countries and regions, as Netscape did in the 1990s, said Pfefferkorn. Or they could refuse to offer encrypted products in countries or regions that demand weaker encryption or backdoor access.

"Or," Pfefferkorn said, "it could be broken for everyone."


Preventing hospital readmissions with the help of artificial intelligence – University of Virginia The Cavalier Daily

The University Health System's data science team recently advanced to the next stage of a nationwide competition to apply artificial intelligence to hospital readmissions, a persistent and costly issue. Sponsored by the Centers for Medicare and Medicaid Services, the inaugural Artificial Intelligence Health Outcomes Challenge initially received hundreds of applications. CMS chose only 25 submissions, the University's among them, to execute their proposed strategies.

A few years ago, in order to significantly reduce unplanned readmissions to the hospital, the University initiated efforts to develop a cutting-edge yet easily accessible solution to this widespread problem. Bommae Kim, senior data scientist for the University Health System, began pursuing remedies for the epidemic of readmissions in 2018.

"Usually, after a patient was discharged, they couldn't manage their disease for some reason, so we're trying to figure out what that reason is and help," Kim said.

The University Health System's data science team found that three percent of patients at the University account for 30 percent of readmissions within the first 30 days following release from the hospital, while the majority of the remaining 70 percent return within a year. After identifying the need to decrease such adverse events, data scientists in the University's Health System, such as Jason Adams, turned to artificial intelligence to target key factors that contribute to a patient returning unnecessarily to the hospital.

"The purpose is to take this amount of information and, in an automated way, tell that a person is at risk and what course of action can best help that patient," Adams said.

Kim acts as project leader alongside a team of data scientists and information technology personnel. Overseen by Jon Michel, director of data science for the University Health System, the researchers produce models that help predict the likelihood of readmission and subsequently provide actionable advice for physicians.

Only a year or so later, in 2019, CMS announced a competition to tackle the same challenge. CMS directed participants to employ the computing power of artificial intelligence to construct a model that accurately and efficiently flags patients at risk of returning to the hospital for non-routine treatments. More than 300 applicants submitted proposals during the launch stage of the challenge.

The University was one of only 25 groups selected to advance to the next stage, vying with organizations such as IBM, Deloitte and the Mayo Clinic for the $1 million grand prize and utilization by the CMS Innovation Center to determine payment and service delivery strategies.

"We're doing this for our U.Va. patients, but it would be nice to win the competition because then we can deploy our approach at the national level," Kim said. "We believe in our approach."

For this phase of the competition, CMS distributed Medicare claims data to the remaining teams. Claims from all across the country provide the opportunity to fine-tune the University's model with data from outside the University Health System. According to Application Systems Analyst Programmer Angela Saunders, the supplemental details will prove beneficial for the University's models.

Saunders did point out challenges with the millions of rows of data, which require extensive resources to simply store in an environment suitable for manipulation. Furthermore, inconsistencies lingered in the dataset from year to year, requiring the feature engineering team to sift and sort through the tables, standardizing entries and column headers, which detail the traits associated with each claimant.

"It's not just a little data," Saunders said. "We have exhausted a lot of resources just to get the data to consistency. Each year, things change just a little bit, and so just getting it into a consistent format is a lot of the battle."
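The header-standardization step Saunders describes can be sketched as a mapping from each year's variant column names to one canonical schema. The column names below are invented for illustration; the actual CMS field names are not public in the article.

```python
# Hypothetical sketch of standardizing column headers across yearly claim
# extracts: map every year-specific variant to one canonical name before
# the tables are combined. All field names here are invented examples.

CANONICAL = {
    "bene_age": "age", "beneficiary_age": "age",
    "clm_amt": "claim_amount", "claim_amt": "claim_amount",
    "er_visits": "ed_visits", "ed_visit_cnt": "ed_visits",
}

def standardize_headers(row: dict) -> dict:
    """Rename a claim row's keys to canonical names, keeping unknowns as-is."""
    return {CANONICAL.get(k.lower(), k.lower()): v for k, v in row.items()}

claims_2017 = {"Bene_Age": 74, "Clm_Amt": 1250.0, "ER_Visits": 2}
claims_2018 = {"beneficiary_age": 74, "claim_amt": 1250.0, "ed_visit_cnt": 2}

# The same patient record, extracted in different years, now lines up.
assert standardize_headers(claims_2017) == standardize_headers(claims_2018)
```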

Based on the team's assessments, much of the feature engineering portion of the project (at least the preliminary round of it) has been completed. The next step involves transporting data to Rivanna, the University's high-performance computing system, and fitting predictive models to the data. Data scientist Rupesh Silwal, who helps design and evaluate multiple iterations of the modeling architecture, noted the importance of not only systemizing the entries, but also of ensuring sensitive medical data remains anonymous.

"The feature engineering team has cleaned the data, made sure everything makes sense from year to year and that all of the sensitive information is scrubbed so we can move the data to this other computing infrastructure," Silwal said. "Part of our effort has been focused on getting the data in there and using it to set up a modeling environment to see if we can make predictions."

Specifics regarding modeling techniques and factors employed in creating the University's unique solution could not be revealed at this time, due to the proprietary nature of the ongoing competition. In broad terms, factors such as past utilization of certain hospital services, like the Emergency Department, or chronic conditions contribute to the initial formulation of the model, as they are indicators of high potential for readmission, data scientist Adis Ljubovic said.

"Those are fairly well-known and we're using that as the baseline, but we also have the secret-sauce ones that are preventable," Ljubovic said.

Other variables intended to capture financial, transportation and lifestyle information for patients also augment the standard determinants of readmission, while electronic medical records from the University provide documentation of trends in the University's own health system.
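A model of the kind described, combining baseline utilization factors into a readmission probability, might look like the logistic-style sketch below. The features and weights are invented for illustration; the team's actual model and factors are proprietary.

```python
import math

# Illustrative only: a logistic-style readmission risk score built from
# the kinds of baseline factors the article names (past emergency-
# department use, chronic conditions, prior admissions). Weights are
# invented; the real model is proprietary.

WEIGHTS = {"ed_visits_last_year": 0.45, "chronic_conditions": 0.6,
           "prior_admissions": 0.8}
BIAS = -3.0

def readmission_risk(patient: dict) -> float:
    """Return an estimated probability of 30-day readmission."""
    z = BIAS + sum(WEIGHTS[f] * patient.get(f, 0) for f in WEIGHTS)
    return 1 / (1 + math.exp(-z))

low = {"ed_visits_last_year": 0, "chronic_conditions": 1, "prior_admissions": 0}
high = {"ed_visits_last_year": 4, "chronic_conditions": 3, "prior_admissions": 2}
assert readmission_risk(low) < readmission_risk(high)
```

In practice such weights would be fitted to the claims data rather than chosen by hand, and the output would be paired with the actionable advice for physicians the article mentions.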

Another distinctive aspect of the University's proposal is its commitment to a solution that clinicians accept. Senior data scientist John Ainsworth and Ljubovic, along with other members of the University's project, assert that the healthcare industry generally adopts a conservative mindset with regard to artificial intelligence modeling in hospitals. However, the University Health System's data scientists have consulted with doctors at the University hospital about introducing tools physicians trust and can easily adopt.

"Data science techniques bring with them the potential for accuracy, for bringing in and ingesting larger datasets," Ainsworth said. "The richness of the data gets recorded, and putting up the information in front of clinicians that can help them take meaningful action is what we're going for. If we can ... give them some sense of where preventative strategies might lie, that can support them in their goal of caring for patients."

Several members of the team agreed a complex issue like hospital readmission calls for a collective approach. In the University Health System's data science department, that can be a rare occurrence, several data scientists remarked, as their separate assignments often occupy most of their time. Senior Business Intelligence Developer Manikesh Iruku expressed appreciation for the chance to learn more about data science techniques, and others shared similar experiences when it came to exploring different subfields of data science.

Saunders and data scientist Valentina Baljak emphasized the confidence this collaboration has given the group to tackle new tasks.

"Frequently for us, we have our own projects and it's a one-person project," Baljak said. "Occasionally you collaborate with someone, but I don't really think we had a project that involved all of us at the same time. That has been a great experience."

Currently, competitors are finalizing their project packages to submit to CMS in February. CMS plans to winnow the field down to the seven best proposals by April. Regardless of the outcome, the University's team plans to put their results and newly developed models into practice within the University's Health System.

"In particular for healthcare, in some ways the best is yet to come in the data science world," Ainsworth said. "The future is bright for data science in healthcare."


You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It’s Making the World a Weirder Place – Chemistry World

Janelle Shane, Wildfire, 2019 | 272pp | £20 | ISBN 9781472268990


How many giraffes can you see? Too many to count. This seems unlikely unless you are an artificial intelligence algorithm trained on a dataset in which giraffes were somewhat overrepresented.

As Janelle Shane explains in her book on how artificial intelligence works and why it's making the world a weirder place, accidental over-inclusion of giraffes in image collections used to train AIs is exceedingly likely. The presence of a giraffe is so rare and exciting that you are almost certain to take a photo.
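Shane's giraffe point is really about sampling bias: a model learns the frequencies of its training collection, not of the world. A toy sketch (with made-up counts) shows how a photo dataset's bias becomes the model's expectation:

```python
from collections import Counter

# Toy illustration of dataset bias: giraffes are rare in the world but
# common in the photos people bother to take, so a model trained on the
# photos learns to expect giraffes far too often. Counts are invented.

real_world = ["no_giraffe"] * 999 + ["giraffe"] * 1
photo_dataset = ["no_giraffe"] * 60 + ["giraffe"] * 40  # biased sample

def learned_rate(labels, label="giraffe"):
    """Base rate a frequency-learning model would pick up from this data."""
    return Counter(labels)[label] / len(labels)

assert learned_rate(real_world) == 0.001   # true rarity
assert learned_rate(photo_dataset) == 0.4  # what the model comes to expect
```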

An optical physicist, Shane documents such eccentricities of AI on her blog and Twitter feed, which she has now expanded into this delightful book. The title comes from an experiment she ran to see if an AI could generate human-acceptable pick-up lines. The results were, as you can see, mixed.

This is the crux of Shane's highly compelling argument about AI: its danger doesn't come from exceeding human intelligence (human-like AI remains firmly within the realm of science fiction) but from the very weird things that narrowly focused algorithms do, like the self-driving car algorithm that identified a sideways-on lorry as a road sign, causing a fatal accident. As artificial intelligence becomes ever more deeply embedded in our modern digital lives, it behoves us all to understand it better and know its limitations and failings.

I really loved this book, and, if you like your serious science accompanied by very cute cartoon illustrations, you will too. Shane's explanations are not only laugh-out-loud hilarious but also remarkably accessible. Reading the book moved me to go do my own experiments in AI weirdness, like playing with predictive text on my smartphone and chatting with virtual chatbots about giraffes. I feel much better informed as a result.

This book features in our book club podcast, which you can listen to here.


Cambridge Science Festival examines the effects and ethics of artificial intelligence – Cambridge Network

Artificial intelligence features heavily as part of a series of events that cover new technologies at the 2020 Cambridge Science Festival (9-22 March), which is run by the University of Cambridge.

Hype around artificial intelligence, big data and machine learning has reached a fever pitch. Drones, driverless cars, films portraying robots that look and think like humans: today, intelligent machines are present in almost all walks of life.

During the first week of the Festival, several events look at how these new technologies are changing us and our world. In AI and society: the thinking machines (9 March), Dr Mateja Jamnik, of the Department of Computer Science and Technology, considers our future and asks: What exactly is AI? What are the main drivers for the amazing recent progress in AI? How is AI going to affect our lives? And could AI machines become smarter than us? She answers these questions from a scientific perspective and talks about building AI systems that capture some of our informal and intuitive human thinking. Dr Jamnik demonstrates a few applications of this work, presents some opportunities that it opens, and considers the ethical implications of building intelligent technology.

Artificial intelligence has also created a lot of buzz about the future of work. In From policing to fashion: how the use of artificial intelligence is shaping our work (10 March), Alentina Vardanyan, Cambridge Judge Business School, and Lauren Waardenburg, KIN Center for Digital Innovation, Amsterdam, discuss the social and psychological implications of AI, from reshaping the fashion design process to predictive policing.

Speaking ahead of the event, Lauren Waardenburg said: "Predictive policing is quite a new phenomenon and gives one of the first examples of real-world data translators, which is quite a new and upcoming type of work that many organisations are interested in. However, there are unintended consequences for work and the use of AI if an organisation doesn't consider the large influence such data translators can have."

"Similarly, AI in fashion is also a new phenomenon. The feedback of an AI system changes the way designers and stylists create and how they interpret their creative role in that process. The suggestions from the AI system put constraints on what designers can create. For example, the recommendations may be very specific in suggesting the colour palette, textile and style of the garment. This level of nuanced guidance not only limits what they can create, but it also puts pressure on their self-identification as a creative person."

The technology we encounter and use daily changes at a pace that is hard for us to truly take stock of, with every new device release, software update and new social media platform creating ripple effects. In How is tech changing how we work, think and feel? (14 March), a panel of technologists look at current and near-present mainstream technology to better understand how we think and feel about data and communication. The panel brings together Dr David Stillwell, Lecturer in Big Data Analytics and Quantitative Social Science at Cambridge Judge Business School; Tyler Shores, PhD researcher at the Faculty of Education; Anu Hautalampi, head of social media for the University of Cambridge; and Dex Torricke-Barton, director of the Brunswick Group and former speechwriter and communications aide for Mark Zuckerberg, Elon Musk, Eric Schmidt and the United Nations. They discuss some of the data and trends that illustrate the impact tech has upon our personal, social and emotional lives, as well as ways forward and what the near future holds.

Tyler Shores commented: "One thing is clear: the challenges that we face that come as a result of technology do not necessarily have solutions via other forms of technology, and there can be tremendous value for all of us in reframing how we think about how and why we use digital technology in the ways that we do."

The second week of the Festival considers the ethical concerns of AI. In Can we regulate the internet? (16 March), Dr Jennifer Cobbe, The Trust & Technology Initiative, Professor John Naughton, Centre for Research in the Arts, Social Sciences and Humanities, and Dr David Erdos, Faculty of Law, ask: How can we combat disinformation online? Should internet platforms be responsible for what happens on their services? Are platforms beyond the reach of the law? Is it too late to regulate the internet? They review current research on internet regulation, as well as ongoing government proposals and EU policy discussions for regulating internet platforms. One argument put forward is that regulating internet platforms is both possible and necessary.

When you think of artificial intelligence, do you get excited about its potential and all the new possibilities? Or rather, do you have concerns about AI and how it will change the world as we know it? In Artificial intelligence, the human brain and neuroethics (18 March), Tom Feilden, BBC Radio 4 and Professor Barbara Sahakian, Department of Psychiatry, discuss the ethical concerns.

In Imaging and vision in the age of artificial intelligence (19 March), Dr Anders Hansen, Department of Applied Mathematics and Theoretical Physics, also examines the ethical concerns surrounding AI. He discusses new developments in AI and demonstrates how systems designed to replace human vision and decision processes can behave in very non-human ways.

Dr Hansen said: "AI and humans behave very differently given visual inputs. A human doctor presented with two medical images that, to the human eye, are identical will provide the same diagnosis for both cases. The AI, however, may on the same images give 99.9% confidence that the patient is ill based on one image, whereas on the other image (that looks identical) give 99.9% confidence that the patient is well.

"Such examples demonstrate that the reasoning the AI is doing is completely different to the human's. The paradox is that when tested on big data sets, the AI is as good as a human doctor when it comes to predicting the correct diagnosis.

"Given the non-human behaviour that cannot be explained, is it safe to use AI in automated diagnosis in medicine, and should it be implemented in the healthcare sector? If so, should patients be informed about the non-human behaviour and be able to choose between AI and doctors?"
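The behaviour Dr Hansen describes can be reproduced numerically in miniature: a classifier with very large weights can swing from near-certain "ill" to near-certain "well" on an input change far too small for a human to perceive. The model and values below are invented for illustration.

```python
import math

# Minimal numeric sketch of Hansen's scenario: with an extreme weight,
# a tiny (humanly imperceptible) change in one input value flips a
# logistic classifier between ~99.9% "ill" and ~99.9% "well".
# The weight, bias and "pixel" values are invented.

def confidence_ill(pixel: float, weight: float = 20000.0,
                   bias: float = -10000.0) -> float:
    return 1 / (1 + math.exp(-(weight * pixel + bias)))

img_a = 0.5005  # the two "images" differ by 0.001 in a single value
img_b = 0.4995

assert confidence_ill(img_a) > 0.999  # confidently diagnosed ill
assert confidence_ill(img_b) < 0.001  # "identical" image: confidently well
```

Human perception is robust to changes this small; a model with no such robustness constraint need not be, which is the non-human reasoning the quote points to.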

A related event explores the possibilities of creating AI that acts in more human ways. In Developing artificial minds: joint attention and robotics (21 March), Dr Mike Wilby, lecturer in Philosophy at Anglia Ruskin University, describes how we might develop our distinctive suite of social skills in artificial systems to create benign AI.

"One of the biggest challenges we face is to ensure that AI is integrated into our lives such that, in addition to being intelligent and partially autonomous, AI is also transparent, trustworthy, responsive and beneficial," Dr Wilby said.

He believes that the best way to achieve this would be to integrate it into human worlds in a way that mirrors the structure of human development. Humans possess a distinctive suite of social skills that partly explains the uniquely complex and cumulative nature of the societies and cultures we live within. These skills include the capacity for collaborative plans, joint attention, joint action, as well as the learning of norms of behaviour.

Based on recent ideas and developments within Philosophy, AI and Developmental Psychology, Dr Wilby examines how these skills develop in human infants and children and suggests that this gives us an insight into how we might be able to develop benign AI that would be intelligent, collaborative, integrated and benevolent.


Bookings open on Monday 10 February at 11am.



Artificial Intelligence Could Revolutionize the Study of Jewish Law. Is That a Good Thing? – Mosaic

As early as the 1960s, scholars and technicians began the task of digitizing halakhic literature, making it possible to search quickly through an ever-growing group of texts. Technological advances since then have improved the quality of searches, sped up the pace of digitization, and made such tools accessible to anyone with a smartphone. Now, write Moshe Koppel and Avi Shmidman, machine learning and artificial intelligence can do much more: they can make texts penetrable to the lay reader by adding vowel-markings and punctuation while spelling out abbreviations, create critical editions by comparing early editions and manuscripts, and even compose lists of sources on a single topic.

After explaining the vast potential created by these new technologies, Koppel and Shmidman discuss both their benefits and their costs, beginning with the fact that a layperson will soon be able to navigate a textual tradition with an ease previously reserved for the sophisticated scholar:

On the one hand, this [change] is a blessing: it broadens the circle of those participating in one of the defining activities of Judaism, [namely Torah study], including those on the geographic or social periphery of Jewish life. [On the other hand], the traditional process of transmission of Torah from teacher to student and from generation to generation is such that much more than raw text or hard information is transmitted. Subtleties of emphasis and attitudewhich topics are central, what is a legitimate question, who is an authority, what is the appropriate degree of deference to such authorities, which values should be emphasized and which honored only in the breach, when must exceptions be made, and much moreare transmitted as well.

All this could be lost, or at least greatly undervalued, as the transmission process is partially short-circuited by technology; indeed, signs of this phenomenon are already evident with the availability of many Jewish texts on the Internet.

And moving further into the future, what if computer scientists could create a sort of robot rabbi, using the same sort of artificial intelligence that has been used to defeat the greatest chess masters or Jeopardy champions?

[S]uch a tool could very well turn out to be corrosive, and for a number of reasons. First, programs must define raw inputs upfront, and these inputs must be limited to those that are somehow measurable. The difficult-to-measure human elements that a competent [halakhic authority] would take into account would likely be ignored by such programs. Second, the study of halakhah might be reduced from an engaging and immersive experience to a mechanical process with little grip on the soul.

Third, just as habitual use of navigation tools like Waze diminishes our navigating skills, habitual use of digital tools for [answering questions of Jewish law] is likely to dry up our halakhic intuitions. In fact, framing halakhah as nothing but a programmable function that maps situations to outputs like do/don't is likely to reduce it in our minds from an exalted heritage to one arbitrary function among many theoretically possible ones.
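The reduction Koppel and Shmidman warn against can be made concrete with a deliberately crude sketch: halakhah recast as a lookup from measurable inputs to do/don't outputs. The cases here are invented illustrations; the point is everything such a table has no way to represent.

```python
# A deliberately crude sketch of the reduction the authors warn about:
# Jewish law recast as a function from measurable inputs to do/don't.
# The entries are illustrative only; what matters is what the table
# omits (context, authority, the hard-to-measure human elements).

RULINGS = {
    ("carry", "shabbat", "public_domain"): "don't",
    ("carry", "weekday", "public_domain"): "do",
}

def robot_rabbi(action: str, day: str, place: str) -> str:
    # Unmeasured human factors simply have no way into this function.
    return RULINGS.get((action, day, place), "consult a human authority")

assert robot_rabbi("carry", "shabbat", "public_domain") == "don't"
assert robot_rabbi("eat", "festival", "home") == "consult a human authority"
```

Everything the authors say is transmitted from teacher to student, emphasis, legitimacy of questions, deference to authority, is precisely what cannot appear in `RULINGS`.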

Read more at Lehrhaus

More about: Artificial Intelligence, Halakhah, Judaism, Technology


Manufacturing Companies Struggling with Artificial Intelligence Implementation – Water Technology Online

While manufacturing companies see the value in implementing artificial intelligence (AI) solutions, many are struggling to deliver clear results and are reevaluating their strategy, according to a new report. The report was commissioned by Plutoshift, a provider of automated performance monitoring for industrial workflows.

The findings revealed that almost two-thirds (61%) of manufacturing companies said they need to reevaluate the way they implement AI projects. The report, titled "Breaking Ground on Implementing AI," uncovered that while companies are making progress with their AI initiatives, many planning and implementation struggles remain, from defining realistic outcomes, to data collection and maturity, to managing budget scope and more.

To gauge the progress and process of how manufacturing companies are implementing AI, and whether or not they are satisfied with their AI initiatives, Plutoshift surveyed 250 manufacturing professionals in October 2019 with visibility into their company's AI programs.

A major reason companies are rethinking their AI implementation plans is a lack of the data infrastructure needed to fully use AI: 84% of respondents said their companies cannot automatically and continuously act on their data intelligence.

The report uncovered further foundational challenges with successful AI implementation, including that 72% of manufacturing companies said it took more time than anticipated for their company to implement the technical/data collection infrastructure needed to take advantage of the benefits of AI.

"Companies are forging ahead with the adoption of AI at an enterprise level," said Prateek Joshi, CEO and founder of Plutoshift. "But despite the progress that some companies are making with their AI implementations, the reality that's often underreported is that AI initiatives are loosely defined. Companies in the middle of this transformation usually lack the proper technology and data infrastructure. In the end, these implementations can fail to meet expectations. The insights in this report show us that companies would strongly benefit by taking a more measured and grounded approach towards implementing AI."



Latinos, Alzheimer’s and Artificial Intelligence – AL DIA News

Alzheimer's is one of the fastest-growing causes of death in the United States. More than 5.8 million Americans currently have the disease. By 2050, nearly 14 million people in the United States over the age of 65 could be living with the disease unless scientists develop new approaches to prevent or cure it.

The limited inclusion of Latinos and African Americans in research will only worsen the outlook, although successful efforts across the country could help us keep up with the disease.

The face of Alzheimer's disease is changing, mainly because the number one risk factor is old age. By 2030, the number of Latinos over 65 will have grown by 224 percent compared to 65 percent among non-Hispanic whites.

Senator Amy Klobuchar, in her 2019 election program, stated that by 2030 Latinos and African Americans will constitute nearly 40% of the 8.4 million Americans living with Alzheimer's.

Much of this research has been conducted by the organization UsAgainstAlzheimer's, which claims that less than 4% of studies in the United States focus on communities of color. Overall, only 5% of the reviewed studies included a strategy for recruiting underrepresented populations such as Latinos or African Americans. The studies surprisingly overlook the fact that African Americans are two to three times more likely to develop Alzheimer's than non-Hispanic whites, while Latinos are 1.5 times more likely.

Similarly, the growing impact of the disease increases costs in Latino families. For example, the total cost of Alzheimer's disease in the Latino community will reach $2.3 trillion by 2060 if the disease's trajectory continues on its current course.

Artificial Intelligence: A possibility

A team of researchers led by UC Davis Health professor Brittany Dugger received a $3.8 million grant from the National Institute on Aging (NIA) to help define the neuropathology of Alzheimer's disease in Hispanic cohorts. The grant will fund the first large-scale initiative to present a detailed description of the brain manifestations of Alzheimer's disease in people of Mexican, Cuban, Puerto Rican, and Dominican descent.

"There is little information on the pathology of dementia affecting people of minority groups, especially for people of Mexican, Cuban, Puerto Rican, and Dominican descent," Brittany Dugger said in a news release.

The research will include the study of post-mortem brain tissue donated by more than 100 people from the diverse groups mentioned above.

In partnership with Michael Keizer of UC San Francisco, the researchers will use artificial intelligence and machine learning to locate different pathologies in the brain and thus define the neuropathological landscape of Alzheimer's disease.
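The supervised-classification idea behind that machine-learning step can be illustrated with a minimal sketch. This is a hypothetical example on synthetic data; the 16-dimensional "patch features," the feature values, and the choice of logistic regression are all assumptions for illustration, not the researchers' actual pipeline.

```python
# Hypothetical sketch only: synthetic features, not the UC Davis/UCSF pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
# Pretend each tissue patch is summarized by 16 image features;
# "pathology" patches are assumed to score higher on these features.
pathology = rng.normal(loc=0.7, scale=0.1, size=(n, 16))
healthy = rng.normal(loc=0.3, scale=0.1, size=(n, 16))
X = np.vstack([pathology, healthy])
y = np.array([1] * n + [0] * n)  # 1 = pathology, 0 = healthy

clf = LogisticRegression().fit(X, y)
acc = clf.score(X, y)
```

In practice a system like the one described would work on whole-slide microscopy images with far richer models; the sketch only conveys the idea of learning to flag pathology-like tissue regions from labeled examples.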

The study's findings will help develop specific disease profiles for individuals. This profile will establish a basis for precise medical research to obtain the correct treatment for the right patient at the right time. This approach to medicine reduces disease disparities and advances medicine for all communities.

Read more here:
Latinos, Alzheimer's and Artificial Intelligence - AL DIA News

Detailed Analysis and Report on Topological Quantum Computing Market By Microsoft, IBM, Google. – New Day Live

The Topological Quantum Computing market has been changing all over the world, and the strong growth seen so far is expected to continue through 2026. In this report, we provide a comprehensive valuation of the marketplace. The growth of the market is driven by key factors such as manufacturing activity, market risks, acquisitions, new trends, and the assessment of new technologies and their implementation. This report covers all of the aspects required to gain a complete understanding of pre-market conditions and current conditions, as well as a well-measured forecast.

The report has been segmented by essential aspects such as sales, revenue, and market size, along with other factors posting good growth numbers in the market.

Top companies covered in this report: Microsoft, IBM, Google, D-Wave Systems, Airbus, Raytheon, Intel, Hewlett Packard, Alibaba Quantum Computing Laboratory, IonQ.

Get Sample PDF Brochure @ https://www.reportsintellect.com/sample-request/504676

Description:

In this report, we provide our readers with the most up-to-date data on the Topological Quantum Computing market. As international markets have changed very rapidly over the past few years, they have become harder to grasp, so our analysts have prepared a detailed report that takes the market's history into account, along with a detailed forecast covering market issues and their solutions.

The report focuses on the key aspects of the market to ensure maximum benefit and growth potential for our readers, and our extensive analysis of the market will help them achieve this more efficiently. The report was prepared using primary as well as secondary analysis, in accordance with Porter's five forces analysis, which has been a game-changer for many in the Topological Quantum Computing market. The research sources and tools we use are highly reliable and trustworthy. The report offers effective guidelines and recommendations for players to secure a position of strength in the Topological Quantum Computing market. Newly arrived players can greatly increase their growth potential, and the market's current leaders can extend their dominance, by using our report.

Topological Quantum Computing Market Type Coverage:

Software, Hardware, Service

Topological Quantum Computing Market Application Coverage:

Civilian, Business, Environmental, National Security, Others

Market Segment by Regions, regional analysis covers

North America (United States, Canada, Mexico)

Asia-Pacific (China, Japan, Korea, India, Southeast Asia)

South America (Brazil, Argentina, Colombia, etc.)

Europe, Middle East and Africa (Germany, France, UK, Russia and Italy, Saudi Arabia, UAE, Egypt, Nigeria, South Africa)

Discount PDF Brochure @ https://www.reportsintellect.com/discount-request/504676

Competition analysis

As markets have advanced, competition has increased manifold, completely changing the way competition is perceived and dealt with. In our report, we provide a complete analysis of the competition: how the big players in the Topological Quantum Computing market have been adapting to new techniques, and what problems they are facing.

Our report, which includes a detailed description of mergers and acquisitions, will give you a complete picture of the market competition, along with extensive knowledge of how to get ahead and grow in the market.

Why us:

Reasons to buy:

About Us:

Reports Intellect is your one-stop solution for everything related to market research and market intelligence. We understand the importance of market intelligence and its need in today's competitive world.

Our professional team works hard to fetch the most authentic research reports, backed with impeccable data figures that guarantee outstanding results every time. So whether it is the latest report from the researchers or a custom requirement, our team is here to help you in the best possible way.

Contact Us:

sales@reportsintellect.com
Phone No: +1-706-996-2486
US Address: 225 Peachtree Street NE, Suite 400, Atlanta, GA 30303

Read the original here:
Detailed Analysis and Report on Topological Quantum Computing Market By Microsoft, IBM, Google. - New Day Live

Superconductivity? Stress is the Answer For Once – Cornell University The Cornell Daily Sun

We've been told all of our lives to avoid stress, but in physics, stress might just be the key to unlocking the secret of superconductivity.

Superconductivity, the phenomenon in which the electrical resistance of a material suddenly drops to zero when cooled below a certain temperature, has been a scientific curiosity ever since its discovery in the early 20th century.
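That defining signature, resistance dropping to zero below a critical temperature, can be sketched numerically. The curve below is synthetic and purely illustrative; the threshold, temperatures, and the 0.4 K transition point are assumptions for the example, not measured data from the study.

```python
import numpy as np

def critical_temperature(temps, resistances, r_threshold=1e-9):
    """Estimate Tc as the highest temperature at which resistance has
    dropped to (effectively) zero. Returns None if it never does."""
    superconducting = resistances < r_threshold
    if not superconducting.any():
        return None
    return temps[superconducting].max()

# Synthetic resistance curve: zero below 0.4 K, finite above.
temps = np.linspace(0.1, 2.0, 20)
resistances = np.where(temps < 0.4, 0.0, 1e-3 * temps)
tc = critical_temperature(temps, resistances)
```

Real measurements involve noise and a finite transition width, so experimenters typically define Tc by a criterion such as the temperature at which resistance falls to some fraction of its normal-state value rather than a hard zero.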

A group of Cornell researchers led by Prof. Katja Nowack, physics, published a paper on Oct. 11 in Science that investigates how physically deforming a material can cause it to show traits of partial superconductivity.

The interest first arose from the work of collaborator Philip Moll, a researcher at the Institute of Material Science and Engineering at École Polytechnique Fédérale de Lausanne in Switzerland, during his investigation of the superconductive properties of the metal cerium iridium indium-5 (CeIrIn5).

In an attempt to establish superconductivity, Moll discovered that the critical temperature changed depending on the placement of the wire contacts. This conflicts directly with the conventional view of superconductivity, which holds that the entire material must be either completely and uniformly superconductive, or not at all.

Nowack learned of these strange results from Prof. Brad Ramshaw, physics, and decided to investigate them using a device called a superconducting quantum interference device, which can measure local resistivities of small areas.

"What we found in the end was that in these little microstructures, superconductivity doesn't uniformly form in the device, but forms in a very spatially modulated, nonuniform fashion. So there's these little puddles of superconductivity in some parts of the device, and other parts stay non-superconductive down to much lower temperatures," Nowack said.

They also discovered that these superconductive puddles correlated with the varying amounts of physical stress produced during the creation of the samples. Moll's team had created the samples by gluing CeIrIn5 crystals to a sapphire substrate and etching patterns into them using a focused ion beam, similar to a mini-sandblaster.

According to Nowack, CeIrIn5 shrinks by about 0.3 percent as it cools due to its metallic properties, whereas sapphire does not shrink at all. The resulting strain seemed to be causing the irregular superconductivity noticed by Moll.
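The strain figure implied here is simple arithmetic. Assuming the substrate is rigid and the glue transfers the deformation fully (idealizations for illustration, not claims from the paper), the mismatch strain is just the difference between the two contractions:

```python
# Back-of-the-envelope estimate; assumes a perfectly rigid substrate
# and ideal strain transfer through the glue.

def mismatch_strain(sample_contraction: float, substrate_contraction: float) -> float:
    """Strain imposed on a sample glued to a substrate when the two
    contract by different fractions on cooling."""
    return sample_contraction - substrate_contraction

# Figures from the article: CeIrIn5 shrinks ~0.3%, sapphire ~0%.
strain = mismatch_strain(0.003, 0.0)
print(f"mismatch strain = {strain:.1%}")  # prints: mismatch strain = 0.3%
```

In the actual samples the etched geometry makes the strain field spatially nonuniform, which is what produces the puddle pattern; this one-number estimate only sets the overall scale.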

"Actually, in the literature, it was known that the superconducting transition temperature of the material must depend on strain," Nowack said. However, only some simple strains, like a single stretch along one axis, had been tested. Using this theory, the Cornell group developed a model relating strain to superconductivity, and upon comparing the model's predictions to the more complex deformations of the CeIrIn5 samples, found that the predictions matched the observations exactly.

These findings open up a whole host of possible applications. This correlation between strain and superconductivity may become a new way of investigating the superconductive properties of other metals, which in turn could help refine physicists' understanding of this relationship even further.

The group hopes to investigate how these new discoveries could affect existing devices, like the Josephson junction, a device that uses two superconductors and has applications in quantum computing. "We're [also] thinking we can apply this to interesting magnetic systems that have interesting magnetic order, and change the properties of the magnetic order using strain," Nowack said.

Link:
Superconductivity? Stress is the Answer For Once - Cornell University The Cornell Daily Sun

Artificial Intelligence – What it is and why it matters | SAS

The term "artificial intelligence" was coined in 1956, but AI has become more popular today thanks to increased data volumes, advanced algorithms, and improvements in computing power and storage.

Early AI research in the 1950s explored topics like problem solving and symbolic methods. In the 1960s, the US Department of Defense took interest in this type of work and began training computers to mimic basic human reasoning. For example, the Defense Advanced Research Projects Agency (DARPA) completed street mapping projects in the 1970s. And DARPA produced intelligent personal assistants in 2003, long before Siri, Alexa or Cortana were household names.

This early work paved the way for the automation and formal reasoning that we see in computers today, including decision support systems and smart search systems that can be designed to complement and augment human abilities.

While Hollywood movies and science fiction novels depict AI as human-like robots that take over the world, the current evolution of AI technologies isn't that scary or quite that smart. Instead, AI has evolved to provide many specific benefits in every industry. Keep reading for modern examples of artificial intelligence in health care, retail and more.

Why is artificial intelligence important?

Read more here:
Artificial Intelligence – What it is and why it matters | SAS