How Will Health Care Regulators Address Artificial Intelligence? – The Regulatory Review

Policymakers around the world are developing guidelines for use of artificial intelligence in health care.

Baymax, the robotic health aide and unlikely hero from the movie Big Hero 6, is an adorable cartoon character, an outlandish vision of a high-tech future. But underlying Baymax's character is the very realistic concept of an artificial intelligence (AI) system that can be applied to health care.

As AI technology advances, how will regulators encourage innovation while protecting patient safety?

AI does not have a precise definition, but the term generally describes machines that have the capacity to process and respond to stimulation in a manner similar to human thought processes. Many industries, such as the military, academia, and health care, rely on AI today.

For decades, health care professionals have used AI to increase efficiency and enhance the quality of patient care. For example, radiologists employ AI to identify signs of certain diseases in medical imaging. Tech companies are also partnering with health care providers to develop AI-based predictive models to increase the accuracy of diagnoses. A recent study applied AI to predict COVID-19 based on self-reported symptoms.

In the wake of the COVID-19 pandemic and the rise of telemedicine, experts predict that AI technology will continue to be used to prevent and treat illness and will become more prevalent in the health care industry.

The use of AI in health care may improve patient care, but it also raises issues of data privacy and health equity. Although the health care sector is heavily regulated, no regulations target the use of AI in health care settings. Several countries and organizations, including the United States, have proposed regulations addressing the use of AI in health care, but no regulations have been adopted.

Even beyond the context of health care, policymakers have only begun to develop rules for the use of AI. Some existing data privacy laws and industry-specific regulations do apply to the use of AI, but no country has enacted AI-specific regulations. In January 2021, the European Union released its proposal for the first regulatory framework for the use of AI. The proposal establishes a procedure for new AI products entering the market and imposes heightened standards for applications of AI that are considered high risk.

The EU's suggested framework provides some examples of high-risk applications of AI that are related to health care, such as the use of AI to triage emergency aid. Although the EU's proposal does not focus on the health care industry in particular, experts predict that the EU regulations will serve as a framework for future, more specific guidelines.

The EU's proposal strikes a balance between ensuring the safety and security of the AI market and continuing to promote innovation and investment in AI. These conflicting values also appear in U.S. proposals to address AI in health care. Both the U.S. Food and Drug Administration (FDA) and the U.S. Department of Health and Human Services (HHS) more broadly have begun to develop guidelines on the use of AI in the health industry.

In 2019, FDA published a discussion paper outlining a proposed regulatory framework for modifications to AI-based software as a medical device (SaMD). FDA defines AI-based SaMD as software intended to treat, diagnose, cure, mitigate, or prevent disease. In the agency's discussion paper, FDA asserts its commitment to ensure that AI-based SaMD will deliver safe and effective software functionality that improves the quality of care that patients receive. FDA outlines the regulatory approval cycle for AI-based SaMD, which requires a holistic evaluation of the product and the maker of the product.

Earlier this year, FDA released an action plan for the regulation of AI-based SaMD that reaffirmed its commitment to encourage the development of AI best practices. HHS has also announced its strategy for the regulation of AI applied in health care settings. As with FDA and the EU, HHS balances the health and well-being of patients with the continued innovation of AI technology.

The United States is not alone in its attempt to monitor and govern the use of AI in health care. Countries such as China, Japan, and South Korea have also released guidelines and proposals seeking to ensure patient safety. In June 2021, the World Health Organization (WHO) issued a report on the use of AI in health care and offered six guiding principles for AI regulation: protecting autonomy; promoting safety; ensuring transparency; fostering responsibility; ensuring equity; and promoting sustainable AI.

Scholars are also discussing the use of AI in health care. Some experts have urged policymakers to develop AI systems designed to advance health equity. Others warn that algorithmic bias and unequal data collection in AI can exacerbate existing health inequalities. Experts argue that, to mitigate the risk of discriminatory AI practices, policymakers should consider the unintended consequences of the use of AI.

For example, AI systems must be trained to recognize patterns in data, and the training data may reflect historical discrimination. One study showed that women are less likely to receive certain treatments than men even though they are more likely to need them. Similarly biased data would train an AI system to perpetuate this pattern of discrimination. Health care regulators must address the need to protect patients from potential inequalities without discouraging the development of life-saving innovation in AI.

As the use of AI becomes more prominent in health care, regulators in the United States and elsewhere find themselves considering more robust regulations to ensure quality of care.

Read more here:
How Will Health Care Regulators Address Artificial Intelligence? - The Regulatory Review

Is it True that the USA Has Already Lost the Artificial Intelligence Battle with China? – BBN Times

China is overtaking the U.S. in artificial intelligence (AI), setting off alarm bells on the other side of the Pacific as the world's two largest economies are battling for world supremacy.

Artificial intelligence is widely used in a range of industries and greatly affects a nation's competitiveness and security.

The United States is losing artificial intelligence supremacy to China.

The increasing importance of information in the military and in warfare is making digital technology and its applications, such as analytics, AI, and augmented reality, indispensable to future conflicts.

It is fascinating and at the same time scary to see what the future of war may look like, and how devastating the aftermath can be.

Artificial intelligence weapons can attack with greater speed and precision than existing military weapons.

China's global share of research papers in the field of AI vaulted from 4.26% (1,086 papers) in 1997 to 27.68% (37,343 papers) in 2017, surpassing every other country in the world, including the U.S., a position it continues to hold.

Beijing also consistently files more AI patents than any other country. As of March 2019, the number of Chinese AI firms has reached 1,189, second only to the U.S., which has more than 2,000 active AI firms. These firms focus more on speech (e.g., speech recognition, speech synthesis) and vision (e.g., image recognition, video recognition) than their overseas counterparts.

China is also very active in weaponizing artificial intelligence, machine learning and deep learning technology.

China's military applications of AI include unmanned intelligent combat systems, enhanced battlefield situational awareness and decision-making, multi-domain offense and defense, and advanced training, simulation, and wargaming practices.

For example, the August launch of a nuclear-capable rocket that circled the globe took US intelligence by surprise.

China recently tested a nuclear-capable manoeuvrable missile, and Russia and the US have their own programmes.

China's large population gives it advantages in generating and utilizing big data, and its decades-long effort in promoting technology and engineering gives it a rich supply of high-quality computer scientists and engineers.

US National Intelligence has recently reported that a superpower needs to lead in five technologies:

Source: Forbes

Beijing has won the artificial intelligence battle with Washington and is heading towards global dominance because of its technological advances.

China is likely to dominate many of the key emerging technologies, particularly artificial intelligence, synthetic biology and genetics within a decade.

The country has a vibrant market that is receptive to these new AI-based products, and Chinese firms are relatively fast in bringing AI products and services to the market.

Chinese consumers are also fast in adopting such products and services. As such, the environment supports rapid refinement of AI technologies and AI-powered products.

Beijing's market is conducive to the adoption and improvement of artificial intelligence.

Read this article:
Is it True that the USA Has Already Lost the Artificial Intelligence Battle with China? - BBN Times

UC adopts recommendations for the responsible use of Artificial Intelligence – Preuss School Ucsd

The University of California Presidential Working Group on Artificial Intelligence was launched in 2020 by University of California President Michael V. Drake and former UC President Janet Napolitano to assist UC in determining a set of responsible principles to guide procurement, development, implementation, and monitoring of artificial intelligence (AI) in UC operations.

To support these goals, the working group developed a set of UC Responsible AI Principles and explored four high-risk application areas: health, human resources, policing, and student experience. The working group has published a final report that explores current and future applications of AI in these areas and provides recommendations for how to operationalize the UC Responsible AI Principles. The report concludes with overarching recommendations to help guide UC's strategy for determining whether and how to responsibly implement AI in its operations.

Camille Nebeker, Ed.D., associate professor with appointments in the UC San Diego Herbert Wertheim School of Public Health and Human Longevity Science and the Design Lab, and two researchers in the Department of Computer Science and Engineering, Nadia Heninger, Ph.D., associate professor whose work focuses on cryptography and security, and Lawrence Saul, Ph.D., professor whose research interests are machine learning and data analysis, were members of the working group.

"The use of artificial intelligence within the UC campuses cuts across human resources, procurement, policing, student experience and healthcare. We, as an organization, did not have guiding principles to support responsible decision-making around AI," said Nebeker, who co-founded and directs the Research Center for Optimal Digital Ethics Health at UC San Diego, a multidisciplinary group that conducts research and provides education to support ethical digital health study practices.

The UC Presidential Working Group on AI has met over the past year to develop principles to advance responsible practices specific to the selection, implementation and management of AI systems.

With universities increasingly turning to AI-enabled tools to support greater efficiency and effectiveness, UC is setting an important precedent as one of the first universities, and the largest public university system, to develop governance processes for the responsible use of AI. More info is available on the UC Newsroom.

View post:
UC adopts recommendations for the responsible use of Artificial Intelligence - Preuss School Ucsd

Artificial Intelligence project aims to improve standards and development of AI systems – University of Birmingham

A new project has been launched in partnership with the University of Birmingham aiming to address racial and ethical health inequalities using artificial intelligence (AI).

STANDING Together, led by University Hospitals Birmingham NHS Foundation Trust (UHB), aims to develop standards for the datasets that AI systems use, to ensure they are diverse and inclusive and work across all demographic groups. The resulting standards will help regulators, commissioners, policymakers and health data institutions assess whether AI systems are underpinned by datasets that represent everyone and don't leave underrepresented or minority groups behind.

Xiao Liu, Clinical Researcher in Artificial Intelligence and Digital Healthcare at the University of Birmingham and UHB, and STANDING Together project co-leader, said: "We're looking forward to starting work on our project, and developing standards that we hope will improve the use of AI both in the UK and around the world. We believe AI has enormous potential to improve patient care, but through our earlier work on producing AI guidelines, we also know that there is still lots of work to do to make sure AI is a success story for all patients. Through the STANDING Together project, we will work to ensure AI benefits all patients and not just the majority."

NHSX's NHS AI Lab, the NIHR, and the Health Foundation have awarded £1.4 million in total to four projects, including STANDING Together. The other organisations working with UHB and the University of Birmingham on STANDING Together are the Massachusetts Institute of Technology, Health Data Research UK, Oxford University Hospitals NHS Foundation Trust, and The Hospital for Sick Children (SickKids, Toronto).

The NHS AI Lab introduced the AI Ethics Initiative to support research and practical interventions that complement existing efforts to validate, evaluate and regulate AI-driven technologies in health and care, with a focus on countering health inequalities. Today's announcement is the result of the Initiative's partnership with The Health Foundation on a research competition, enabled by NIHR, to understand and enable opportunities to use AI to address inequalities and to optimise datasets and improve AI development, testing and deployment.

Brhmie Balaram, Head of AI Research and Ethics at NHSX, said: "We're excited to support innovative projects that demonstrate the power of applying AI to address some of our most pressing challenges; in this case, we're keen to prove that AI can potentially be used to close gaps in minority ethnic health outcomes. Artificial intelligence has the potential to revolutionise care for patients, and we are committed to ensuring that this potential is realised for all patients by accounting for the health needs of diverse communities."

Dr Indra Joshi, Director of the NHS AI Lab at NHSX, added: "As we strive to ensure NHS patients are amongst the first in the world to benefit from leading AI, we also have a responsibility to ensure those technologies don't exacerbate existing health inequalities. These projects will ensure the NHS can deploy safe and ethical Artificial Intelligence tools that meet the needs of minority communities and help our workforce deliver patient-centred and inclusive care to all."

Excerpt from:
Artificial Intelligence project aims to improve standards and development of AI systems - University of Birmingham

The Fundamental Flaw in Artificial Intelligence & Who Is Leading the AI Race? Artificial Human Intelligence vs. Real Machine Intelligence – BBN…

The Fundamental Flaw in Artificial Intelligence & Who Is Leading the AI Race? Artificial Human Intelligence vs. Real Machine Intelligence

Artificial intelligence is impacting every single aspect of our future, but it has a fundamental flaw that needs to be addressed.

The fundamental flaw of artificial intelligence is that it requires a skilled workforce. Apple is currently leading the artificial intelligence race, having acquired 29 AI startups since 2010.

Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst. We just don't know. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it.

Stephen Hawking

Artificial intelligence is reduced to the following definitions:

1: a branch of computer science dealing with the simulation of intelligent behavior in computers; the capability of a machine to imitate intelligent human behavior;

2: an area of computer science that deals with giving machines the ability to seem like they have human intelligence;

3: the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings; systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience;

4: a system that perceives its environment and takes actions that maximize its chance of achieving its goals;

5: machines that mimic cognitive functions that humans associate with the human mind, such as learning and problem solving.

Source: Deloitte

The purpose of artificial intelligence is to enable computers and machines to perform intellectual tasks such as problem solving, decision making, perception, and understanding human communication.

In fact, today's AI is not copying human brains, minds, intelligence, cognition, or behavior. It is all about advanced hardware, software and dataware: information processing technology, big data collection, and big computing power. As was rightly noted at the Financial Times Future Forum, "The Impact of Artificial Intelligence on Business and Society": "Machines will outperform us not by copying us but by harnessing the combination of colossal quantities of data, massive processing power and remarkable algorithms."

They are advanced data-processing systems: weak or narrow AI applications, neural networks, machine learning, deep learning, multiple linear regression, RFM modeling, cognitive computing, predictive intelligence/analytics, language models, or knowledge graphs. Be it cognitive APIs (face, speech, text, etc.), the Microsoft Azure AI platform, web search or self-driving transportation, GPT-3/4/5 or BERT, Microsoft's knowledge graph, Google's Knowledge Graph or Diffbot, which trains its knowledge graph on the entire internet, encoding entities like people, places and objects into nodes connected to other entities via edges.

Source: DZone

Today's "AI is meaningless" and is "often just a fancy name for a computer program": software patches, like bug fixes, applied to legacy software or big databases to improve their functionality, security, usability, or performance.

Such machines are not yet self-aware and they cannot understand context, especially in language. Operationally, too, they are limited by the historical data from which they learn, and restricted to functioning within set parameters.

Lucy Colback

Today's artificial intelligence (AI) is limited. It still has a long way to go.

Artificial intelligence can be duped by scenarios it has never seen before.

With AI playing an increasingly major role in modern software and services, each major tech firm is battling to develop robust machine-learning technology for use in-house and to sell to the public via cloud services.

However, most tech companies are still struggling to unlock the real power of artificial intelligence.

Today's artificial intelligence is at best narrow. Narrow artificial intelligence is what we see all around us in computers today -- intelligent systems that have been taught or have learned how to carry out specific tasks without being explicitly programmed how to do so.

According to CB Insights, artificial intelligence companies are a prime acquisition target for companies looking to leverage AI tech without building it from scratch. In the race for AI, this is who's leading the charge.

The usual suspects are leading the race for AI: tech giants like Facebook, Amazon, Microsoft, Google, and Apple (FAMGA) have all been aggressively acquiring AI startups for the last decade.

Among FAMGA, Apple leads the way. With 29 total AI acquisitions since 2010, the company has made nearly twice as many acquisitions as second-place Google (the frontrunner from 2012 to 2016), with 15 acquisitions.

Apple and Google are followed by Microsoft with 13 acquisitions, Facebook with 12, and Amazon with 7.

Source: CB Insights

Apple's AI acquisition spree, which has helped it overtake Google in recent years, has been essential to the development of new iPhone features. For example, Face ID, the technology that allows users to unlock their iPhones by looking at them, stems from Apple's M&A moves in chips and computer vision, including the acquisition of AI company RealFace.

In fact, many of FAMGA's prominent products and services, such as Apple's Siri or Google's contributions to healthcare through DeepMind, came out of acquisitions of AI companies.

Other top acquirers include major tech players like Intel, Salesforce, Twitter, and IBM.

Source: Analytics Steps

Artificial Intelligence with robotics is poised to change our world from top to bottom, promising to help solve some of the world's most pressing problems, from healthcare to economics to global crisis predictions and timely responses.

But when adopting, integrating, and implementing AI technologies, as a Deloitte report says, around 94% of enterprises face potential problems.

This article is not about those AI problems, such as the lack of technical know-how, data acquisition and storage, transfer learning, an expensive workforce, ethical or legal challenges, big data addiction, computation speed, black boxes, narrow specialization, myths & expectations and risks, cognitive biases, or the price factor. Nor is it our subject to discuss why small and mid-sized organizations struggle to adopt costly AI technologies, while big firms like Facebook, Apple, Microsoft, Google, Amazon and IBM allocate separate budgets for acquiring AI startups.

Instead, we focus on AI itself as the biggest issue, with three fundamental problems that call for fundamental solutions in terms of real human-machine intelligence, as outlined below.

First, there is AI's philosophy, or rather its lack of any philosophy: a blind reliance on observations and empirical data or statistics, on processes, algorithms, and inductive inferences that need large volumes of big data as fuel to train models for narrow classification and prediction tasks in very specific cases.

Second, today's AI is not a scientific AI that follows the rules, principles, and methods of science. Today's AI fails to deal with reality, its causality and mentality, by strictly following a scientific method of inquiry that depends on the reciprocal interaction of generalizations (hypotheses, laws, theories, and models) and observable or experimental data. Most ML models tuned and tweaked to perform best in the lab fail to work in real-world settings across a wide range of AI applications, from image recognition to natural language processing (NLP) to disease prediction, due to data shift, under-specification or something else. The process used to build most ML models today cannot tell which models will work in the real world and which ones won't.

Third, there is extreme anthropomorphism in today's AI/ML/DL: "attributing distinctively human-like feelings, mental states, and behavioral characteristics to inanimate objects, animals, religious figures, the environment, and technological artifacts (from computational artifacts to robots)." Anthropomorphism permeates AI research, development, design, and deployment, shaping the very language of computer scientists, designers, and programmers: "machine learning," which is not any human-like learning; "neural networks," which are not biological neural networks; or "artificial intelligence," which is not any human-like intelligence. This entails the whole gamut of humanitarian issues, like AI ethics and morality, responsibility and trust, etc.

As a result, its trends are chaotic, sporadic and unsystematic, as the Gartner Hype Cycle for Artificial Intelligence 2021 demonstrates.

Source: Gartner

In consequence, there is no common definition of AI, and each one sees AI in its own way, mostly marked by an extreme anthropomorphism replacing real machine intelligence (RMI) with artificial human intelligence (AHI).

Source: Econolytics

Generally, there are two groups of ML/AI researchers, AI specialists and ML generalists.

Most AI researchers, 99.999% of them, are narrow specialists involved with different aspects of Artificial Human Intelligence (AHI), where AI is about programming human brains/minds/intelligence/behavior into computing machines or robots.

Artificial Human Intelligence (AHI) is sometimes defined as the ability of a machine to perform cognitive functions we associate with human minds, such as perceiving, reasoning, learning, interacting with the environment, problem solving, and even exercising creativity.

The EC High-Level Expert Group on artificial intelligence has formulated its own specific behaviorist definition.

"Artificial intelligence (AI) refers to systems that display intelligent behaviour by analysing their environment and taking actions with some degree of autonomy to achieve specific goals."

"Artificial intelligence (AI) refers to systems designed by humans that, given a complex goal, act in the physical or digital world by perceiving their environment, interpreting the collected structured or unstructured data, reasoning on the knowledge derived from this data and deciding the best action(s) to take (according to predefined parameters) to achieve the given goal. AI systems can also be designed to learn to adapt their behaviour by analysing how the environment is affected by their previous actions."

In all, the AHI is fragmented as in:

Very few MI/AI researchers (the generalists), 0.0001%, know that real MI is about programming reality models and causal algorithms into computing machines or robots.

The first group lives on the anthropomorphic idea of AHI based on ML, DL and NNs, dubbed narrow, weak, strong or general, superhuman or superintelligent AI, or simply fake AI. Its machine learning models are built on the principle of statistical induction: inferring patterns from specific observations, generalizing statistically from observations, or acquiring knowledge from experience.

This inductive approach is useful for building tools for specific tasks on well-defined inputs: analyzing satellite imagery, recommending movies, and detecting cancerous cells, for example. But induction is incapable of the general-purpose knowledge creation exemplified by the human mind. Humans develop general theories about the world, often about things of which we've had no direct experience.

Whereas induction implies that you can only know what you observe, many of our best ideas don't come from experience. Indeed, if they did, we could never solve novel problems or create novel things. Instead, we explain the inside of stars, bacteria, and electric fields; we create computers, build cities, and change nature: feats of human creativity and explanation, not mere statistical correlation and prediction.

The second group advances a true and real AI, which means programming general theories about the world instead of cognitive functions and human actions, dubbed real-world AI, or Transdisciplinary AI, simply the Trans-AI.

To summarize this hardest-ever problem: the philosophical and scientific definitions of AI are of two polar types, subjective, human-dependent and anthropomorphic versus objective, scientific and reality-related.

So, we have a critical distinction, AHI vs. Real AI, and should choose and follow the true way.

Today's narrow AI advances are due to computing brute force: the rise of big data combined with the emergence of powerful graphics processing units (GPUs) for complex computations and the re-emergence of a decades-old AI computation model, compute-hungry deep learning. Its proponents are now looking for a new equation for future AI innovation that includes the advent of small data, more efficient deep learning models, deep reasoning, new AI hardware such as neuromorphic chips or quantum computers, and progress toward unsupervised self-learning and transfer learning.

Ultimately, researchers hope to create future AI systems that do more than mimic human thought patterns like reasoning and perception; they see it performing an entirely new type of thinking. While this might not happen in the very next wave of AI innovation, it's in the sights of AI thought leaders.

Considering the existential value of AI Science and Technology, we must be absolutely honest and perfectly fair here.

Today's AI is hardly any real and true AI if all you automate is statistical generalization from observations, with data pattern matching, statistical correlations, and interpolations (predictions), as the AI4EU is promoting.

Today's AI is narrow. Applying trained models to new challenges requires an immense amount of new training data and time. We need AI that combines different forms of knowledge, unpacks causal relationships, and learns new things on its own.

Such a defective AI can only compute what it observes in the training data it is fed, for very special tasks on well-defined inputs: blindly translating text, analyzing satellite imagery, recommending movies, or detecting cancerous cells, for example. By its very design it is incapable of general-purpose knowledge creation, where the beauty of intelligence sits.

Their machine learning models are built on the principle of induction: inferring patterns from specific observations or acquiring knowledge from experience, focused on big data, where the more observations, the better the model. They have to feed their statistical algorithms millions of labelled pictures of cats, or millions of games of chess, to reach the best prediction accuracy.
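
A toy sketch of that inductive recipe might look like the following (a hypothetical Python/scikit-learn example, not any particular production system): the model can interpolate within the patterns it has seen, but it has no concept of "cat" and no basis for reasoning about inputs unlike anything in its training data.

```python
# Toy illustration of statistical induction: a model infers a pattern
# from labelled observations and can only generalize within that pattern.
# (Hypothetical example; all numbers are invented.)
import numpy as np
from sklearn.linear_model import LogisticRegression

# Labelled observations: features = [weight_kg, whisker_length_cm], label 1 = "cat", 0 = "dog"
X_train = np.array([[4.0, 6.5], [3.5, 7.0], [4.5, 6.0],     # cats
                    [25.0, 2.0], [30.0, 1.5], [22.0, 2.5]])  # dogs
y_train = np.array([1, 1, 1, 0, 0, 0])

model = LogisticRegression().fit(X_train, y_train)

# Inputs that resemble the training data are handled well.
print(model.predict([[4.2, 6.8]]))    # likely class 1 ("cat")

# But an input unlike anything observed (say, a 500 kg animal) is still
# forced into one of the two learned classes; the model cannot theorize.
print(model.predict([[500.0, 0.1]]))
```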

As the article "The False Philosophy Plaguing AI" wisely noted:

In fact, most of science involves the search for theories which explain the observed by the unobserved. We explain apples falling with gravitational fields, mountains with continental drift, disease transmission with germs. Meanwhile, current AI systems are constrained by what they observe, entirely unable to theorize about the unknown.

Again, no big data can lead you to a general principle, law, theory, or fundamental knowledge. That is the damnation of induction, be it mathematical or logical or experimental.

Due to the lack of a deep conceptual foundation, today's AI is closely associated with its supposed logical consequences: "AI will automate everything and put people out of work," "AI is a totally science-fiction-based technology," or "Robots will command the world." It is misrepresented in the top five myths about artificial intelligence.

That means we need a true, real and scientific AI, not AHI: real-world machine intelligence and learning, or the Trans-AI, simulating and modeling reality, physical, mental or virtual, with its causality and mentality, as reflected in real superintelligence (RSI).

Last but not least, this transdisciplinary technology is what Stephen Hawking called effective and human-friendly AI, and what Google's founder was dreaming about: "AI would be the ultimate version of Google. The ultimate search engine would understand everything on the web. It would understand exactly what you wanted, and it would give you the right thing." (Larry Page)

Our approach to artificial intelligence is fundamentally wrong because we are not training and developing a skilled workforce capable of handling AI. We've thought about AI the wrong way, focusing on algorithms instead of finding solutions to make AI better and unbiased.

Artificial intelligence has to be optimized based on human preferences so that it solves real problems. Apple is currently leading the race but it's a very competitive battle. American and Chinese tech companies are ahead of European tech companies when it comes to artificial intelligence.

A lot of work will need to be done to avoid the negative consequences of artificial intelligence, especially with the advent of artificial superintelligence. The sooner we begin regulating artificial intelligence, the better equipped we will be to mitigate and manage the dark side of artificial intelligence.

Transdisciplinary artificial intelligence, as a responsible global man-machine intelligence, has all the potential to help solve several problems related to AI and consequently improve the lives of billions.

Original post:
The Fundamental Flaw in Artificial Intelligence & Who Is Leading the AI Race? Artificial Human Intelligence vs. Real Machine Intelligence - BBN...

From TikTok to bear mascots: 7 ways education is recruiting cyber talent – EdScoop

Researchers at Kennesaw State University in Georgia developed virtual reality-based lessons and gamified learning software to help K-12 students develop cybersecurity skills.

"Cybersecurity is not yet an official part of school curriculums, yet we are living in an increasingly digital world," Kennesaw professor Joy Li said in a press release. "This presented us a wonderful opportunity to make an impact on education by using games, which have become one of the most efficient ways to grab students' attention. On a secondary level, we hope that this kind of exposure will encourage kids to pursue careers in cybersecurity."

The University of Texas at San Antonio's cybersecurity center developed games for K-12 students, both in digital and physical card formats. The games, designed for children as young as five years old, teach vocabulary and general cybersecurity concepts, like cryptography. One of the games introduces cybersecurity using bear mascots, called the CyBear family, which is complete with four bear characters named after famous computer scientists: Alan Turing, Grace Hopper, Augusta Ada King and Vint Cerf.

Read the rest here:
From Tiktok to bear mascots 7 ways education is recruiting cyber talent - EdScoop

Blockchain explained: Breaking down the technology that's transforming the world of finance – Euronews

When you think about blockchains, probably the first thing that comes to mind is Bitcoin or cryptos.

But actually, the technology is extremely versatile and has potential far beyond cryptocurrencies.

Blockchains have become popular over the past few years because they allow us to secure and verify all kinds of data in a decentralised network that cannot be altered.

The idea has its roots as far back as 1991, when two computer scientists, Stuart Haber and Scott Stornetta, proposed a system to protect timestamps on documents from being interfered with.

Satoshi Nakamoto, the anonymous Bitcoin inventor, then built on this system and referenced the two scientists in his Bitcoin whitepaper.

He successfully deployed the first public blockchain in 2009.

Put simply, a blockchain is a database in the form of a distributed ledger that uses cryptography to secure any kind of information.

This ledger takes the form of a series of records or blocks that are each added onto the previous block in the chain, hence the name blockchain.

Each block contains a timestamp, data, and a hash. This is a unique identifier for all the contents of the block, sort of like a digital fingerprint.

Crucially, once data has been recorded and verified in a block, it cannot be altered. Instead, if a change has to be made, this is recorded and verified in a new block which is then added to the chain.

Each new block reinforces the verification of the previous block and hence the entire blockchain.

The block also contains the hash of the previous block in the chain. These hashes are the backbone of a public blockchain.

They are how all the participants in a public, decentralised network can come to a consensus on how a block is verified and added to the chain.

A cryptographic hash function is basically a mathematical algorithm that maps data of arbitrary length to an output of fixed length.

So, if you want to represent, for example, a list of names of varying lengths, a hash function would map each of these names (the data) to a unique string of numbers of a fixed length. This string of numbers is known as the hash.

The hash function will return the same hash no matter how many times you input the same data.

If you even slightly change the inputted data, the hash will change completely.
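
As a concrete illustration, here is a short Python sketch using the standard hashlib library (SHA-256, the hash function Bitcoin itself uses):

```python
import hashlib

# SHA-256 always returns a 256-bit digest (64 hex characters),
# no matter how long or short the input is.
print(hashlib.sha256(b"Stuart Haber").hexdigest())
print(hashlib.sha256(b"Scott Stornetta").hexdigest())

# Changing the input even slightly produces a completely different hash.
print(hashlib.sha256(b"stuart haber").hexdigest())
```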

Hashing is considered a function that only works one way. That's because it's highly infeasible - but not impossible - to reverse engineer the data that outputs a given hash without a huge, huge amount of computational power.

The fastest way to guess the data that produces a given hash is simply to guess and check, over and over again.

In the Bitcoin blockchain, which uses a proof of work consensus mechanism, computers in the network join in this elaborate guessing game hoping to solve the puzzle first.

The computer with higher computational power - meaning the capability to run through more guesses faster - is more likely to win the race and therefore verify the block for the reward of Bitcoin.
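
A highly simplified sketch of that guess-and-check race might look like the following Python snippet (a toy illustration only: real Bitcoin mining hashes a structured block header against a vastly higher difficulty target):

```python
import hashlib
import time

def mine(previous_hash: str, data: str, difficulty: int = 4) -> tuple[int, str]:
    """Guess-and-check: find a nonce whose block hash starts with `difficulty` zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        candidate = f"{previous_hash}{data}{nonce}".encode()
        digest = hashlib.sha256(candidate).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

start = time.time()
nonce, block_hash = mine(previous_hash="0" * 64, data="Alice pays Bob 1 BTC")
print(f"Found nonce {nonce} -> {block_hash} in {time.time() - start:.2f}s")

# Each extra leading zero multiplies the expected number of guesses by 16,
# which is why more computational power makes a miner more likely to win.
```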

It's important to remember that the word blockchain doesn't describe any single database or network. Rather, it's a type of technology and there are different kinds of blockchains that work in different ways.

A public blockchain, like Bitcoin, allows anyone to join the network and access the distributed ledger.

A private blockchain is a closed network. It still uses some decentralisation and a peer-to-peer system, but overall this kind is controlled by a single entity and access is restricted to a defined network.

A hybrid blockchain is a combination of a public and private blockchain. This kind of blockchain allows an entity to distribute a ledger with some publicly accessible data but also restrict access to more sensitive data within the network.

A consortium blockchain is similar to a private blockchain, only this type of ledger is controlled by multiple entities rather than a single one.

Here is the original post:
Blockchain explained: Breaking down the technology thats transforming the world of finance - Euronews

Here's what's next for the Bitcoin price: expert panel – The Motley Fool Australia

The Bitcoin (CRYPTO: BTC) price has retreated from Wednesday's new all-time highs of US$66,930 (AU$89,240).

The digital token has lost 6% since then and is currently trading for US$62,845.

Interest in the world's biggest crypto remains elevated, with more than US$45 billion worth changing virtual hands over the past 24 hours, according to data from CoinMarketCap.

With that level of interest in mind, the Motley Fool reached out to 3 crypto experts for their take on BITO, the new US listed, futures-based Bitcoin exchange traded fund (ETF), and their forecasts for where the Bitcoin price could be heading next.

(For details on the launch of the ProShares Bitcoin Strategy ETF(NYSE: BITO), go here.)

Now, on to our expert panel:

The Motley Fool: The launch of BITO garnered a lot of investor excitement and looks to have helped drive the Bitcoin price to new highs. What are your thoughts on a futures-based Bitcoin ETF, and will we ever see something similar on the ASX?

Jonathon Miller: The launch of a Bitcoin ETF is an exciting moment for the maturation of the digital assets industry and a good measure of where Bitcoin is in its adoption journey.

The timing of the BITO launch is also significant in that it went live when the Bitcoin price was reaching all-time highs. We saw US$1 billion in trading volume on the first day which is a great achievement, and another of the many positive news stories we have seen lately for crypto adoption.

We can expect that Australian regulators are watching what happens in the US and will use this as a framework for decisions on local products. It's hard to predict when this will happen, but the success of BITO so far is a very positive thing.

Peter Kazacos: Anything that makes it easier for investors to get exposure to an asset is a good thing for that asset. In the case of BITO, its a good thing for Bitcoin. The ETF means large institutional investors and investment houses can easily participate in a very traditional sense in the fortunes of BTC. A futures-based ETF like BITO paves the way to a spot ETF in the near term, which would be a significant milestone and have a positive impact on the Bitcoin price.

It is likely that we will one day see an Australian Bitcoin ETF as demand for the asset continues globally.

Simon Peters: While ProShares (BITO) is not an ETF holding the underlying asset that many in the crypto community want to see, it's still a step in the right direction.

A Bitcoin futures ETF now provides a convenient way for investors to get exposure to the Bitcoin price movement. However, investors who plan to hold for the longer term would need to take into account hidden fees within the futures ETF. Contracts will have to roll every month, and this could erode potential gains.
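
As a rough, hypothetical illustration of that roll cost (all figures below are invented for the arithmetic, not a forecast of actual contango or fund performance):

```python
# Hypothetical illustration of futures "roll" drag in a contango market,
# where each next-month contract trades slightly above the expiring one.
# All numbers are invented for illustration only.
monthly_roll_cost = 0.005   # assume each monthly roll costs 0.5%

value = 1.0                 # $1 invested, assuming the spot price stays flat
for month in range(12):
    value *= (1 - monthly_roll_cost)

print(f"Value of $1 after a year of monthly rolls: ${value:.4f}")
# ~ $0.94: roughly a 6% drag relative to simply holding the spot asset,
# before the fund's management fee is even counted.
```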

BITO saw a strong first day of trading. However, with more Bitcoin futures ETFs in the approval pipeline, whether this particular ProShares Bitcoin futures ETF can carry this momentum forward, we'll see.

Motley Fool: After posting a new all-time high this week, what is your outlook for the Bitcoin price movement?

Jonathon Miller: This rally has been driven by an incredible year of crypto adoption news for Bitcoin as well as Ethereum. The two coins have both shared leading roles in the news cycle, dragging each other down and bringing each other up in the market.

The all-time Bitcoin price high earlier this year was largely due to institutional interest where we saw adoption from big names such as Fidelity, Tesla and PayPal.

There is no way to predict the market, but it's important to highlight that Bitcoin has scarcity, with a total supply of only 21 million coins. And there are a lot more people in the world than that. The space is moving very quickly, and we know from Kraken Intelligence reports that the final quarter of the year has historically been the most bullish.

However, after price hikes, there is always the risk that we will see price drops as people look to take a profit.

Peter Kazacos: Mass adoption is the buzzword for any Bitcoin maximalist. If we see more mass adoption, which we define as BTC entering the traditional financial system, we will see more demand for the asset, which will fuel Bitcoin price increases.

If Bitcoin finds more champions like Jack Dorsey from Twitter and President Bukele from El Salvador, we could very well see a US$100,000 Bitcoin price in the near future.

Advances in technology are the biggest risk for Bitcoin, specifically the advent of quantum computing, which could break current cryptography. Kaz has a solution which uses quantum technology to upgrade the cryptography of existing protocols like BTC.

Quantum Assets on the Binance Smart Chain are the first crypto to adopt our quantum technology and are using it to launch Quantum Bitcoin in a bid to ensure the cryptography of Bitcoin remains safe and secure.

Simon Peters: Now that we've seen a new all-time Bitcoin price high, the question is whether we'll see a pullback or whether the price will carry on. Given the price run in the last few weeks, the Bitcoin price is somewhat overextended and we could (very soon) see a pullback in the short term as some investors and traders take some profit off the table.

Long term, on-chain metrics continue to be bullish. More of the circulating Bitcoin supply is continuing to migrate from short-term holders to long-term holders, which is squeezing supply. Simultaneously, inflation concerns could increase demand, with institutional and retail investors exploring alternative assets like Bitcoin rather than traditional inflation hedges or holding cash.

Also taking into account seasonality, the fourth quarter tends to be a strong time of the year for crypto bull markets. Refer back to 2017, for example. So, I wouldn't rule out higher prices than where we are currently by the end of 2021, possibly into the six-figure zone.

The Motley Fool will end with a recap of Jonathon Miller's words: "There is no way to predict the market."

While the Bitcoin price could head into the six-figure range from here, it could also go the other way.

Invest with care.

View original post here:
Heres whats next for the Bitcoin price: expert panel - The Motley Fool Australia

Apple's plan to scan images will allow governments into smartphones – The Guardian

For centuries, cryptography was the exclusive preserve of the state. Then, in 1976, Whitfield Diffie and Martin Hellman came up with a practical method for establishing a shared secret key over an authenticated (but not confidential) communications channel without using a prior shared secret. The following year, three MIT scholars, Ron Rivest, Adi Shamir and Leonard Adleman, came up with the RSA algorithm (named after their initials) for implementing it. It was the beginning of public-key cryptography, at least in the public domain.
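
The Diffie-Hellman idea can be sketched in Python with deliberately tiny numbers (a toy illustration only; real deployments use primes of 2048 bits or more, or elliptic curves, and the values below are purely for demonstration):

```python
# Toy Diffie-Hellman key exchange with deliberately tiny numbers.
p, g = 23, 5                 # public: a small prime modulus and a generator

alice_secret = 6             # chosen privately by Alice, never transmitted
bob_secret = 15              # chosen privately by Bob, never transmitted

alice_public = pow(g, alice_secret, p)   # sent over the open channel
bob_public = pow(g, bob_secret, p)       # sent over the open channel

# Each side combines the other's public value with its own secret...
alice_shared = pow(bob_public, alice_secret, p)
bob_shared = pow(alice_public, bob_secret, p)

# ...and both arrive at the same shared key, which an eavesdropper who only
# saw p, g and the two public values cannot feasibly compute at real key sizes.
assert alice_shared == bob_shared
print(alice_shared)
```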

From the very beginning, state authorities were not amused by this development. They were even less amused when in 1991 Phil Zimmermann created Pretty Good Privacy (PGP) software for signing, encrypting and decrypting texts, emails, files and other things. PGP raised the spectre of ordinary citizens, or at any rate the more geeky of them, being able to wrap their electronic communications in an envelope that not even the most powerful state could open. In fact, the US government was so enraged by Zimmermann's work that it defined PGP as a munition, which meant that it was a crime to export it to Warsaw Pact countries. (The cold war was still relatively hot then.)

In the four decades since then, there's been a conflict between the desire of citizens to have communications that are unreadable by state and other agencies and the desire of those agencies to be able to read them. The aftermath of 9/11, which gave states carte blanche to snoop on everything people did online, and the explosion in online communication via the internet and (since 2007) smartphones have intensified the conflict. During the Clinton years, US authorities tried (and failed) to ensure that all electronic devices should have a secret backdoor, while the Snowden revelations in 2013 put pressure on internet companies to offer end-to-end encryption for their users' communications that would make them unreadable by either security services or the tech companies themselves. The result was a kind of standoff between tech companies facilitating unreadable communications and law enforcement and security agencies unable to access evidence to which they had a legitimate entitlement.

In August, Apple opened a chink in the industry's armour, announcing that it would be adding new features to its iOS operating system that were designed to combat child sexual exploitation and the distribution of abuse imagery. The most controversial measure scans photos on an iPhone, compares them with a database of known child sexual abuse material (CSAM) and notifies Apple if a match is found. The technology is known as client-side scanning or CSS.

Powerful forces in government and the tech industry are now lobbying hard for CSS to become mandatory on all smartphones. Their argument is that instead of weakening encryption or providing law enforcement with backdoor keys, CSS would enable on-device analysis of data in the clear (i.e. before it becomes encrypted by an app such as WhatsApp or iMessage). If targeted information were detected, its existence and, potentially, its source would be revealed to the agencies; otherwise, little or no information would leave the client device.
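
At its conceptual core, this kind of client-side scanning is a fingerprint lookup: hash each local file and check it against a list of known targeted hashes. The Python sketch below is purely illustrative; Apple's actual design uses a perceptual NeuralHash and private set intersection rather than a plain SHA-256 comparison, and the hash list and path shown are hypothetical.

```python
import hashlib
from pathlib import Path

# Purely illustrative client-side scanning sketch: fingerprint each local file
# and check it against a list of known "targeted" fingerprints.
# The entry below is a hypothetical placeholder, not a real database entry.
known_hashes = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def scan_photos(photo_dir: str) -> list[Path]:
    matches = []
    for photo in Path(photo_dir).glob("*.jpg"):
        digest = hashlib.sha256(photo.read_bytes()).hexdigest()
        if digest in known_hashes:
            matches.append(photo)   # a real system would report this match
    return matches

# Example usage (hypothetical path): matches = scan_photos("/path/to/photos")
```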

CSS evangelists claim that it's a win-win proposition: providing a solution to the encryption v public safety debate by offering privacy (unimpeded end-to-end encryption) and the ability to successfully investigate serious crime. What's not to like? Plenty, says an academic paper by some of the world's leading computer security experts published last week.

The drive behind the CSS lobbying is that the scanning software be installed on all smartphones rather than installed covertly on the devices of suspects or by court order on those of ex-offenders. Such universal deployment would threaten the security of law-abiding citizens as well as lawbreakers. And even though CSS still allows end-to-end encryption, this is moot if the message has already been scanned for targeted content before it was dispatched. Similarly, while Apple's implementation of the technology simply scans for images, it doesn't take much to imagine political regimes scanning text for names, memes, political views and so on.

In reality, CSS is a technology for what in the security world is called bulk interception. Because it would give government agencies access to private content, it should really be treated like wiretapping and regulated accordingly. And in jurisdictions where bulk interception is already prohibited, bulk CSS should be prohibited as well.

In the longer view of the evolution of digital technology, though, CSS is just the latest step in the inexorable intrusion of surveillance devices into our lives. The trend that started with reading our emails, moved on to logging our searches and our browsing clickstreams, mining our online activity to create profiles for targeting advertising at us and using facial recognition to allow us into our offices now continues by breaching the home with smart devices relaying everything back to motherships in the cloud and, if CSS were to be sanctioned, penetrating right into our pockets, purses and handbags. That leaves only one remaining barrier: the human skull. But, rest assured, Elon Musk undoubtedly has a plan for that too.

Wheels within wheels: I'm not an indoor cyclist but if I were, "The Counterintuitive Mechanics of Peloton Addiction", a confessional blogpost by Anne Helen Petersen, might give me pause.

Get out of here: "The Last Days of Intervention" is a long and thoughtful essay in Foreign Affairs by Rory Stewart, one of the few British politicians who always talked sense about Afghanistan.

The insider: "Blowing the Whistle on Facebook Is Just the First Step" is a bracing piece by Maria Farrell in the Conversationalist about the Facebook whistleblower.

Read more here:
Apples plan to scan images will allow governments into smartphones - The Guardian

Hillsu Debuts as a Public Crypto Exchange in the United States – StreetInsider.com

New York, New York--(Newsfile Corp. - October 21, 2021) - Hillsu is a trusted digital asset exchange that enables consumers to buy, sell, store and exchange digital assets. Hillsu's consumer platform is now available through the recently released Hillsu app.

"Today, Hillsu's vision - to connect the digital economy - reaches new heights, and we're excited to continue our momentum as a public exchange," said Leonard M. Adleman, CEO of Hillsu. "Our platform sits at the intersection of cryptocurrency exchange, payments, and safety. We look forward to accelerating the plan that is already underway: building out a broader partner network, expanding the access and utility of digital assets, and gaining momentum in a space that is continuing to grow."

The Hillsu platform has seen strong growth since its founding in 2020. Last month, the company announced that millions of users have been using the Hillsu app, only one year after its public launch.

Hillsu Integrates Bitcoin's Lightning Network

Hillsu has now integrated Bitcoin's Lightning Network after first announcing its plan to do so in April 2020.

Hillsu users can now use the Lightning Network, a Layer-2 scaling solution for bitcoin, for deposits and withdrawals. The feature is currently live on Hillsu's mobile app.

With the Lightning Network, the average cost of bitcoin transactions will come down to "less than 0.01 cents," Hillsu CEO Leonard M. Adleman told us in September, while the average transaction confirmation time will drop to "1-3 seconds," he said at the time.

The Lightning Network was launched in 2018. Several crypto exchanges currently support the network, including OKCoin, Bitfinex, and Bitstamp. Earlier this year, Kraken also announced its plan to integrate the network. Other U.S. based exchanges, such as Coinbase and Gemini, do not currently support the network.

The Encrypt Coin is Listing on Hillsu

The price of the Encrypt Coin, otherwise known as "ECPC," has continued to skyrocket, capturing fresh price highs; it rose more than 160% in the past week amid the overall momentum across the crypto markets.

The notion that a quantum computer might someday break bitcoin is quickly gaining ground. That is because quantum computers are becoming powerful enough to factor the very large numbers that underpin public-key cryptography. Within a decade, quantum computing is expected to be able to hack into cell phones, bank accounts, email addresses, and bitcoin wallets.

Quantum cryptography, also called quantum encryption, is used in Encrypt Coin; it applies the principles of quantum mechanics to encrypt messages in a way that they can never be read by anyone other than the intended recipient. It takes advantage of quantum's multiple states, coupled with its "no change theory," which means it cannot be unknowingly intercepted. The Encrypt Coin aims to become the safest digital asset in the future.

Hillsu has developed rapidly, and this cooperation has greatly boosted ECPC's exposure and promotion. Hillsu gains a better development platform and can seek more business opportunities inside and outside the industry, which has helped drive the price of ECPC upward.

Website: http://www.Hillsu.com

Media Contact
Contact: Leonard M
Company Name: Hillsu Technology Co., Ltd.
Website: http://hillsu.com
Email: cs@hillsu.com

To view the source version of this press release, please visit https://www.newsfilecorp.com/release/100457

Read the original:
Hillsu Debuts as a Public Crypto Exchange in the United States - StreetInsider.com