The Fundamental Flaw in Artificial Intelligence & Who Is Leading the AI Race? Artificial Human Intelligence vs. Real Machine Intelligence

Artificial intelligence is impacting every single aspect of our future, but it has a fundamental flaw that needs to be addressed.

The fundamental flaw of artificial intelligence is that it requires a skilled workforce. Apple is currently leading the AI race, having acquired 29 AI startups since 2010.

Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst. We just don't know. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it.

Stephen Hawking

Source: Reuters

Artificial intelligence is reduced to the following definitions:

1: a branch of computer science dealing with the simulation of intelligent behavior in computers; the capability of a machine to imitate intelligent human behavior;

2: an area of computer science that deals with giving machines the ability to seem like they have human intelligence;

3: the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings; systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience;

4: a system that perceives its environment and takes actions that maximize its chance of achieving its goals;

5: machines that mimic cognitive functions that humans associate with the human mind, such as learning and problem solving.

Source: Deloitte

The purpose of artificial intelligence is to enable computers and machines to perform intellectual tasks such as problem solving, decision making, perception, and understanding human communication.

In fact, today's AI is not copying human brains, minds, intelligence, cognition, or behavior. It is all about advanced hardware, software and dataware, information processing technology, big data collection, and big computing power. As rightly noted at the Financial Times Future Forum on The Impact of Artificial Intelligence on Business and Society: "Machines will outperform us not by copying us but by harnessing the combination of colossal quantities of data, massive processing power and remarkable algorithms."

They are advanced data-processing systems: weak or narrow AI applications, neural networks, machine learning, deep learning, multiple linear regression, RFM modeling, cognitive computing, predictive intelligence/analytics, language models, or knowledge graphs. Be it cognitive APIs (face, speech, text, etc.), the Microsoft Azure AI platform, web searches or self-driving transportation, GPT-3/4/5 or BERT, Microsoft's KG, Google's KG, or Diffbot, which trains its knowledge graph on the entire internet, encoding entities like people, places and objects into nodes, connected to other entities via edges.
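To make the nodes-and-edges idea concrete, here is a minimal sketch of a knowledge graph in Python. The entities, relations, and facts are purely illustrative and are not taken from any real product's graph.

```python
# Minimal sketch of the knowledge-graph idea: entities become nodes,
# relations become labelled edges. All facts below are illustrative only.
from collections import defaultdict

class KnowledgeGraph:
    def __init__(self):
        # adjacency map: subject -> list of (relation, object) pairs
        self.edges = defaultdict(list)

    def add_fact(self, subject, relation, obj):
        self.edges[subject].append((relation, obj))

    def neighbours(self, subject, relation=None):
        """Return objects linked to `subject`, optionally filtered by relation."""
        return [o for r, o in self.edges[subject] if relation is None or r == relation]

kg = KnowledgeGraph()
kg.add_fact("Tim Cook", "ceo_of", "Apple")
kg.add_fact("Apple", "headquartered_in", "Cupertino")

print(kg.neighbours("Tim Cook", "ceo_of"))   # ['Apple']
print(kg.neighbours("Apple"))                # ['Cupertino']
```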

Source: DZone

Today's "AI is meaningless" and "often just a fancy name for a computer program": software patches, like bug fixes, applied to legacy software or big databases to improve their functionality, security, usability, or performance.

Such machines are not yet self-aware and they cannot understand context, especially in language. Operationally, too, they are limited by the historical data from which they learn, and restricted to functioning within set parameters.

Lucy Colback

Today's artificial intelligence (AI) is limited. It still has a long way to go.

Artificial intelligence can be duped by scenarios it has never seen before.

With AI playing an increasingly major role in modern software and services, each major tech firm is battling to develop robust machine-learning technology for use in-house and to sell to the public via cloud services.

However, most of the tech companies are still struggling to unlock the real power of artificial intelligence.

Today's artificial intelligence is at best narrow. Narrow artificial intelligence is what we see all around us in computers today -- intelligent systems that have been taught or have learned how to carry out specific tasks without being explicitly programmed how to do so.

According to CB Insights, artificial intelligence companies are a prime acquisition target for companies looking to leverage AI tech without building it from scratch. In the race for AI, this is who's leading the charge.

The usual suspects are leading the race for AI: tech giants like Facebook, Amazon, Microsoft, Google, and Apple (FAMGA) have all been aggressively acquiring AI startups for the last decade.

Among FAMGA, Apple leads the way. With 29 total AI acquisitions since 2010, the company has made nearly twice as many acquisitions as second-place Google (the frontrunner from 2012 to 2016), with 15 acquisitions.

Apple and Google are followed by Microsoft with 13 acquisitions, Facebook with 12, and Amazon with 7.

Source: CB Insights

Apple's AI acquisition spree, which has helped it overtake Google in recent years, has been essential to the development of new iPhone features. For example, FaceID, the technology that allows users to unlock their iPhones by looking at them, stems from Apple's M&A moves in chips and computer vision, including the acquisition of AI company RealFace.

In fact, many of FAMGA's prominent products and services, such as Apple's Siri or Google's contributions to healthcare through DeepMind, came out of acquisitions of AI companies.

Other top acquirers include major tech players like Intel, Salesforce, Twitter, and IBM.

Source: Analytics Steps

Artificial Intelligence with robotics is poised to change our world from top to bottom, promising to help solve some of the worlds most pressing problems, from healthcare to economics to global crisis predictions and timely responses.

But while adopting, integrating, and implementing AI technologies, as a Deloitte report says, around 94% of enterprises face potential problems.

This article is not about the AI problems, such as the lack of technical know-how, data acquisition and storage, transfer learning, expensive workforce, ethical or legal challenges, big data addiction, computation speed, black box, narrow specialization, myths & expectations and risks, cognitive biases, or price factor. It is not our subject to discuss why small and mid-sized organizations struggle to adopt costly AI technologies, while big firms like Facebook, Apple, Microsoft, Google, Amazon, IBM allocate a separate budget for acquiring AI startups.

Instead, we focus on AI itself as the biggest issue, with three fundamental problems that demand fundamental solutions in terms of Real Human-Machine Intelligence, as outlined below.

First, there is AI philosophy, or rather the lack of any philosophy: a blind reliance on observations and empirical data or statistics, with processes, algorithms, and inductive inferences that need large volumes of big data as fuel to train a model for the special tasks of classification and prediction in very specific cases.

Second, today's AI is not a scientific AI that agrees with the rules, principles, and methods of science. Today's AI fails to deal with reality, with its causality and mentality, by strictly following a scientific method of inquiry that depends on the reciprocal interaction of generalizations (hypotheses, laws, theories, and models) and observable/experimental data. Most ML models tuned and tweaked to perform best in labs fail to work in real settings of the real world, across a wide range of AI applications from image recognition to natural language processing (NLP) to disease prediction, due to data shift, under-specification, or something else. The process used to build most ML models today cannot tell which models will work in the real world and which ones won't.

Third, there is the extreme anthropomorphism in today's AI/ML/DL: "attributing distinctively human-like feelings, mental states, and behavioral characteristics to inanimate objects, animals, religious figures, the environment, and technological artifacts (from computational artifacts to robots)". Anthropomorphism permeates AI research, development, and deployment, shaping the very language of computer scientists, designers, and programmers: "machine learning", which is not any human-like learning; "neural networks", which are not biological neural networks; "artificial intelligence", which is not any human-like intelligence. This entails the whole gamut of humanitarian issues, such as AI ethics and morality, responsibility and trust.

As a result, its trends are chaotic, sporadic and unsystematic, as the Gartner Hype Cycle for Artificial Intelligence 2021 demonstrates.

Source: Gartner

In consequence, there is no common definition of AI, and each one sees AI in its own way, mostly marked by an extreme anthropomorphism replacing real machine intelligence (RMI) with artificial human intelligence (AHI).

Source: Econolytics

Generally, there are two groups of ML/AI researchers, AI specialists and ML generalists.

Most AI folks, 99.999%, are narrow specialists involved with different aspects of Artificial Human Intelligence (AHI), where AI is about programming human brains/minds/intelligence/behavior into computing machines or robots.

Artificial Human Intelligence (AHI) is sometimes defined as the ability of a machine to perform cognitive functions we associate with human minds, such as perceiving, reasoning, learning, interacting with the environment, problem solving, and even exercising creativity.

The EC High-Level Expert Group on artificial intelligence has formulated its own specific behaviorist definition.

"Artificial intelligence (AI) refers to systems that display intelligent behaviour by analysing their environment and taking actions with some degree of autonomy to achieve specific goals."

"Artificial intelligence (AI) refers to systems designed by humans that, given a complex goal, act in the physical or digital world by perceiving their environment, interpreting the collected structured or unstructured data, reasoning on the knowledge derived from this data and deciding the best action(s) to take (according to predefined parameters) to achieve the given goal. AI systems can also be designed to learn to adapt their behaviour by analysing how the environment is affected by their previous actions."
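The perceive-interpret-decide-act cycle in that definition can be sketched as a toy loop. The "environment", goal, and actions below are invented for illustration; they only show the shape of the cycle, not any real system.

```python
# A toy perceive-interpret-decide-act loop, illustrating the EC-style definition.
# Environment, goal, and actions are invented for this sketch.
import random

def perceive(environment):
    # collect a raw (slightly noisy) reading from the environment
    return environment["temperature"] + random.uniform(-0.5, 0.5)

def decide(reading, goal=21.0):
    # reason over the interpreted data and pick the action that best serves the goal
    if reading < goal - 1:
        return "heat"
    if reading > goal + 1:
        return "cool"
    return "idle"

def act(environment, action):
    # the chosen action changes the environment, which the next perception reflects
    delta = {"heat": 0.5, "cool": -0.5, "idle": 0.0}[action]
    environment["temperature"] += delta

env = {"temperature": 17.0}
for step in range(10):
    action = decide(perceive(env))
    act(env, action)
    print(step, action, round(env["temperature"], 1))
```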

In all, the AHI field is fragmented.

Very few MI/AI researchers (the generalists), 0.0001%, know that real MI is about programming reality models and causal algorithms into computing machines or robots.

The first group lives on the anthropomorphic idea of AHI realized through ML, DL and NNs, dubbed narrow, weak, strong or general, superhuman or superintelligent AI, or simply Fake AI. Its machine learning models are built on the principle of statistical induction: inferring patterns from specific observations, doing statistical generalization from observations, or acquiring knowledge from experience.

This inductive approach is useful for building tools for specific tasks on well-defined inputs: analyzing satellite imagery, recommending movies, and detecting cancerous cells, for example. But induction is incapable of the general-purpose knowledge creation exemplified by the human mind. Humans develop general theories about the world, often about things of which we've had no direct experience.

Whereas induction implies that you can only know what you observe, many of our best ideas don't come from experience. Indeed, if they did, we could never solve novel problems or create novel things. Instead, we explain the inside of stars, bacteria, and electric fields; we create computers, build cities, and change nature: feats of human creativity and explanation, not mere statistical correlation and prediction.

The second group advances a true and real AI, which programs general theories about the world instead of cognitive functions and human actions, dubbed the real-world AI, or Transdisciplinary AI, the Trans-AI for short.

To summarize this hardest-ever problem: the philosophical and scientific definitions of AI are of two polar types, the subjective, human-dependent, and anthropomorphic versus the objective, scientific, and reality-related.

So, we have a critical distinction, AHI vs. Real AI, and should choose and follow the true way.

Today's narrow AI advances are due to computing brute force: the rise of big data combined with the emergence of powerful graphics processing units (GPUs) for complex computations and the re-emergence of a decades-old AI computation model, compute-hungry deep learning. Its proponents are now looking for a new equation for future AI innovation that includes the advent of small data, more efficient deep learning models, deep reasoning, new AI hardware such as neuromorphic chips or quantum computers, and progress toward unsupervised self-learning and transfer learning.

Ultimately, researchers hope to create future AI systems that do more than mimic human thought patterns like reasoning and perception; they see such systems performing an entirely new type of thinking. While this might not happen in the very next wave of AI innovation, it's in the sights of AI thought leaders.

Considering the existential value of AI Science and Technology, we must be absolutely honest and perfectly fair here.

Today's AI is hardly any real and true AI if it merely automates statistical generalization from observations, with data pattern matching, statistical correlations, and interpolations (predictions), as AI4EU is promoting.

Today's AI is narrow. Applying trained models to new challenges requires an immense amount of new data training, and time. We need AI that combines different forms of knowledge, unpacks causal relationships, and learns new things on its own.

Such a defective AI can only compute what it observes in its training data, for very special tasks on well-defined inputs: blindly translating text, analyzing satellite imagery, recommending movies, or detecting cancerous cells, for example. By its very design it is incapable of general-purpose knowledge creation, where the beauty of intelligence sits.

Their machine learning models are built on the principle of induction: inferring patterns from specific observations or acquiring knowledge from experience, focused on big data, where the more observations there are, the better the model. They have to feed their statistical algorithm millions of labelled pictures of cats, or millions of games of chess, to reach the best prediction accuracy.
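A minimal sketch of that inductive recipe, assuming scikit-learn is available: fit a model to labelled observations and let it interpolate for similar, unseen inputs. The toy "cat vs. not-cat" features are invented; real systems need orders of magnitude more labelled data.

```python
# Statistical induction in miniature: learn from labelled observations,
# then predict for unseen but similar inputs. Features are purely illustrative.
from sklearn.linear_model import LogisticRegression

# each row: [ear_pointiness, whisker_count] -- invented features for the sketch
X_train = [[0.9, 24], [0.8, 22], [0.1, 0], [0.2, 2]]
y_train = [1, 1, 0, 0]          # 1 = cat, 0 = not cat

model = LogisticRegression().fit(X_train, y_train)

# the model can only generalize within the patterns it has observed
print(model.predict([[0.85, 23]]))   # likely [1]
print(model.predict([[0.15, 1]]))    # likely [0]
```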

As the article The False Philosophy Plaguing AI wisely noted:

In fact, most of science involves the search for theories which explain the observed by the unobserved. We explain apples falling with gravitational fields, mountains with continental drift, disease transmission with germs. Meanwhile, current AI systems are constrained by what they observe, entirely unable to theorize about the unknown.

Again, no big data can lead you to a general principle, law, theory, or fundamental knowledge. That is the damnation of induction, be it mathematical or logical or experimental.

Due to the lack of a deep conceptual foundation, today's AI is closely associated with its supposed logical consequences: "AI will automate everything and put people out of work", "AI is totally a science-fiction-based technology", or "Robots will command the world". It is misrepresented in the top five myths about artificial intelligence.

That means we need the true, real and scientific AI, not AHI: the Real-World Machine Intelligence and Learning, or the Trans-AI, simulating and modeling reality, whether physical, mental or virtual, with its causality and mentality, as reflected in real superintelligence (RSI).

Last but not least, this transdisciplinary technology is what S. Hawking called effective and human-friendly AI, and what Google's founder dreams about: "AI would be the ultimate version of Google. The ultimate search engine would understand everything on the web. It would understand exactly what you wanted, and it would give you the right thing." (Larry Page)

Our approach to artificial intelligence is fundamentally wrong because we are not training and developing a skilled workforce capable of handling AI. We've thought about AI the wrong way, focusing on algorithms instead of finding solutions to make AI better and unbiased.

Artificial intelligence has to be optimized based on human preferences so that it solves real problems. Apple is currently leading the race, but it's a very competitive battle. American and Chinese tech companies are ahead of European tech companies when it comes to artificial intelligence.

A lot of work will need to be done to avoid the negative consequences of artificial intelligence, especially with the advent of artificial superintelligence. The sooner we begin regulating artificial intelligence, the better equipped we will be to mitigate and manage the dark side of artificial intelligence.

Transdisciplinary artificial intelligence as a responsible global man-machine intelligence has all potential to help solve several problems related to AI and consequently improve the lives of billions.


The impact of artificial intelligence on human society and …

Tzu Chi Med J. 2020 Oct-Dec; 32(4): 339-343.

Department of Medical Sociology and Social Work, College of Medicine, Chung Shan Medical University, Taichung, Taiwan

Received 2019 Dec 19; Revised 2020 Jan 30; Accepted 2020 Apr 9.

This is an open access journal, and articles are distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 License, which allows others to remix, tweak, and build upon the work non-commercially, as long as appropriate credit is given and the new creations are licensed under the identical terms.

Artificial intelligence (AI), known by some as the industrial revolution (IR) 4.0, is going to change not only the way we do things and how we relate to others, but also what we know about ourselves. This article will first examine what AI is, discuss its impact on industrial, social, and economic changes on humankind in the 21st century, and then propose a set of principles for AI bioethics. The IR 1.0, the IR of the 18th century, impelled a huge social change without directly complicating human relationships. Modern AI, however, has a tremendous impact on how we do things and also on the ways we relate to one another. Facing this challenge, new principles of AI bioethics must be considered and developed to provide guidelines for the AI technology to observe so that the world will benefit from the progress of this new intelligence.

KEYWORDS: Artificial intelligence, Bioethics, Principles of artificial intelligence bioethics

Artificial intelligence (AI) has many different definitions; some see it as the created technology that allows computers and machines to function intelligently. Some see it as the machine that replaces human labor to work for humans with a more effective and speedier result. Others see it as a system with the ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation [1].

Despite the different definitions, the common understanding of AI is that it is associated with machines and computers that help humankind solve problems and facilitate working processes. In short, it is an intelligence designed by humans and demonstrated by machines. The term AI is used to describe these functions of human-made tools that emulate the cognitive abilities of the natural intelligence of human minds [2].

Along with the rapid development of cybernetic technology in recent years, AI has been seen in almost all our life circles, and some of it may no longer be regarded as AI because it is so common in daily life that we are much used to it, such as optical character recognition or Siri (speech interpretation and recognition interface) for information searching on computers [3].

From the functions and abilities provided by AI, we can distinguish two different types. The first is weak AI, also known as narrow AI, which is designed to perform a narrow task, such as facial recognition, an Internet Siri search, or a self-driving car. Many currently existing systems that claim to use AI are likely operating as weak AI focused on a narrowly defined specific function. Although this weak AI seems to be helpful to human living, there are still some who think weak AI could be dangerous because it could cause disruptions in the electric grid or damage nuclear power plants when it malfunctions.

The long-term goal of many researchers is to create strong AI or artificial general intelligence (AGI), the speculative intelligence of a machine that has the capacity to understand or learn any intellectual task a human being can, thus assisting humans in unraveling the problems they confront. While narrow AI may outperform humans at specific tasks such as playing chess or solving equations, its effect is still weak. AGI, however, could outperform humans at nearly every cognitive task.

Strong AI is a different perception of AI: that it can be programmed to actually be a human mind, to be intelligent in whatever it is commanded to attempt, and even to have perception, beliefs, and other cognitive capacities that are normally only ascribed to humans [4].

In summary, we can see these different functions of AI [5,6]:

Automation: What makes a system or process function automatically

Machine learning and vision: The science of getting a computer to act through deep learning to predict and analyze, and to see through a camera, analog-to-digital conversion and digital signal processing

Natural language processing: The processing of human language by a computer program, such as spam detection or instantly converting one language to another to help humans communicate

Robotics: A field of engineering focusing on the design and manufacturing of cyborgs, the so-called machine man. They are used to perform tasks for humans' convenience or tasks too difficult or dangerous for humans to perform, and they can operate without stopping, such as in assembly lines

Self-driving car: Uses a combination of computer vision, image recognition, and deep learning to build automated control into a vehicle.

Is AI really needed in human society? It depends. If humans opt for a faster and more effective way to complete their work and to work constantly without taking a break, yes, it is. However, if humankind is satisfied with a natural way of living without excessive desires to conquer the order of nature, it is not. History tells us that humans are always looking for something faster, easier, more effective, and more convenient to finish the tasks they work on; therefore, the pressure for further development motivates humankind to look for new and better ways of doing things. Humankind, as homo sapiens, discovered that tools could ease many hardships of daily living, and through the tools they invented, humans could complete their work better, faster, smarter, and more effectively. The drive to invent new things became the incentive of human progress. We enjoy a much easier and more leisurely life today all because of the contributions of technology. Human society has been using tools since the beginning of civilization, and human progress depends on them. Humankind living in the 21st century does not have to work as hard as its forefathers did because it has new machines to work for it. All this is good and should be all right for AI, but a warning came in the early 20th century as human technology kept developing: Aldous Huxley warned in his book Brave New World that humans might step into a world in which we are creating a monster, or a superhuman, with the development of genetic technology.

Besides, up-to-date AI is breaking into the healthcare industry too, assisting doctors in diagnosing, finding the sources of diseases, suggesting various ways of treatment, performing surgery, and also predicting whether an illness is life-threatening [7]. A recent study by surgeons at the Children's National Medical Center in Washington successfully demonstrated surgery with an autonomous robot. The team supervised the robot as it performed soft-tissue surgery, stitching together a pig's bowel, and the robot finished the job better than a human surgeon, the team claimed [8,9]. This demonstrates that robotically assisted surgery can overcome the limitations of pre-existing minimally invasive surgical procedures and enhance the capacities of surgeons performing open surgery.

Above all, we see high-profile examples of AI including autonomous vehicles (such as drones and self-driving cars), medical diagnosis, creating art, playing games (such as chess or Go), search engines (such as Google Search), online assistants (such as Siri), image recognition in photographs, spam filtering, predicting flight delays, etc. All these have made human life much easier and more convenient, so much so that we are used to them and take them for granted. AI has become indispensable; although it is not absolutely needed, without it our world would be in chaos in many ways today.

Questions have been asked: With the progressive development of AI, human labor will no longer be needed as everything can be done mechanically. Will humans become lazier and eventually degrade to the stage that we return to our primitive form of being? The process of evolution takes eons to develop, so we will not notice the backsliding of humankind. However, what if AI becomes so powerful that it can program itself to be in charge and disobey the orders given by its master, humankind?

Let us see the negative impact the AI will have on human society [10,11]:

A huge social change that disrupts the way we live in the human community will occur. Humankind has to be industrious to make its living, but with the service of AI, we can just program a machine to do a thing for us without even lifting a tool. Human closeness will gradually diminish as AI replaces the need for people to meet face to face for idea exchange. AI will stand between people as personal gatherings will no longer be needed for communication

Unemployment is next, because many jobs will be replaced by machinery. Today, many automobile assembly lines have been filled with machinery and robots, forcing traditional workers to lose their jobs. Even in supermarkets, store clerks will not be needed anymore, as digital devices can take over human labor

Wealth inequality will be created, as the investors in AI will take up the major share of the earnings. The gap between the rich and the poor will widen. The so-called M-shape wealth distribution will become more obvious

New issues surface not only in a social sense but also in AI itself, as an AI trained to operate a given task can eventually take off to a stage where humans have no control, thus creating unanticipated problems and consequences. This refers to AI's capacity, after being loaded with all the needed algorithms, to automatically function on its own course, ignoring the commands given by the human controller

The human masters who create AI may invent something that is racially biased or egocentrically oriented to harm certain people or things. For instance, the United Nations has voted to limit the spread of nuclear power in fear of its indiscriminate use to destroy humankind or to target certain races or regions to achieve the goal of domination. It is possible for AI to target a certain race or some programmed objects to accomplish the command of destruction given by its programmers, thus creating world disaster.

There are, however, many positive impacts on humans as well, especially in the field of healthcare. AI gives computers the capacity to learn, reason, and apply logic. Scientists, medical researchers, clinicians, mathematicians, and engineers, when working together, can design an AI that is aimed at medical diagnosis and treatments, thus offering reliable and safe systems of health-care delivery. As health professionals and medical researchers endeavor to find new and efficient ways of treating diseases, not only can the digital computer assist in analysis, but robotic systems can also be created to perform some delicate medical procedures with precision. Here, we see the contribution of AI to health care [7,11]:

IBM's Watson computer has been used to diagnose with fascinating results. Loading the data into the computer will instantly yield AI's diagnosis. AI can also provide various ways of treatment for physicians to consider. The procedure is something like this: load the digital results of a physical examination into the computer, which will consider all possibilities, automatically diagnose whether or not the patient suffers from some deficiency or illness, and even suggest various kinds of available treatment.

Pets are recommended to senior citizens to ease their tension and reduce blood pressure, anxiety, and loneliness, and to increase social interaction. Now cyborgs have been suggested to accompany those lonely old folks, even to help do some house chores. Therapeutic robots and socially assistive robot technology help improve the quality of life for seniors and the physically challenged [12].

Human error in the workforce is inevitable and often costly; the greater the level of fatigue, the higher the risk of errors occurring. AI technology, however, does not suffer from fatigue or emotional distraction. It avoids errors and can accomplish duties faster and more accurately.

AI-based surgical procedures have become available for people to choose. Although this AI still needs to be operated by health professionals, it can complete the work with less damage to the body. The da Vinci surgical system, a robotic technology allowing surgeons to perform minimally invasive procedures, is available in most hospitals now. These systems enable a degree of precision and accuracy far greater than procedures done manually. The less invasive the surgery, the less trauma it causes, with less blood loss and less anxiety for the patients.

The first computed tomography scanners were introduced in 1971. The first magnetic resonance imaging (MRI) scan of the human body took place in 1977. By the early 2000s, cardiac MRI, body MRI, and fetal imaging became routine. The search continues for new algorithms to detect specific diseases as well as to analyze the results of scans [9]. All these are contributions of AI technology.

Virtual presence technology can enable remote diagnosis of diseases. The patient does not have to leave his or her bed; using a remote presence robot, doctors can check on patients without actually being there. Health professionals can move around and interact almost as effectively as if they were present. This allows specialists to assist patients who are unable to travel.

Despite all the positive promise that AI provides, human experts are still essential and necessary to design, program, and operate the AI and to prevent any unpredictable error from occurring. Beth Kindig, a San Francisco-based technology analyst with more than a decade of experience in analyzing private and public technology companies, published a free newsletter indicating that although AI holds potential promise for better medical diagnosis, human experts are still needed to avoid the misclassification of unknown diseases, because AI is not omnipotent and cannot solve all problems for humankind. There are times when AI meets an impasse, and to carry on its mission, it may just proceed indiscriminately, ending in creating more problems. Thus vigilant watch over AI's function cannot be neglected. This reminder is known as physician-in-the-loop [13].

The question of an ethical AI was consequently brought up by Elizabeth Gibney in her article published in Nature to caution against bias and possible societal harm [14]. The Neural Information Processing Systems (NeurIPS) conference in Vancouver, Canada, in 2020 brought up the ethical controversies of the application of AI technology, such as in predictive policing or facial recognition, where biased algorithms can end up hurting vulnerable populations [14]. For instance, such systems can be programmed to target a certain race or group as the probable suspects of crime or troublemakers.

Bioethics is a discipline that focuses on the relationships among living beings. Bioethics accentuates the good and the right in biospheres and can be categorized into at least three areas: bioethics in health settings, that is, the relationship between physicians and patients; bioethics in social settings, that is, the relationships among humankind; and bioethics in environmental settings, that is, the relationship between man and nature, including animal ethics, land ethics, ecological ethics, etc. All these are concerned with relationships within and among natural existences.

As AI arises, humans face a new challenge in terms of establishing a relationship toward something that is not natural in its own right. Bioethics normally discusses relationships within natural existences, either humankind or its environment, that are part of natural phenomena. But now men have to deal with something that is human-made, artificial, and unnatural, namely AI. Humans have created many things, yet never have humans had to think of how to ethically relate to their own creation. AI by itself is without feeling or personality. AI engineers have realized the importance of giving AI the ability to discern so that it will avoid any deviant activities causing unintended harm. From this perspective, we understand that AI can have a negative impact on humans and society; thus, a bioethics of AI becomes important to make sure that AI will not take off on its own by deviating from its originally designated purpose.

Stephen Hawking warned early in 2014 that the development of full AI could spell the end of the human race. He said that once humans develop AI, it may take off on its own and redesign itself at an ever-increasing rate [15]. Humans, who are limited by slow biological evolution, could not compete and would be superseded. In his book Superintelligence, Nick Bostrom gives an argument that AI will pose a threat to humankind. He argues that sufficiently intelligent AI can exhibit convergent behavior such as acquiring resources or protecting itself from being shut down, and it might harm humanity [16].

The question is: do we have to think of bioethics for humanity's own created product that bears no bio-vitality? Can a machine have a mind, consciousness, and mental states in exactly the same sense that human beings do? Can a machine be sentient and thus deserve certain rights? Can a machine intentionally cause harm? Regulations must be contemplated as a bioethical mandate for AI production.

Studies have shown that AI can reflect the very prejudices humans have tried to overcome. As AI becomes truly ubiquitous, it has a tremendous potential to positively impact all manner of life, from industry to employment to health care and even security. Addressing the risks associated with the technology, Janosch Delcker, Politico Europe's AI correspondent, said: "I don't think AI will ever be free of bias, at least not as long as we stick to machine learning as we know it today. What's crucially important, I believe, is to recognize that those biases exist and that policymakers try to mitigate them" [17]. The High-Level Expert Group on AI of the European Union presented Ethics Guidelines for Trustworthy AI in 2019 that suggested AI systems must be accountable, explainable, and unbiased. Three emphases are given:

Lawful: respecting all applicable laws and regulations

Ethical: respecting ethical principles and values

Robust: being adaptive, reliable, fair, and trustworthy from a technical perspective while taking into account its social environment [18].

Seven requirements are recommended [18]:

AI should not trample on human autonomy. People should not be manipulated or coerced by AI systems, and humans should be able to intervene or oversee every decision that the software makes

AI should be secure and accurate. It should not be easily compromised by external attacks, and it should be reasonably reliable

Personal data collected by AI systems should be secure and private. It should not be accessible to just anyone, and it should not be easily stolen

Data and algorithms used to create an AI system should be accessible, and the decisions made by the software should be understood and traced by human beings. In other words, operators should be able to explain the decisions their AI systems make

Services provided by AI should be available to all, regardless of age, gender, race, or other characteristics. Similarly, systems should not be biased along these lines

AI systems should be sustainable (i.e., they should be ecologically responsible) and enhance positive social change

AI systems should be auditable and covered by existing protections for corporate whistleblowers. The negative impacts of systems should be acknowledged and reported in advance.

From these guidelines, we can suggest that future AI must be equipped with human sensibility or "AI humanities". To accomplish this, AI researchers, manufacturers, and all industries must bear in mind that technology is there to serve, not to manipulate, humans and their society. Bostrom and Yudkowsky listed responsibility, transparency, auditability, incorruptibility, and predictability [19] as criteria for a computerized society to think about.

Nathan Strout, a reporter at Space and Intelligence System at Easter University, USA, reported just recently that the intelligence community is developing its own AI ethics. The Pentagon announced in February 2020 that it is in the process of adopting principles for using AI as guidelines for the department to follow while developing new AI tools and AI-enabled technologies. Ben Huebner, chief of the Office of the Director of National Intelligence's Civil Liberties, Privacy, and Transparency Office, said: "We're going to need to ensure that we have transparency and accountability in these structures as we use them. They have to be secure and resilient" [20]. Two themes have been suggested for the AI community to think more about: explainability and interpretability. Explainability is the concept of understanding how the analytic works, while interpretability is being able to understand a particular result produced by an analytic [20].

All the principles suggested by scholars for AI bioethics are well brought up. I gather different bioethical principles from all the related fields of bioethics to suggest four principles here for consideration to guide the future development of AI technology. We must, however, bear in mind that the main attention should still be placed on humans, because AI, after all, has been designed and manufactured by humans. AI proceeds with its work according to its algorithm. AI itself cannot empathize nor discern good from evil, and it may commit mistakes in the process. All the ethical quality of AI depends on its human designers; therefore, it is an AI bioethics and, at the same time, a trans-bioethics that bridges the human and material worlds. Here are the principles:

Beneficence: Beneficence means doing good, and here it refers to the purpose and functions of AI, which should benefit the whole of human life, society, and the universe. Any AI that would perform destructive work on the bio-universe, including all life forms, must be avoided and forbidden. AI scientists must understand that the reason for developing this technology is no other than to benefit human society as a whole, not any individual's personal gain. It should be altruistic, not egocentric, in nature

Value-upholding: This refers to AI's congruence with social values; in other words, the universal values that govern the order of the natural world must be observed. AI cannot be elevated above social and moral norms and must be bias-free. Scientific and technological developments must be for the enhancement of human well-being, which is the chief value AI must hold dearly as it progresses further

Lucidity: AI must be transparent, without hiding any secret agenda. It has to be easily comprehensible, detectable, incorruptible, and perceivable. AI technology should be made available for public auditing, testing, and review, and be subject to accountability standards. In high-stakes settings like diagnosing cancer from radiologic images, an algorithm that can't explain its work may pose an unacceptable risk. Thus, explainability and interpretability are absolutely required

Accountability: AI designers and developers must bear in mind that they carry a heavy responsibility on their shoulders for the outcome and impact of AI on the whole of human society and the universe. They must be accountable for whatever they manufacture and create.

AI is here to stay in our world, and we must try to enforce the AI bioethics of beneficence, value-upholding, lucidity, and accountability. Since AI is without a soul as it is, its bioethics must be transcendental to bridge the shortcoming of AI's inability to empathize. AI is a reality of the world. We must take note of what Joseph Weizenbaum, a pioneer of AI, said: we must not let computers make important decisions for us, because AI as a machine will never possess human qualities such as compassion and wisdom to morally discern and judge [10]. Bioethics is not a matter of calculation but a process of conscientization. Although AI designers can upload all information, data, and programs to AI to function as a human being, it is still a machine and a tool. AI will always remain AI, without authentic human feelings and the capacity to commiserate. Therefore, AI technology must be advanced with extreme caution. As Von der Leyen said in the White Paper on AI - A European approach to excellence and trust: "AI must serve people, and therefore, AI must always comply with people's rights. High-risk AI that potentially interferes with people's rights has to be tested and certified before it reaches our single market" [21].

There are no conflicts of interest.

12. Scoping study on the emerging use of Artificial Intelligence (AI) and robotics in social care, published by Skills for Care. [Last accessed on 2019 Aug 15]. Available from: www.skillsforcare.org.uk.


What is Artificial Intelligence (AI)? | Oracle

Despite AI's promise, many companies are not realizing the full potential of machine learning and other AI functions. Why? Ironically, it turns out that the issue is, in large part... people. Inefficient workflows can hold companies back from getting the full value of their AI implementations.

For example, data scientists can face challenges getting the resources and data they need to build machine learning models. They may have trouble collaborating with their teammates. And they have many different open source tools to manage, while application developers sometimes need to entirely recode models that data scientists develop before they can embed them into their applications.

With a growing list of open source AI tools, IT ends up spending more time supporting the data science teams by continuously updating their work environments. This issue is compounded by limited standardization across how data science teams like to work.

Finally, senior executives might not be able to visualize the full potential of their company's AI investments. Consequently, they don't lend enough sponsorship and resources to creating the collaborative and integrated ecosystem required for AI to be successful.


8 Examples of Artificial Intelligence in our Everyday Lives

Main Examples of Artificial Intelligence Takeaways:

The words artificial intelligence may seem like a far-off concept that has nothing to do with us. But the truth is that we encounter several examples of artificial intelligence in our daily lives.

From Netflix's movie recommendations to Amazon's Alexa, we now rely on various AI models without knowing it. In this post, we'll consider eight examples of how we're already using artificial intelligence.

Artificial intelligence is an expansive branch of computer science that focuses on building smart machines. Thanks to AI, these machines can learn from experience, adjust to new inputs, and perform human-like tasks. For example, chess-playing computers and self-driving cars rely heavily on natural language processing and deep learning to function.

American computer scientist John McCarthy coined the term artificial intelligence back in 1956. At the time, McCarthy only created the term to distinguish the AI field from cybernetics.

However, AI is more popular than ever today for several reasons.

Hollywood movies tend to depict artificial intelligence as a villainous technology that is destined to take over the world.

One example is the artificial superintelligence system, Skynet, from the film franchise Terminator. There's also VIKI, an AI supercomputer from the movie I, Robot, who deemed that humans can't be trusted with their own survival.

Hollywood has also depicted AI as superintelligent robots, like in the movies I Am Mother and Ex Machina.

However, the current AI technologies are not as sinister or quite as advanced. With that said, these depictions raise an essential question:

No, not exactly. Artificial intelligence and robotics are two entirely separate fields. Robotics is a technology branch that deals with physical robots: programmable machines designed to perform a series of tasks. On the other hand, AI involves developing programs to complete tasks that would otherwise require human intelligence. However, the two fields can overlap to create artificially intelligent robots.

Most robots are not artificially intelligent. For example, industrial robots are usually programmed to perform the same repetitive tasks. As a result, they typically have limited functionality.

However, introducing an AI algorithm to an industrial robot can enable it to perform more complex tasks. For instance, it can use a path-finding algorithm to navigate around a warehouse autonomously.
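As a hedged illustration of the path-finding idea, here is a breadth-first search over a toy warehouse grid. The grid, start, and goal are invented; a real robot would plan over richer maps with costs and dynamic obstacles, but the principle is the same.

```python
# Breadth-first search over a toy warehouse grid (0 = free cell, 1 = shelf).
from collections import deque

def find_path(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    queue, seen = deque([(start, [start])]), {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # no route around the shelves

warehouse = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]
print(find_path(warehouse, (0, 0), (2, 0)))  # shortest route around the shelves
```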

To understand how that's possible, we must address another question:

The four artificial intelligence types are reactive machines, limited memory, Theory of Mind, and self-aware. These AI types exist as a kind of hierarchy, where the simplest level requires basic functioning and the most advanced level is, well, all-knowing. Other subsets of AI include big data, machine learning, and natural language processing.

The simplest types of AI systems are reactive. They can neither learn from experiences nor form memories. Instead, reactive machines react to some inputs with some output.

Examples of artificial intelligence machines in this category include Google's AlphaGo and IBM's chess-playing supercomputer, Deep Blue.

Deep Blue can identify chess pieces and knows how each of them moves. While the machine can choose the most optimal move from several possibilities, it can't predict the opponent's moves.

A reactive machine doesn't rely on an internal concept of the world. Instead, it perceives the world directly and acts on what it sees.
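A minimal sketch of a purely reactive player: it scores only the moves available right now and keeps no memory of past positions or games. The toy "game" and evaluation function are invented for illustration; Deep Blue's real search and evaluation are vastly more sophisticated.

```python
# A purely reactive move chooser: evaluate the moves available now, pick the best,
# remember nothing. Game, moves, and scoring are invented for this sketch.
def reactive_move(position, legal_moves, evaluate):
    # no history, no learning -- just the best immediate evaluation
    return max(legal_moves, key=lambda move: evaluate(position, move))

# toy "game": the position is a number, a move adds its value, higher is better
position = 10
legal_moves = [-2, 1, 3]
evaluate = lambda pos, move: pos + move

print(reactive_move(position, legal_moves, evaluate))  # 3
```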

Limited memory refers to an AIs ability to store previous data and use it to make better predictions. In other words, these types of artificial intelligence can look at the recent past to make immediate decisions.

Note that limited memory is required to create every machine learning model. However, the model can get deployed as a reactive machine type.

The three significant examples of artificial intelligence in this category are:

Self-driving cars are limited memory AI that make immediate decisions using data from the recent past.

For example, self-driving cars use sensors to identify steep roads, traffic signals, and civilians crossing the streets. The vehicles can then use this information to make better driving decisions and avoid accidents.
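Here is a rough sketch of the "limited memory" idea: keep only a short rolling window of recent sensor readings and decide from it. The readings, window size, and thresholds are invented for illustration and are nothing like a production driving stack.

```python
# Limited memory in miniature: a rolling window of recent readings drives the decision.
from collections import deque

recent_distances = deque(maxlen=5)   # the "limited memory" of the last 5 readings

def on_sensor_reading(distance_m):
    recent_distances.append(distance_m)
    # brake if an obstacle is close and has been closing in over the remembered window
    if len(recent_distances) == recent_distances.maxlen and \
            recent_distances[-1] < 10 and recent_distances[-1] < recent_distances[0]:
        return "brake"
    return "cruise"

for d in [40, 30, 22, 15, 9, 6]:
    print(d, on_sensor_reading(d))
```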

In psychology, theory of mind refers to the ability to attribute mental states (beliefs, intent, desires, emotion, knowledge) to oneself and others. It's the fundamental reason we can have social interactions.

Unfortunately, we're yet to reach the Theory of Mind artificial intelligence type. Although voice assistants exhibit such capabilities, it's still a one-way relationship.

For example, you could yell angrily at Google Maps to take you in another direction. However, it'll neither show concern for your distress nor offer emotional support. Instead, the map application will return the same traffic report and ETA.

An AI system with Theory of Mind would understand that humans have thoughts, feelings, and expectations for how to be treated. That way, it can adjust its response accordingly.

The final step of AI development is to build self-aware machines that can form representations of themselves. It's an extension and advancement of the Theory of Mind AI.

A self-aware machine has human-level consciousness, with the ability to think, desire, and understand its feelings. At the moment, these types of artificial intelligence only exist in movies and comic book pages. Self-aware machines do not exist.

Although self-aware machines are still decades away, several artificial intelligence examples already exist in our everyday lives.

Several examples of artificial intelligence impact our lives today. These include FaceID on iPhones, the search algorithm on Google, and the recommendation algorithm on Netflix. You'll also find other examples of how AI is in use today on social media, digital assistants like Alexa, and ride-hailing apps such as Uber.

Virtual filters on Snapchat and the FaceID unlock on iPhones are two examples of AI applications today. While the former uses face detection technology to identify any face, the latter relies on face recognition.

So, how does it work?

The TrueDepth camera on Apple devices projects over 30,000 invisible dots to create a depth map of your face. It also captures an infrared image of the user's face.

After that, a machine learning algorithm compares the scan of your face with previously enrolled facial data. That way, it can determine whether to unlock the device or not.
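A very rough sketch of that comparison step: face-recognition systems typically map each scan to an embedding vector and unlock only when the new embedding is sufficiently similar to the enrolled one. The vectors and threshold below are made up, and Apple's actual pipeline is not public; this only illustrates the general technique.

```python
# Compare a new face embedding against an enrolled one via cosine similarity.
# Vectors and threshold are illustrative only.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def unlock(new_scan_embedding, enrolled_embedding, threshold=0.95):
    return cosine_similarity(new_scan_embedding, enrolled_embedding) >= threshold

enrolled = [0.12, 0.87, 0.45, 0.31]
tonight  = [0.11, 0.86, 0.47, 0.30]   # same user, slightly different conditions
stranger = [0.90, 0.10, 0.05, 0.70]

print(unlock(tonight, enrolled))    # True  (similar embedding)
print(unlock(stranger, enrolled))   # False (dissimilar embedding)
```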

According to Apple, FaceID automatically adapts to changes in the user's appearance. These include wearing cosmetic makeup, growing facial hair, or wearing hats, glasses, or contact lenses.

The Cupertino-based tech giant also stated that the chance of fooling FaceID is one in a million.

Several text editors today rely on artificial intelligence to provide the best writing experience.

For example, document editors use an NLP algorithm to identify incorrect grammar usage and suggest corrections. Besides auto-correction, some writing tools also provide readability and plagiarism grades.

However, editors such as INK took AI usage a bit further to provide specialized functions. It uses artificial intelligence to offer smart web content optimization recommendations.

Just recently, INK has released a study showing how its AI-powered writing platform can improve content relevance and help drive traffic to sites. You can read their full study here.

Social media platforms such as Facebook, Twitter, and Instagram rely heavily on artificial intelligence for various tasks.

Currently, these social media platforms use AI to personalize what you see on your feeds. The model identifies users' interests and recommends similar content to keep them engaged.

Also, researchers trained AI models to recognize hate keywords, phrases, and symbols in different languages. That way, the algorithm can swiftly take down social media posts that contain hate speech.

Other examples of artificial intelligence in social media include:

Plans for social media platforms involve using artificial intelligence to identify mental health problems. For example, an algorithm could analyze content posted and consumed to detect suicidal tendencies.

Getting answers to queries directly from a customer representative can be very time-consuming. That's where artificial intelligence comes in.

Computer scientists train chat robots or chatbots to impersonate the conversational styles of customer representatives using natural language processing.

Chatbots can now answer questions that require a detailed response in place of a specific yes or no answer. What's more, the bots can learn from previous bad ratings to ensure maximum customer satisfaction.

As a result, machines now perform basic tasks such as answering FAQs or taking and tracking orders.

Media streaming platforms such as Netflix, YouTube, and Spotify rely on smart recommendation systems that are powered by AI.

First, the system collects data on users' interests and behavior using various online activities. After that, machine learning and deep learning algorithms analyze the data to predict preferences.

That's why you'll always find movies that you're likely to watch in Netflix's recommendations. And you won't have to search any further.
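A minimal sketch of that recommendation idea: score unseen items for a user by leaning on ratings from users with similar tastes. The users, titles, ratings, and similarity measure are invented for illustration; production recommenders use far richer models.

```python
# Toy user-based recommendation: similar users' ratings vote for unseen titles.
ratings = {
    "ana":   {"Movie A": 5, "Movie B": 4, "Movie C": 1},
    "ben":   {"Movie A": 4, "Movie B": 5, "Movie D": 5},
    "carla": {"Movie C": 5, "Movie D": 2},
}

def similarity(u, v):
    shared = set(ratings[u]) & set(ratings[v])
    if not shared:
        return 0.0
    # simple agreement score: closer ratings on shared titles -> more similar users
    return sum(1 - abs(ratings[u][m] - ratings[v][m]) / 4 for m in shared) / len(shared)

def recommend(user):
    scores = {}
    for other in ratings:
        if other == user:
            continue
        w = similarity(user, other)
        for movie, r in ratings[other].items():
            if movie not in ratings[user]:
                scores[movie] = scores.get(movie, 0.0) + w * r
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("ana"))   # e.g. ['Movie D']
```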

Search algorithms ensure that the top results on the search engine result page (SERP) have the answers to our queries. But how does this happen?

Search companies usually include some type of quality control algorithm to recognize high-quality content. It then provides a list of search results that best answer the query and offers the best user experience.

Since search engines are made entirely of codes, they rely on natural language processing (NLP) technology to understand queries.

Last year, Google announced Bidirectional Encoder Representations from Transformers (BERT), an NLP pre-training technique. Now, the technology powers almost every English-based query on Google Search.
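To give a feel for what a BERT-style model does under the hood, here is a small, hedged example of masked-word prediction. It assumes the Hugging Face `transformers` library and the public `bert-base-uncased` checkpoint; Google's production search stack is, of course, not exposed this way.

```python
# Masked-language-model demo: BERT reads the whole sentence (both directions)
# and ranks likely fillers for the masked token.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

for candidate in unmasker("You can book a [MASK] to New York online."):
    print(candidate["token_str"], round(candidate["score"], 3))
```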

In October 2011, Apple's Siri became the first digital assistant to be standard on a smartphone. However, voice assistants have come a long way since then.

Today, Google Assistant incorporates advanced NLP and ML to become well-versed in human language. Not only does it understand complex commands, but it also provides satisfactory outputs.

Also, digital assistants now have adaptive capabilities for analyzing user preferences, habits, and schedules. That way, they can organize and plan actions such as reminders, prompts, and schedules.

Various smart home devices now use AI applications to conserve energy.

For example, smart thermostats such as Nest use our daily habits and heating/cooling preferences to adjust home temperatures. Likewise, smart refrigerators can create shopping lists based on what's absent from the fridge's shelves.

The way we use artificial intelligence at home is still evolving. More AI solutions now analyze human behavior and function accordingly.

We encounter AI daily, whether we're surfing the internet or listening to music on Spotify.

Other examples of artificial intelligence are visible in smart email apps, e-commerce, smart keyboard apps, as well as banking and finance. Artificial intelligence now plays a significant role in our decisions and lifestyle.

The media may have portrayed AI as a competitor to human workers or a concept that'll eventually take over the world. But that's not the case.

Instead, artificial intelligence is helping humans become more productive and helping us live a better life.


AI in Robotics: Robotics and Artificial Intelligence 2021 – Datamation

Artificial intelligence (AI) is driving the robotics market into various areas, including mobile robots on the factory floor, robots that can do a large number of tasks rather than being specialized on one, and robots that can stay in control of inventory levels as well as fetching orders for delivery.

Such advanced functionality has raised the complexity of robotics. Hence the need for AI.

Artificial intelligence provides the ability to monitor many parameters in real-time and make decisions. For example, in an inventory robot, the machine has to be able to know its own location, the location of all stock, know stock levels, work out the sequence to go and retrieve items for orders, know the location of other robots on the floor, be able to navigate around the site, know when a human is near and change course, take deliveries to shipping, keep track of everything, and more.

The mobile robot also has to interoperate with various shop floor systems, computer numerical control (CNC) equipment, and other industrial systems. AI helps all those disparate systems work together seamlessly by being able to process their various inputs in real-time and coordinate action.

The autonomous robotics market alone is worth around $103 billion this year, according to Rob Enderle, an analyst at Enderle Group. He predicts that it will more than double by 2025 to $210 billion.

"It will only go vertical from there," Enderle said.

That's only one portion of the market. Another hot area is robotic process automation (RPA). It, too, is being integrated with AI to deal with high-volume, repeatable tasks. Handing these tasks over to robots reduces labor costs, streamlines workflows, and accelerates assembly processes. Software can be written, for example, to take care of routine queries, calculations, and record keeping.
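A toy illustration of that software pattern (not any particular RPA product): answer the routine requests with simple rules, log every case, and escalate anything unrecognized to a person.

    # Toy RPA-style handler: routine queries are answered and logged, exceptions are escalated.
    import csv
    from datetime import datetime

    def handle(request):
        text = request.lower()
        if "balance" in text:
            return "Your current balance is 120.50"          # a real bot would do a lookup here
        if "hours" in text:
            return "We are open 9:00-17:00, Monday to Friday"
        return None                                           # not routine -> escalate

    def process(requests, logfile="rpa_log.csv"):
        with open(logfile, "a", newline="") as f:
            writer = csv.writer(f)
            for req in requests:
                answer = handle(req)
                status = "auto" if answer else "escalated"
                writer.writerow([datetime.now().isoformat(), req, status])
                print(req, "->", answer or "forwarded to a human agent")

    process(["What are your hours?", "Please review my contract"])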

Historically, two different teams were needed: one for robotics and another for factory automation. The robotics team consists of specialized technicians with their own programming language to deal with the complex kinematics of multi-axis robots. Factory automation engineers, on the other hand, use programmable logic controllers (PLCs) and shop floor systems that utilize different programming languages. But software is now on the market that brings these two worlds together.

Further, better software and more sophisticated hardware have opened the door to a whole new breed of robot. While basic models operate on two axes, the latest AI-equipped machines are capable of movement on six axes. They can be programmed either to carry out one task over and over with high accuracy and speed, or to execute complex tasks, such as coating or machining intricate components.

See more: Artificial Intelligence Market

Honda's ASIMO has become something of a celebrity. This advanced humanoid robot has been programmed to walk like a human, maintain balance, and do backflips.

But now AI is being used to advance its capabilities with an eventual view toward autonomous motion.

"The difficulty is no longer building the robot but training it to deal with unstructured environments, like roads, open areas, and building interiors," Enderle said. "They are complex systems with massive numbers of actuators and sensors to move and perceive what is around them."

Sight Machine, the developer of a manufacturing data platform, has partnered with Nissan to use AI to perform anomaly detection on 300 robots working on an automated final assembly process.

This system provides predictions and root-cause analysis for downtime.

See more: Artificial Intelligence: Current and Future Trends

Siemens and AUTOParkit have formed a partnership to bring parking into the 21st century.

Using Siemens automation controls with AI, the AUTOParkit solution provides a safe valet service without the valet.

This fully automated parking solution can achieve 2:1 efficiency over a conventional parking approach, AUTOParkit says. It reduces parking-related fuel consumption by 83% and carbon emissions by 82%.

In such a complex system, specialized vehicle-specific hardware and software work together to provide a smooth, seamless parking experience that is far faster than traditional parking. Siemens controls use AI to pull it all together.

Kawasaki has a large offering of robots that are primarily used in fixed installations. But now it is working on robotic mobility and that takes AI.

"For stationary robots to work seamlessly with mobile robots, it is essential that they can exchange information accurately and without failure," said Samir Patel, senior director of robotics engineering, Kawasaki Robotics USA.

"To meet such integration requirements, Kawasaki robot controllers offer numerous options, including EtherNet TCP/IP, EtherNet IP, EtherCAT, PROFIBUS, PROFINET and DeviceNet. These options not only allow our robots to communicate with mobile robots, but also allow communication with supervisory servers, PLCs, vision systems, sensors, and other devices."

With so many data sources to communicate with and instantaneous response needed to provide operational efficiency and maintain safety, AI is needed.

"Over time, each robot accumulates data, such as joint load, speed, temperature, and cycle count, which periodically gets transferred to the network server," Patel said. "In turn, the server, running an application such as Kawasaki's Trend Manager, can analyze the data for performance and failure prediction."
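In spirit, that kind of trend analysis can be sketched as a simple statistical check on the accumulated telemetry: flag any new reading that sits far outside a signal's historical range. The numbers below are invented, and this is an illustration of the general approach rather than Kawasaki's Trend Manager.

    # Toy failure-prediction check: alert when a reading is more than 3 standard deviations
    # from that signal's historical mean.
    import statistics

    history = {"joint_2_temp_c": [41.0, 40.5, 42.1, 41.7, 40.9, 41.3, 42.0, 41.5]}

    def check(signal_name, new_value, threshold=3.0):
        data = history[signal_name]
        mean = statistics.mean(data)
        std = statistics.stdev(data)
        z = abs(new_value - mean) / std if std else 0.0
        return ("ALERT" if z > threshold else "ok", round(z, 1))

    print(check("joint_2_temp_c", 41.8))   # ('ok', ...)
    print(check("joint_2_temp_c", 49.0))   # ('ALERT', ...) -- schedule maintenance before failure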

Sight Machine, in close cooperation with Komatsu, has developed a system that can rapidly analyze 500 million data points from 600 welding robots.

The AI-based system can provide early warning of potential downtime and other welding faults.

See more: Top Performing Artificial Intelligence Companies

Read this article:
AI in Robotics: Robotics and Artificial Intelligence 2021 - Datamation

Former Pentagon official says China has won artificial intelligence battle | TheHill – The Hill

The Pentagon's former software chief resigned and said that China is headed toward global dominance in artificial intelligence due to the relatively slow pace of innovation in the United States.

"We have no competing fighting chance against China in 15 to 20 years. Right now, its already a done deal; it is already over in my opinion," the Pentagon's former software chief, Nick Chaillan, told the Financial Times, adding that some of the U.S.'s cyber defense systems wereat "kindergarten level."

Chaillan announced his resignation last month as an act of protest against the United States' slow pace of tech development. Chaillan said America's failure to aggressively pursue AI capacity was putting the nation at risk, according to Reuters.

Western intelligence reports predict that China will dominate many emerging technologies, such as AI, synthetic biology and genetics, in the next decade, Reuters reported.

Chaillan also attributed the sluggish pace to companies like Google hesitating to work with the government on AI and ongoing debates about AI ethics in the U.S., while China pushes forward without consideration for the potential ethical consequences.

"Google is proud to work with the U.S. government, and we have many projects underway today, including with the Department of Defense, Department of Energy, and the NIH," a Google Cloud spokesperson said in a statement to The Hill. "We are committed to continuing to partner with the U.S. government, including the military, both on specific projects and on broader policy around AI that are consistent with our principles."

Meanwhile, Secretary of Defense Lloyd J. Austin III in July recognized that "China is our pacing challenge" when it comes to AI development.

"Were going to compete to win, but were going to do it the right way,"Austin said."Were not going to cut corners on safety, security, or ethics."

In a LinkedIn post announcing his departure on Sept. 2, Chaillan insisted that the U.S. could not "afford to be behind."

"If the US cant match the booming, hardworking population in China, then we have to win by being smarter, more efficient, and forward-leaning through agility, rapid prototyping and innovation. We have to be ahead and lead."

Chaillan was also critical of the Department of Defense and its decisions to put people with limited IT experience in leadership roles over software programs.

"The DoD should stop pretending they want industry folks to come and help if they are not going to let them do the work. While we wasted time in bureaucracy, our adversaries moved further ahead," Chaillan said.

"I will always feel some guilt or regret in leaving. I have this sinking feeling that I am letting our warfighters, the teams, and my children down by not continuing to fight for a better outcome 20 years from now,"Chaillan added of his departure.


Link:
Former Pentagon official says China has won artificial intelligence battle | TheHill - The Hill

NASA to Use Artificial Intelligence to Discover Rogue Exoplanets Wandering the Galaxy – Newsweek

Researchers have developed a new method to detect rogue planets outside the solar system, worlds that wander their galaxies alone without a parent star.

The technique, devised by NASA Goddard Space Flight Center scientist Richard K. Barry, unites astronomy's future, in the form of the soon-to-launch Nancy Grace Roman Space Telescope, with its past: a method used by 19th-century astronomers to measure distances.

The Contemporaneous LEnsing Parallax and Autonomous TRansient Assay (CLEoPATRA) mission will use parallax to measure distances, but the method will be bolstered by artificial intelligence (AI) developed by Dr. Greg Olmschenk.

Olmschenk's program, RApid Machine learnEd Triage (RAMjET), will learn patterns from provided examples, filtering out useless information and ensuring that, of the millions of stars observed by CLEoPATRA per hour, only useful information is transmitted back to Earth.
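The triage idea can be sketched in a few lines, under the simplifying assumption that an interesting star is one whose brightness departs strongly from a flat baseline; this is only an illustration of the concept, not the RAMjET code.

    # Toy onboard triage: keep only light curves whose peak departs strongly from the baseline.
    import statistics

    def worth_transmitting(light_curve, threshold=5.0):
        baseline = statistics.median(light_curve)
        deviations = [abs(b - baseline) for b in light_curve]
        mad = statistics.median(deviations)        # robust estimate of the normal scatter
        peak = max(deviations)
        return mad > 0 and peak / mad > threshold

    flat_star = [100.0, 100.2, 99.9, 100.1, 100.0, 99.8, 100.1]
    lensed_star = [100.0, 100.1, 103.5, 112.0, 104.2, 100.2, 100.0]   # brief brightening

    print(worth_transmitting(flat_star))     # False -- discarded on board
    print(worth_transmitting(lensed_star))   # True  -- candidate event, worth sending to Earth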

Recent research published in The Astronomical Journal suggests that exoplanets that exist in the Universe without a parent star could be more common than stars themselves, but until now spotting them has been difficult.

"The difficulty with detecting rogue planets is that they emit essentially no light. Since detecting light from an object is the main tool astronomers use to find objects, rogue planets have been elusive," the author of that paper and Thomas Jefferson professor for Discovery and Space Exploration at Ohio State University, Scott Gaudi, told Newsweek.

The most powerful method of spotting exoplanets (planets outside the solar system) is through the dips in light they cause as they pass in front of their parent stars. This transit method has resulted in the discovery of thousands of worlds added to the exoplanet catalog, but it doesn't work for planets that don't have host stars.

One way to spot rogue exoplanets is to wait until they cross between a distant Milky Way star and our telescopes here on Earth, intercepting the light from that star. When this happens, a phenomenon called gravitational lensing, the bending of light caused by a massive object, actually causes the light from that star to brighten.

CLEoPATRA will exploit this brightening, which is called microlensing when it involves a lensing object of small mass like a planet, and use parallax to measure the distance to these rogue worlds.

"Roman [Space Telescope] will use a technique called gravitational microlensing to find rogue planets, which relies only on the gravity and thus the mass of the planet, and doesn't require detecting any light from the planet," Gaudi said

As microlensing events are both unpredictable and exceedingly rare, a telescope must monitor hundreds of millions of stars nearly continuously to spot them. And that takes a wide-field space telescope like the Nancy Grace Roman Space Telescope.

Parallax is the apparent shift in the position of an object when it is observed from different vantage points. The most familiar example of this is holding a finger close to our face and looking at it with one eye, and then switching to the other. The finger will look like it has moved.

Astronomers in the 19th century used this phenomenon to measure the distances to close stars by observing how their positions shifted according to the background of more distant stellar objects.
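With made-up numbers, the 19th-century calculation amounts to simple trigonometry: distance equals the observing baseline divided by the tangent of the measured angular shift.

    # Classic trigonometric parallax with invented numbers: distance = baseline / tan(shift).
    import math

    AU_KM = 149_597_870.7        # astronomical unit in kilometres

    baseline_au = 2.0            # e.g. observations six months apart, opposite sides of Earth's orbit
    shift_arcsec = 0.1           # apparent shift against distant background stars

    shift_rad = math.radians(shift_arcsec / 3600.0)
    distance_km = (baseline_au * AU_KM) / math.tan(shift_rad)

    print(f"{distance_km:.3e} km")   # roughly 6e14 km, about 20 parsecs for this made-up shift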

Using parallax in conjunction with microlensing events works slightly differently, with separated observers relying on precisely synchronized clocks to measure the differences in time between their observations of the event. This time delay then allows observers to calculate the distance to the lensing exoplanet as well as its mass and size.

"CLEoPATRA would be at a great distance from the principal observatory, either Roman or a telescope on Earth," Barry said in a NASA press release. "The parallax signal should then permit us to calculate quite precise masses for these objects, thereby increasing scientific return."

The benefit of spotting rogue exoplanets isn't just increasing the already burgeoning exoplanet catalog. Exploring these worlds could also teach us more about how the planets in our solar system, including Earth, formed and evolved.

"We want to find multiple free-floating planets and try to obtain information about their masses, so we can understand what is common or not common at all," research assistant at Goddard and Ph.D. student at the Catholic University of America in Washington, Stela Ishitani Silva, said. "Obtaining the mass is important to understanding their planetary development."

If all goes according to plan, CLEoPATRA will launch on a Mars mission around the same time as the launch of the Nancy Grace Roman Space Telescope, currently set for the mid-2020s.

"CLEoPATRA will permit us to estimate many high-precision masses for new planets detected by Roman and PRIME," said Barry. "And it may allow us to capture or estimate the actual mass of a free-floating planet for the first timenever been done before. So cool, and so exciting. Really, it's a new golden age for astronomy right now, and I'm just very excited about it."

Go here to see the original:
NASA to Use Artificial Intelligence to Discover Rogue Exoplanets Wandering the Galaxy - Newsweek

This Week in Washington IP: Ethics in Artificial Intelligence, Challenges with Carbon Removal and the USPTO Hosts the 2021 Hispanic Innovation and…

This week in Washington IP news, Congress is largely quiet except for a hearing of the House Artificial Intelligence Task Force regarding ethical frameworks for developing artificial intelligence (AI) applications in various industries. Elsewhere in D.C., the Center for Data Innovation explores data-driven approaches to addressing e-commerce counterfeits, The Brookings Institution hosts a conversation with Susteon's Shantanu Agarwal on the challenges of carbon removal tech, and the U.S. Patent and Trademark Office kicks off the 2021 Hispanic Innovation and Entrepreneurship Program with multiple fireside chats and a panel on building networks and resources available to the community of Hispanic innovators.

U.S. Patent and Trademark Office

Trademark Basics Boot Camp, Module 2: Registration Process Overview

At 2:00 PM on Tuesday, online video webinar.

This workshop, the second in the USPTO's eight-part Trademark Basics Boot Camp series, is designed to teach small business owners and entrepreneurs about different aspects of the trademark registration process. Topics covered in this workshop include trademark basics, application workflow, timeline overview and post-registration workflow overview.

House Task Force on Artificial Intelligence

Beyond I, Robot: Ethics, Artificial Intelligence, and the Digital Age

At 12:00 PM on Wednesday, online video webinar.

Ethics in robotics and artificial intelligence systems draws much of its foundation from the three laws of robotics developed by famed science fiction writer Isaac Asimov, which are predicated on the idea that AI systems are always meant to serve humans and never to harm them. With many AI technologies now upon us, several organizations have been developing ethical frameworks for AI applications that rely upon constant evaluation by human decision-makers and great transparency about the underlying goals guiding the development of particular algorithms. The witness panel for this hearing will include Meredith Broussard, Associate Professor, Arthur L. Carter Journalism Institute, New York University; Meg King, Director, Science and Technology Innovation Program, The Wilson Center; Miriam Vogel, President and CEO, EqualAI; Jeffrey Yong, Principal Advisor, Financial Stability Institute, Bank for International Settlements; and Aaron Cooper, Vice President for Global Policy, BSA The Software Alliance.

U.S. Patent and Trademark Office

2021 Hispanic Innovation and Entrepreneurship Program

At 1:00 PM on Wednesday, online video webinar.

This event features various leaders from the Hispanic community in innovation and entrepreneurship and offers an overview of the innovation resources available to that community. It will feature a pair of fireside chats with Alejandra Y. Castillo, Assistant Secretary of Commerce for Economic Development; Nestor Ramirez, Technology Center Director, USPTO; Leandro Margulis, Inventor of Durable Radio-Frequency Identification (RFID) Device; and Marivelisse Santiago-Cordero, Senior Advisor to the Deputy Commissioner for Patents, USPTO. This event will also feature a discussion about building networks and finding mentors with a panel including Jennifer Garcia, COO, Latin Business Action Network, Stanford Latino Entrepreneurship Initiative; Olga Carmargo, CEO and Founder, FARO Associates LLC and Board Chair, Hispanic Alliance for Career Enhancement; Susana G. Baumann, President and CEO and Editor-in-Chief, Latinas in Business Inc.; Tito Leal, CFO, Prosperity Lab; and moderated by Juan Valentin, Education Program Advisor, Office of Education, USPTO.

The Brookings Institution

Carbon Removal Innovations and Their Challenges: A Conversation With Susteon President Shantanu Agarwal

At 2:00 PM on Wednesday, online video webinar.

Carbon removal technologies that can sequester airborne sources of carbon have the potential to play a critical role in mitigating climate change, but several promising carbon removal innovations remain stuck in basic research phases far from the commercialization pipeline. This event, part of The Brookings Institution's Reimagining Modern-Day Markets and Regulations series, will feature a fireside chat with Shantanu Agarwal, Co-Founder and President of climate impact technology firm Susteon Inc. Moderating the discussion with Agarwal will be Sanjay Patnaik, Director, Center on Regulations and Markets, and the Bernard L. Schwartz Chair in Economic Policy Development, Fellow, Economic Studies.

Center for Data Innovation

A Data-Driven Approach to Combatting Counterfeit Goods in E-Commerce

At 1:00 PM on Thursday, online video webinar.

E-commerce has proved to be a boon to counterfeiters looking to exploit popular brands and fool American consumers into purchasing knockoff goods. This event will explore a new report issued by the National Intellectual Property Rights Center discussing the marketplace response to best practices developed by public and private entities looking to stem the tide of counterfeits sold via online platforms. This event will feature a discussion with a panel including Matthew C. Allen, Director, National Intellectual Property Rights Coordination Center; Christa Brozowski, Senior Manager of Public Policy, Amazon; Sara Decker, Director of Federal Government Affairs, Walmart; Piotr Stryszowski, Senior Economist, OECD; and moderated by Daniel Castro, Director, Center for Data Innovation.

U.S. Patent and Trademark Office

The Path to a Patent, Part II: Drafting Provisional Patent Applications

At 2:00 PM on Thursday, online video webinar.

This workshop, the second in the USPTO's eight-part Path to a Patent series, is designed to teach prospective patent applicants about the key differences between provisional and nonprovisional patent applications. Topics covered include filing requirements, fees and different ways to file a provisional patent application.

Hudson Institute

Powering Innovation: Advanced Batteries and Critical Supply Chains

At 2:30 PM on Thursday, online video webinar.

Both the United States and China have been taking action to secure supply chains for certain products and components that are critical to national security, advanced batteries being one of the sectors identified by both nations as a supply chain priority. Advanced battery technologies have potential applications in electric vehicles, which many governments have been subsidizing to meet climate and emissions goals, as well as in national defense by enabling distributed operations in battlefield scenarios. The first panel for this event, discussing distributed operations and advanced batteries, will include Heather Penny, Senior Fellow, Mitchell Institute for Aerospace Studies; LTG Eric Wesley (Ret.), Former Deputy Commanding General, Army Futures Command, and Director, Futures and Concepts Center; Bryan Clark, Senior Fellow and Director, Center for Defense Concepts and Technology, Hudson Institute; and moderated by Nadia Schadlow, Senior Fellow, Hudson Institute. The second panel, discussing the U.S. government's role in promoting innovation, will include the Honorable Ellen Lord, Former Undersecretary of Defense for Acquisition and Sustainment; the Honorable Kimberly Reed, Former Chairman of the Board of Directors, President and CEO, U.S. Export-Import Bank; Mike Brown, Director, Defense Innovation Unit, U.S. Department of Defense; and moderated by Arthur Herman, Senior Fellow and Director, Quantum Alliance Initiative, Hudson Institute. The third panel, discussing China, supply chains and economic coercion, will include Anthony Vinci, Adjunct Senior Fellow, CNAS; Pavneet Singh, Non-Resident Senior Fellow, The Brookings Institution; John Lee, Senior Fellow, Hudson Institute; and moderated by Nadia Schadlow, Senior Fellow, Hudson Institute.

Information Technology & Innovation Foundation

Can GDPR's Automated Decision Opt-Out Be Improved Without Harming Users?

At 10:00 AM on Friday, online video webinar.

In the nearly two years that have elapsed since the UK government completed its Brexit transition out of the European Union, the country has been charting its own course on legal matters, and in recent weeks the UK government has been eyeing changes to Article 22 of the country's General Data Protection Regulation (GDPR). Article 22 of the GDPR governs restrictions on the automated processing of decisions concerning a data subject, and the UK government's moves have opened a discussion on the feasibility of changing protections against automated decision-making processes. This event will feature a discussion with a panel including Omar Tene, Former Vice President, International Association of Privacy Professionals; Isabelle de Pauw, Head of Data Rights, Domestic Data Protection and Data Rights Team, Department for Digital, Culture, Media and Sport; Chris Elwell-Sutton, Senior Privacy Counsel and Data Protection Officer, CIBC Capital Markets; Andrew Orlowski, Technology Commentator, Daily Telegraph; Kristian Stout, Director of Innovation Policy, International Center for Law & Economics; and moderated by Benjamin Mueller, Senior Policy Analyst, Center for Data Innovation.

U.S. Patent and Trademark Office

Attend the Trademark Public Advisory Committee Quarterly Meeting

At 10:00 AM on Friday, online video webinar.

On Friday morning, the Trademark Public Advisory Committee (TPAC) of the USPTO will convene its quarterly meeting to discuss issues related to the agency's trademark activities, including a review of policies, goals, budget, performance and user fees.

Image Source: Deposit Photos; Author: sborisov; Image ID: 30853945

Read the original post:
This Week in Washington IP: Ethics in Artificial Intelligence, Challenges with Carbon Removal and the USPTO Hosts the 2021 Hispanic Innovation and...

IBM and Raytheon Technologies to Collaborate on Artificial Intelligence, Cryptography and Quantum Technologies – HPCwire

ARMONK, N.Y., Oct. 11, 2021 - IBM and Raytheon Technologies will jointly develop advanced artificial intelligence, cryptographic and quantum solutions for the aerospace, defense and intelligence industries, including the federal government, as part of a strategic collaboration agreement the companies announced today.

Artificial intelligence and quantum technologies give aerospace and government customers the ability to design systems more quickly, better secure their communications networks and improve decision-making processes. By combining IBM's breakthrough commercial research with Raytheon Technologies' own research, plus aerospace and defense expertise, the companies will be able to crack once-unsolvable challenges.

"The rapid advancement of quantum computing and its exponential capabilities has spawned one of the greatest technological races in recent history, one that demands unprecedented agility and speed," said Dario Gil, senior vice president, IBM, and director of Research. "Our new collaboration with Raytheon Technologies will be a catalyst in advancing these state-of-the-art technologies, combining their expertise in aerospace, defense and intelligence with IBM's next-generation technologies to make discovery faster, and the scope of that discovery larger than ever."

In addition to artificial intelligence and quantum, the companies will jointly research and develop advanced cryptographic technologies that lie at the heart of some of the toughest problems faced by the aerospace industry and government agencies.

"Take something as fundamental as encrypted communications," said Mark E. Russell, Raytheon Technologies' chief technology officer. "As computing and quantum technologies advance, existing cybersecurity and cryptography methods are at risk of becoming vulnerable. IBM and Raytheon Technologies will now be able to collaboratively help customers maintain secure communications and defend their networks better than previously possible."

The companies are building a technical collaboration team to quickly insert IBM's commercial technologies into active aerospace, defense and intelligence programs. The same team will also identify promising technologies for jointly developing long-term system solutions by investing research dollars and talent.

About IBM

IBM is a leading global provider of hybrid cloud, AI, and business services, helping clients in more than 175 countries capitalize on insights from their data, streamline business processes, reduce costs and gain a competitive edge in their industries. Nearly 3,000 government and corporate entities in critical infrastructure areas such as financial services, telecommunications and healthcare rely on IBM's hybrid cloud platform and Red Hat OpenShift to effect their digital transformations quickly, efficiently, and securely. IBM's breakthrough innovations in AI, quantum computing, industry-specific cloud solutions and business services deliver open and flexible options to our clients. All of this is backed by IBM's commitment to trust, transparency, responsibility, inclusivity, and service. For more information, visit www.ibm.com.

About Raytheon Technologies

Raytheon Technologies Corporation is an aerospace and defense company that provides advanced systems and services for commercial, military and government customers worldwide. With four industry-leading businesses (Collins Aerospace Systems, Pratt & Whitney, Raytheon Intelligence & Space and Raytheon Missiles & Defense), the company delivers solutions that push the boundaries in avionics, cybersecurity, directed energy, electric propulsion, hypersonics, and quantum physics. The company, formed in 2020 through the combination of Raytheon Company and the United Technologies Corporation aerospace businesses, is headquartered in Waltham, Massachusetts.

Source: IBM

Visit link:
IBM and Raytheon Technologies to Collaborate on Artificial Intelligence, Cryptography and Quantum Technologies - HPCwire

The AI-Enabled Telco Takes Shape: Why Telcos Are Using Artificial Intelligence To Rollout Their 5G Services – Woburn Daily Times

BOSTON, Oct. 11, 2021 /PRNewswire/ -- Over the next five years, Bain & Company expects 5G to enter the mainstream, gaining popularity through accelerated deployment by telcos, affordable handsets and other major uses for the technology. According to the firm's analysis, the adoption of 5G is expected to be faster in its first seven years, 2018 to 2025, than the adoption of 4G in the seven years following its market debut in 2009.

Bain & Company's research shows that the number of 5G connections worldwide will triple from less than 700 million today to more than 2.1 billion by 2025. This strong momentum reflects heavy operator investment in 5G infrastructure, a gradual expansion of 5G use cases and a global hunger for data connectivity, which has surged during the pandemic. Yet, despite this surge, many telcos still struggle to reap the full rewards that 5G has to offer. In Bain's new report, AI = ROI: How Artificial Intelligence Is (Already) Solving the 5G Equation, the firm explores how operators are using artificial intelligence to accrue a better return on investment (ROI) from 5G deployment.

"Artificial intelligence is already being used by leading telcos to gain a strategic advantage in 5G," said Herbert Blum, head of Bain & Company's Global Communications, Media & Entertainment practice, "But being AI-native requires more than an optimization of existing business processes or workflow overlays. It demands that the role of employees across all functions evolves in partnership with the technology as well."

Bain's new research shows how a telco that uses AI tools in its 5G rollout could develop a differentiated capability for putting the right infrastructure in the right place, with surgical precision and at dizzying scale. For instance, one major ROI challenge with 5G stems from the spectrum bands that the technology uses. 5G's higher-frequency signals do not travel as far, or penetrate buildings as well, as the lower-frequency signals used by 4G, requiring operators to deploy as many as 100 times the number of cells used by 4G for their 5G services. AI can help solve this engineering conundrum, and one of the sector's toughest challenges, by accelerating decisions from months and weeks to days and minutes, with a precision and scale that exceeds what is humanly possible.
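The 100x figure follows from simple geometry, which can be checked with a back-of-the-envelope sketch (illustrative radii, not Bain's model): the number of cells needed to cover an area scales with one over the square of the cell radius, so a cell with a tenth of the reach needs roughly a hundred times as many sites.

    # Back-of-the-envelope cell-count estimate: cells needed scales with 1 / radius^2.
    import math

    def cells_needed(area_km2, cell_radius_km):
        """Crude estimate: treat each cell footprint as a circle and ignore overlap."""
        return math.ceil(area_km2 / (math.pi * cell_radius_km ** 2))

    city_area_km2 = 100.0
    lte_radius_km = 2.0       # illustrative mid-band 4G macro cell
    mmwave_radius_km = 0.2    # illustrative 5G high-band small cell

    lte = cells_needed(city_area_km2, lte_radius_km)
    nr = cells_needed(city_area_km2, mmwave_radius_km)
    print(lte, nr, f"ratio ~{nr / lte:.0f}x")   # ratio ~100x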

"Even digitally native telcos are not immune to the complexities brought by 5G adoption, particularly if they still rely on a labor-intensive workflow," said Darryn Lowe, a leader in Bain & Company's Communications, Media and Entertainmentpractice. "In the coming years, winning telcos will be operators that use 5G, and other high-stakes business areas, as a proving ground for the deeper AI capabilities they'll need to gain to remain competitive."

Editor's Note: To arrange an interview, contact Katie Ware at katie.ware@bain.com or +1 646 562 8102.

About Bain & Company

Bain & Company is a global consultancy that helps the world's most ambitious change makers define the future.

Across 63 offices in 38 countries, we work alongside our clients as one team with a shared ambition to achieve extraordinary results, outperform the competition, and redefine industries. We complement our tailored, integrated expertise with a vibrant ecosystem of digital innovators to deliver better, faster, and more enduring outcomes. Our 10-year commitment to invest more than $1 billion in pro bono services brings our talent, expertise, and insight to organizations tackling today's urgent challenges in education, racial equity, social justice, economic development, and the environment. We earned a gold rating from EcoVadis, the leading platform for environmental, social, and ethical performance ratings for global supply chains, putting us in the top 2% of all companies. Since our founding in 1973, we have measured our success by the success of our clients, and we proudly maintain the highest level of client advocacy in the industry.

Media Contacts:

Katie Ware

Bain & Company

Tel: +1 646 562 8107

katie.ware@bain.com

View original content to download multimedia: https://www.prnewswire.com/news-releases/the-ai-enabled-telco-takes-shape-why-telcos-are-using-artificial-intelligence-to-rollout-their-5g-services-301397181.html

SOURCE Bain & Company

View post:
The AI-Enabled Telco Takes Shape: Why Telcos Are Using Artificial Intelligence To Rollout Their 5G Services - Woburn Daily Times