Artificial intelligence – The Turing test | Britannica

In 1950 Turing sidestepped the traditional debate concerning the definition of intelligence, introducing a practical test for computer intelligence that is now known simply as the Turing test. The Turing test involves three participants: a computer, a human interrogator, and a human foil. The interrogator attempts to determine, by asking questions of the other two participants, which is the computer. All communication is via keyboard and display screen. The interrogator may ask questions as penetrating and wide-ranging as he or she likes, and the computer is permitted to do everything possible to force a wrong identification. (For instance, the computer might answer "No" in response to "Are you a computer?" and might follow a request to multiply one large number by another with a long pause and an incorrect answer.) The foil must help the interrogator to make a correct identification. A number of different people play the roles of interrogator and foil, and, if a sufficient proportion of the interrogators are unable to distinguish the computer from the human being, then (according to proponents of Turing's test) the computer is considered an intelligent, thinking entity.
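A minimal sketch of the three-party setup, with purely hypothetical participant logic and a random final guess standing in for the interrogator's judgment, might look like this:

```python
import random

def interrogate(ask_a, ask_b, questions):
    """One Turing-test session: the interrogator poses the same questions to two
    unseen participants (one human, one computer) over a text-only channel,
    then guesses which one is the machine."""
    transcript = [(q, ask_a(q), ask_b(q)) for q in questions]
    # A real interrogator would reason about the answers; here we just guess.
    return random.choice(["A", "B"]), transcript

# Illustrative stand-ins for the two participants (hypothetical logic only).
human_foil = lambda q: "I am the human; ask me anything."
computer = lambda q: "No." if "computer" in q.lower() else "Let me think about that."

guess, transcript = interrogate(human_foil, computer, ["Are you a computer?"])
print(guess, transcript)
```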

In 1991 the American philanthropist Hugh Loebner started the annual Loebner Prize competition, promising a $100,000 payout to the first computer to pass the Turing test and awarding $2,000 each year to the best effort. However, no AI program has come close to passing an undiluted Turing test.

Continue reading here:
Artificial intelligence - The Turing test | Britannica

What is artificial intelligence (AI)? Definition, types, ethics …

The words artificial intelligence (AI) have been used to describe the workings of computers for decades, but the precise meaning has shifted over time. Today, AI describes efforts to teach computers to imitate a human's ability to solve problems and make connections based on insight, understanding and intuition.

Artificial intelligence usually encompasses the growing body of cutting-edge work that aims to train machines to accurately imitate, or in some cases exceed, human capabilities.

Older algorithms, when they grow commonplace, tend to be pushed out of the tent. For instance, transcribing human voices into words was once an active area of research for scientists exploring artificial intelligence. Now it is a common feature embedded in phones, cars and appliances, and it isn't described with the term as often.

Today, AI is often applied to several areas of research.

Artificial intelligence work has a wide range of practical applications. Some chores are well understood, and the algorithms for solving them are already well developed and rendered in software. They may be far from perfect, but the application is well defined. Finding the best route for a trip, for instance, is now widely available via navigation applications in cars and on smartphones.

Other areas are more philosophical. Science fiction authors have been writing about computers developing human-like attitudes and emotions for decades, and some AI researchers have been exploring this possibility. While machines are increasingly able to work autonomously, general questions of sentience, awareness or self-awareness remain open and without a definite answer.

AI researchers often speak of a hierarchy of capability and awareness. The directed tasks at the bottom are often called narrow AI or reactive AI. These algorithms can solve well-defined problems, sometimes without much direction from humans. Many of the applied AI packages fall into this category.

The notion of general AI or self-directed AI applies to software that could think like a human and initiate plans outside of a well-defined framework. There are no good examples of this level of AI at this time, although some developers like to suggest that their tools are beginning to exhibit some of this independence.

Beyond this is the idea of super AI, a system that can outperform humans in reasoning and initiative. Such systems are largely discussed hypothetically by advanced researchers and science fiction authors.

In the last decade, many ideas from the AI laboratory have found homes in commercial products. As the AI industry has emerged, many of the leading technology companies have assembled AI products through a mixture of acquisitions and internal development. These products offer a wide range of solutions, and many businesses are experimenting with using them to solve problems for themselves and their customers.

Leading companies have invested heavily in AI and developed a wide range of products aimed at both developers and end users. Their product lines are increasingly diverse as the companies experiment with different tiers of solutions to a wide range of applied problems. Some are more polished and aimed at the casual computer user. Others are aimed at programmers who will integrate the AI into their own software to enhance it. The largest companies all offer dozens of products now, and it's hard to summarize their increasingly varied options.

IBM has long been one of the leaders in AI research. Its AI-based competitor on the TV game show Jeopardy!, Watson, helped ignite the recent interest in AI when it beat human players in 2011, demonstrating how adept the software could be at handling more general questions posed in human language.

Since then, IBM has built a broad collection of applied AI algorithms under the Watson brand name that can automate decisions in a wide range of business applications, such as risk management, compliance, business workflow and DevOps. These solutions rely upon a mixture of natural language processing and machine learning to create models that can either make production decisions or watch for anomalies. In one case study, for instance, the IBM Safer Payments product prevented $115 million worth of credit card fraud.

In another example, Microsoft's AI platform offers a wide range of algorithms, both as products and as services available through Azure. The company also targets machine learning and computer vision applications and likes to highlight how its tools search for secrets inside extremely large data sets. Its Megatron-Turing Natural Language Generation model (MT-NLG), for instance, has 530 billion parameters to model the nuances of human communication. Microsoft is also working on helping business processes shift from being automated to becoming autonomous by adding more intelligence to handle decision-making. Its autonomous packages are, for instance, being applied both to the narrow problem of keeping assembly lines running smoothly and to the wider challenge of navigating drones.

Google has developed a strong collection of machine learning and computer vision algorithms that it uses for internal projects such as indexing the web, while also reselling the services through its cloud platform. It has pioneered some of the most popular open-source machine learning platforms, like TensorFlow, and has built custom hardware for speeding up the training of models on large data sets. Google's Vertex AI product, for instance, automates much of the work of turning a data set into a working model that can then be deployed. The company also offers a number of pretrained models for common tasks like optical character recognition or conversational AI that might be used for an automated customer service agent.

Amazon likewise uses a collection of AI routines internally in its retail website while marketing the same backend tools to AWS users. Products like Personalize are optimized for offering customers personalized product recommendations. Rekognition offers predeveloped machine vision algorithms for content moderation, facial recognition, and text detection and conversion. These algorithms also come with a prebuilt collection of models of well-known celebrities, a useful tool for media companies. Developers who want to create and train their own models can also turn to products like SageMaker, which automates much of the workload for business analysts and data scientists.

Facebook also uses artificial intelligence to help manage the endless stream of images and text posts. Algorithms for computer vision classify uploaded images, and text algorithms analyze the words in status updates. While Facebook maintains a strong research team, it does not actively offer standalone products for others to use. It does share a number of open-source projects, like NeuralProphet, a framework for time-series forecasting.

Additionally, Oracle is integrating some of the most popular open-source tools, like PyTorch and TensorFlow, into its data storage hierarchy to make it easier and faster to turn information stored in Oracle databases into working models. It also offers a collection of prebuilt AI tools with models for tackling common challenges like anomaly detection or natural language processing.

New AI companies tend to be focused on one particular task, where applied algorithms and a determined focus can produce something transformative. One wide-reaching current challenge, for instance, is producing self-driving cars. Startups such as Waymo, Pony.ai, Cruise Automation and Argo AI have significant funding and are building the software and sensor systems that will allow cars to navigate themselves through the streets. The algorithms involve a mixture of machine learning, computer vision and planning.

Many startups are applying similar algorithms to more limited or predictable domains like warehouses or industrial plants. Companies like Nuro, Bright Machines and Fetch are just some of the many that want to automate warehouses and industrial spaces. Fetch also wants to apply machine vision and planning algorithms to take on repetitive tasks.

A substantial number of startups are also targeting jobs that are either dangerous to humans or impossible for them to do. Against this backdrop, Hydromea is building autonomous underwater drones that can track submerged assets like oil rigs or mining tools. Another company, Solinus, makes robots for inspecting narrow pipes.

Many startups are also working in digital domains, in part because the area is a natural habitat for algorithms: the data is already in digital form. There are dozens of companies, for instance, working to simplify and automate routine tasks that are part of companies' digital workflows. This area, sometimes called robotic process automation (RPA), rarely involves physical robots because it works with digital paperwork and records. It is, however, a popular way for companies to integrate basic AI routines into their software stack. Good RPA platforms, for example, often use optical character recognition and natural language processing to make sense of uploaded forms in order to simplify the office workload.
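As a rough illustration of that form-processing step, here is a minimal Python sketch; the form text, field names and patterns are all hypothetical, and a production RPA platform would first run OCR on the uploaded document before anything like this:

```python
import re

# Hypothetical form text, e.g. the output of an OCR step on an uploaded invoice.
form_text = """
Invoice Number: INV-2043
Vendor: Acme Supplies
Total Due: $1,250.00
"""

# Simple patterns standing in for the field-extraction step an RPA platform performs.
patterns = {
    "invoice_number": r"Invoice Number:\s*(\S+)",
    "vendor": r"Vendor:\s*(.+)",
    "total_due": r"Total Due:\s*\$([\d,\.]+)",
}

fields = {name: re.search(pattern, form_text).group(1).strip()
          for name, pattern in patterns.items()}
print(fields)  # {'invoice_number': 'INV-2043', 'vendor': 'Acme Supplies', 'total_due': '1,250.00'}
```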

Many companies also depend upon open-source software projects with broad participation. Projects like TensorFlow or PyTorch are used throughout research and development organizations in universities and industrial laboratories. Some projects, like DeepDetect, a tool for deep learning and decision-making, are also spawning companies that offer mixtures of support and services.

There are also hundreds of effective and well-known open-source projects used by AI researchers. OpenCV, for instance, offers a large collection of computer vision algorithms that can be adapted and integrated with other stacks. It is used frequently in robotics, medical projects, security applications and many other tasks that rely upon understanding the world through a camera image or video.
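A minimal sketch of that kind of camera-based processing, assuming the opencv-python package is installed and using a placeholder image path, might look like this:

```python
import cv2

# Load an image from disk (the path is a placeholder) and run Canny edge detection,
# one of the classic computer-vision building blocks OpenCV provides.
image = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE)
if image is None:
    raise FileNotFoundError("example.jpg not found")

blurred = cv2.GaussianBlur(image, (5, 5), 0)   # reduce noise before edge detection
edges = cv2.Canny(blurred, threshold1=100, threshold2=200)

cv2.imwrite("edges.jpg", edges)                # save the edge map for inspection
```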

There are some areas where AI finds more success than others. Statistical classification using machine learning is often quite accurate, but it is limited by the breadth of the training data. These algorithms often fail when they are asked to make decisions in new situations or after the environment has shifted substantially from the training corpus.
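A small sketch with synthetic data (using NumPy and scikit-learn, both assumptions rather than anything named above) illustrates the point: a classifier that is accurate on data drawn from its training distribution can collapse to near chance when the environment shifts.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    """Two Gaussian classes; `shift` moves the whole distribution at test time."""
    X0 = rng.normal(loc=0.0 + shift, scale=1.0, size=(n, 2))
    X1 = rng.normal(loc=2.0 + shift, scale=1.0, size=(n, 2))
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

X_train, y_train = make_data(500)
clf = LogisticRegression().fit(X_train, y_train)

X_same, y_same = make_data(500)          # same environment as training
X_shift, y_shift = make_data(500, 3.0)   # environment has drifted

print("in-distribution accuracy:", accuracy_score(y_same, clf.predict(X_same)))
print("shifted accuracy:", accuracy_score(y_shift, clf.predict(X_shift)))  # near 0.5
```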

Much of the success or failure depends upon how much precision is demanded. AI tends to be more successful when occasional mistakes are tolerable. If the users can filter out misclassifications or incorrect responses, AI algorithms are welcomed. For instance, many photo storage sites offer to apply facial recognition algorithms to sort photos by who appears in them. The results are good rather than perfect, but users can tolerate the mistakes. The field is largely a statistical game and succeeds when judged on a percentage basis.

A number of the most successful applications don't require especially clever or elaborate algorithms but depend upon a large, well-curated dataset organized with tools that are now manageable. The problem once seemed impossible because of its scope, until large enough teams tackled it. Navigation and mapping applications like Waze use fairly simple search algorithms to find the best path, but these apps could not succeed without a large, digitized model of the street layouts.
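The search itself can be remarkably simple; the hard-won asset is the street data. A toy sketch of Dijkstra's algorithm over a hypothetical street graph makes the contrast clear:

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm over a weighted adjacency dict: the kind of simple
    search a navigation app runs once it has a digitized street network."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, weight in graph.get(node, {}).items():
            if neighbor not in seen:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return float("inf"), []

# A toy "street layout"; the real value lies in a large, accurate version of this dict.
streets = {
    "home": {"1st Ave": 4, "2nd Ave": 2},
    "1st Ave": {"office": 5},
    "2nd Ave": {"1st Ave": 1, "office": 8},
    "office": {},
}
print(shortest_path(streets, "home", "office"))  # (8, ['home', '2nd Ave', '1st Ave', 'office'])
```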

Natural language processing is also successful at making generalizations about the sentiment or basic meaning of a sentence, but it is frequently tripped up by neologisms, slang or nuance. As language changes and evolves, the algorithms can adapt, but only with pointed retraining. They also start to fail when the challenges fall outside a large training set.

Robotics and autonomous cars can be quite successful in limited areas or controlled spaces, but they run into trouble when new challenges or unexpected obstacles appear. For them, the political costs of failure can be significant, so developers are necessarily cautious about pushing beyond the well-tested envelope.

Indeed, determining whether an algorithm is a success or a failure often depends upon criteria that are politically determined. If the customers are happy enough with the responses, and if the results are predictable enough to be useful, then the algorithms succeed. As they become taken for granted, they lose the appellation of AI.

If the term is generally applied to the topics and goals that remain just out of reach, and if AI is always redefined to exclude the simple, well-understood solutions, then AI will always be moving toward the technological horizon. It may not be 100% successful at present, but when applied to specific cases it can come tantalizingly close.

Here is the original post:
What is artificial intelligence (AI)? Definition, types, ethics ...

What is Artificial Intelligence (AI) & Why is it Important? – Accenture

No artificial intelligence introduction would be complete without addressing AI ethics. AI is moving at a blistering pace and, as with any powerful technology, organizations need to build trust with the public and be accountable to their customers and employees.

At Accenture, we define responsible AI as the practice of designing, building and deploying AI in a manner that empowers employees and businesses and fairly impacts customers and society, allowing companies to engender trust and scale AI with confidence.

Trust: Every company using AI is subject to scrutiny. Ethics theater, where companies amplify their responsible use of AI through PR while partaking in unpublicized gray-area activities, is a regular issue. Unconscious bias is yet another. Responsible AI is an emerging capability aiming to build trust between organizations and both their employees and customers.

Data security: Data privacy and the unauthorized use of AI can be detrimental both reputationally and systemically. Companies must design confidentiality, transparency and security into their AI programs at the outset and make sure data is collected, used, managed and stored safely and responsibly.

Transparency and explainability: Whether building an ethics committee or revising their code of ethics, companies need to establish a governance framework to guide their investments and avoid ethical, legal and regulatory risks. As AI technologies become increasingly responsible for making decisions, businesses need to be able to see how AI systems arrive at a given outcome, taking these decisions out of the black box. A clear governance framework and ethics committee can help with the development of practices and protocols that ensure their code of ethics is properly translated into the development of AI solutions.

Control: Machines don't have minds of their own, but they do make mistakes. Organizations should have risk frameworks and contingency plans in place in the event of a problem. Be clear about who is accountable for the decisions made by AI systems, and define the management approach to help escalate problems when necessary.

More here:
What is Artificial Intelligence (AI) & Why is it Important? - Accenture

Artificial Intelligence – an overview | ScienceDirect Topics

Machine learning tools in computational pathology: types of artificial intelligence

AI is not really a new concept. The term AI was first used by John McCarthy in 1955 [4]. He subsequently organized the Dartmouth conference in 1956, which started AI as a field. The label AI means very different things to different observers. For example, some commentators recognize divisions of AI as statistical modeling (calculating regression models and histograms) versus machine learning (Bayes, random forests, support vector machines [SVMs], shallow neural networks, or artificial neural networks) versus deep learning (deep neural networks and CNNs). Others recognize categories of traditional AI versus data-driven deep learning AI. In this comparison, traditional AI starts with a human understanding of a domain and seeks to condition that knowledge into models which represent the world of that knowledge domain. When current lay commentators refer to AI, however, they are usually referring to data-driven deep learning AI, which removes the domain-knowledge-inspired feature extraction step from the pipeline and develops knowledge of a domain by observing large numbers of examples from that domain.

The design approaches of traditional AI versus data-driven deep learning AI are quite different. The architects of traditional AI learning systems focus on building generic models. They often begin with a human understanding of the world through the statement of a prior understanding of the domain (see Fig. 11.1), develop metrics representing that prior, extract data using those metrics, and ask humans to apply class labels of interest to these data. These labels are then used to train the system to learn a hyperplane which separates one class from another. Traditional AI learning systems will often be ineffective in capturing the granular details of a problem, and if those details are important, a traditional AI learning system may model poorly.
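As a rough illustration of that pipeline (using scikit-learn and entirely made-up data and features, so it is an assumption rather than anything from the text), the following sketch hand-crafts two features from raw samples, labels them, and fits an SVM to learn the separating hyperplane:

```python
import numpy as np
from sklearn.svm import SVC

def extract_features(raw_sample):
    """Hand-crafted metrics chosen from domain knowledge (purely illustrative)."""
    return [np.mean(raw_sample), np.std(raw_sample)]

rng = np.random.default_rng(1)
raw_class_a = [rng.normal(0.0, 1.0, 50) for _ in range(100)]   # e.g. "normal" signals
raw_class_b = [rng.normal(2.0, 3.0, 50) for _ in range(100)]   # e.g. "abnormal" signals

# Humans extract the features and supply the class labels of interest.
X = np.array([extract_features(s) for s in raw_class_a + raw_class_b])
y = np.array([0] * 100 + [1] * 100)

clf = SVC(kernel="linear").fit(X, y)                           # learn the separating hyperplane
print(clf.predict([extract_features(rng.normal(2.0, 3.0, 50))]))  # expect class 1
```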

A data-driven deep learning system, on the other hand, can capitalize on capturing the fine details of a system, but it may not illuminate an understanding of the big picture of the problem. Data-driven models are sometimes characterized as black-box learning systems which produce classifications or transformed representations of real-world data without an explanation of the factors that influence the decisions of the learning system. Traditional and deep learning models are compared in Table 11.1.

Data-driven deep learning AI approaches have limited human-machine interactions, constrained to a short training period on human-annotated data and human verification of the classifier output of the learning system. In contrast, in traditional AI learning systems, human experts can provide actionable insights and bring these rich understandings to the learning system in the form of a prior understanding of the domain. A prior can function as an advanced starting point for a deep learning AI system. The broad understanding of the world that humans possess, with their reasoning and inferencing abilities, efficiency in learning, and ability to transfer knowledge gained from one context to other domains, is not very well understood. Framing data-driven deep learning systems with this human understanding offers a way forward for creating partnerships between human intelligence (HI) and AI in advanced learning systems. There is a need for explainable AI (XAI), which can explain the inferences, conclusions, and decision processes of learning systems. Much work remains to be done to bridge the gap between machine intelligence and HI.

See more here:
Artificial Intelligence - an overview | ScienceDirect Topics

Meet SymbolicAI: The Powerful Framework That Combines The Strengths Of Symbolic Artificial Intelligence (AI) And Large Language Models – MarkTechPost

View original post here:
Meet SymbolicAI: The Powerful Framework That Combines The Strengths Of Symbolic Artificial Intelligence (AI) And Large Language Models - MarkTechPost

What is Cryptography? – Cryptography Explained – AWS

Asymmetric (or public-key) cryptography consists of a broad set of algorithms. These are based on mathematical problems that are relatively easy to perform in one direction but which cannot be easily reversed.

One famous example of this type of problem is the factoring problem: for carefully chosen prime numbers p and q, we can compute the product N=p*q quickly. However, given only N, it is very hard to recover p and q.
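A tiny Python illustration of the asymmetry, with small example primes chosen only for demonstration:

```python
# Multiplying p and q is a single instruction, but recovering them from N by trial
# division requires roughly sqrt(N) steps, and that work grows explosively with the
# size of the primes. Real RSA primes are hundreds of digits, far beyond this approach.
p, q = 1_000_003, 1_000_033          # small example primes
N = p * q                            # the easy direction: one multiplication

def factor_by_trial_division(n):
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 2
    return None

print(N)                             # 1000036000099
print(factor_by_trial_division(N))   # already ~500,000 iterations for 7-digit primes
```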

A common public-key cryptographic algorithm based on the factoring problem is the Rivest-Shamir-Adleman (RSA) function. When combined with an appropriate padding scheme, RSA can be used for multiple purposes, including asymmetric encryption.

An encryption scheme is called asymmetric if it uses one key (the public key) to encrypt data, and a different but mathematically related key (the private key) to decrypt data.

It must be computationally infeasible to determine the private key if the only thing one knows is the public key. Therefore, the public key can be distributed broadly while the private key is kept secret and secure. Together the keys are referred to as a key pair.

One popular asymmetric encryption scheme is RSA-OAEP, which is a combination of the RSA function with the Optimal Asymmetric Encryption Padding (OAEP) padding scheme. RSA-OAEP is typically only used to encrypt small amounts of data because it is slow and has ciphertexts which are much larger than the plaintext.
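As a hedged sketch, here is what RSA-OAEP encryption and decryption might look like using the third-party Python `cryptography` package (an assumption for illustration; this is not any particular vendor's API):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Generate a key pair: the public key can be shared, the private key stays secret.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

message = b"small secrets only: OAEP limits the plaintext size"
ciphertext = public_key.encrypt(message, oaep)      # anyone with the public key can do this
plaintext = private_key.decrypt(ciphertext, oaep)   # only the private key holder can do this

assert plaintext == message
print(len(message), "->", len(ciphertext), "bytes")  # the ciphertext is much larger than the plaintext
```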

Continued here:
What is Cryptography? - Cryptography Explained - AWS

AI vs. Machine Learning vs. Deep Learning vs. Neural Networks … – IBM

These terms are often used interchangeably, but what are the differences that make them each a unique technology?

Technology is becoming more embedded in our daily lives by the minute, and in order to keep up with the pace of consumer expectations, companies are more heavily relying on learning algorithms to make things easier. You can see its application in social media (through object recognition in photos) or in talking directly to devices (like Alexa or Siri).

These technologies are commonly associated with artificial intelligence, machine learning, deep learning, and neural networks, and while they do all play a role, these terms tend to be used interchangeably in conversation, leading to some confusion around the nuances between them. Hopefully, we can use this blog post to clarify some of the ambiguity here.

Perhaps the easiest way to think about artificial intelligence, machine learning, neural networks, and deep learning is to think of them like Russian nesting dolls. Each is essentially a component of the prior term.

That is, machine learning is a subfield of artificial intelligence. Deep learning is a subfield of machine learning, and neural networks make up the backbone of deep learning algorithms. In fact, it is the number of node layers, or depth, of neural networks that distinguishes a single neural network from a deep learning algorithm, which must have more than three.

Neural networks, and more specifically artificial neural networks (ANNs), mimic the human brain through a set of algorithms. At a basic level, a neural network comprises four main components: inputs, weights, a bias or threshold, and an output. Similar to linear regression, the algebraic formula would look something like this: ŷ = 1 if ∑(wᵢ · xᵢ) + bias ≥ 0, and 0 otherwise (where the bias is the negative of the threshold).

From there, let's apply it to a more tangible example, like whether or not you should order a pizza for dinner. This will be our predicted outcome, or y-hat. Let's assume that there are three main factors that will influence your decision:

Then, let's assume the following inputs:

For simplicity purposes, our inputs will have a binary value of 0 or 1. This technically defines the model as a perceptron, since neural networks primarily leverage sigmoid neurons, which represent values from negative infinity to positive infinity. This distinction is important since most real-world problems are nonlinear, so we need values that reduce how much influence any single input can have on the outcome. However, summarizing in this way will help you understand the underlying math at play here.

Moving on, we now need to assign some weights to determine importance. Larger weights make a single input's contribution to the output more significant compared to other inputs.

Finally, we'll also assume a threshold value of 5, which would translate to a bias value of -5.

Since we established all the relevant values for our summation, we can now plug them into this formula.

Using the activation function, which outputs 1 if the summation is greater than or equal to zero and 0 otherwise, we can now calculate the output (i.e., our decision to order pizza):

In summary:

Y-hat (our predicted outcome) = Decide to order pizza or not

Y-hat = (1*5) + (0*3) + (1*2) - 5

Y-hat = 5 + 0 + 2 - 5

Y-hat = 2, which is greater than zero.

Since Y-hat is 2, the output from the activation function will be 1, meaning that we will order pizza (I mean, who doesn't love pizza).
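The same worked example takes only a few lines of Python; a minimal sketch, using exactly the inputs, weights and threshold given above:

```python
def perceptron(inputs, weights, bias):
    """Weighted sum plus bias, passed through a step activation."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total >= 0 else 0   # 1 means "order the pizza"

inputs = [1, 0, 1]
weights = [5, 3, 2]
bias = -5                           # bias is the negative of the threshold of 5

print(perceptron(inputs, weights, bias))   # 1, i.e. order the pizza
```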

If the output of any individual node is above the specified threshold value, that node is activated, sending data to the next layer of the network. Otherwise, no data is passed along to the next layer of the network. Now, imagine the above process being repeated multiple times for a single decision as neural networks tend to have multiple hidden layers as part of deep learning algorithms. Each hidden layer has its own activation function, potentially passing information from the previous layer into the next one. Once all the outputs from the hidden layers are generated, then they are used as inputs to calculate the final output of the neural network. Again, the above example is just the most basic example of a neural network; most real-world examples are nonlinear and far more complex.

The main difference between regression and a neural network is the impact of change on a single weight. In regression, you can change a weight without affecting the other inputs in a function. However, this isn't the case with neural networks. Since the output of one layer is passed into the next layer of the network, a single change can have a cascading effect on the other neurons in the network.

See this IBM Developer article for a deeper explanation of the quantitative concepts involved in neural networks.

While it was implied within the explanation of neural networks, it's worth noting more explicitly: the "deep" in deep learning refers to the depth of layers in a neural network. A neural network that consists of more than three layers, which would be inclusive of the inputs and the output, can be considered a deep learning algorithm.

Most deep neural networks are feed-forward, meaning they flow in one direction only, from input to output. However, you can also train your model through backpropagation; that is, by moving in the opposite direction, from output to input. Backpropagation allows us to calculate and attribute the error associated with each neuron, allowing us to adjust and fit the algorithm appropriately.
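A toy sketch of backpropagation in NumPy (the XOR dataset, sigmoid activations and learning rate are all illustrative assumptions) shows the forward pass followed by the backward attribution of error:

```python
import numpy as np

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)          # XOR targets

W1 = rng.normal(size=(2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))
b2 = np.zeros((1, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # forward pass (the feed-forward direction)
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # backward pass: push the output error back through the chain rule
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # gradient-descent updates for each layer's weights and biases
    W2 -= 0.5 * (h.T @ d_out)
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * (X.T @ d_h)
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(out.round(2).ravel())   # should approach [0, 1, 1, 0]
```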

As we explain in our Learn Hub article on Deep Learning, deep learning is merely a subset of machine learning. The primary ways in which they differ are in how each algorithm learns and in how much data each type of algorithm uses. Deep learning automates much of the feature extraction piece of the process, eliminating some of the manual human intervention required. It also enables the use of large data sets, earning itself the title of "scalable machine learning" in this MIT lecture. This capability will be particularly interesting as we begin to explore the use of unstructured data more, particularly since 80-90% of an organization's data is estimated to be unstructured.

Classical, or "non-deep", machine learning is more dependent on human intervention to learn. Human experts determine the hierarchy of features to understand the differences between data inputs, usually requiring more structured data to learn. For example, let's say that I were to show you a series of images of different types of fast food, pizza, burger, or taco. The human expert on these images would determine the characteristics which distinguish each picture as the specific fast food type. For example, the bread of each food type might be a distinguishing feature across each picture. Alternatively, you might just use labels, such as pizza, burger, or taco, to streamline the learning process through supervised learning.

"Deep" machine learning can leverage labeled datasets, also known as supervised learning, to inform its algorithm, but it doesnt necessarily require a labeled dataset. It can ingest unstructured data in its raw form (e.g. text, images), and it can automatically determine the set of features which distinguish "pizza", "burger", and "taco" from one another.

For a deep dive into the differences between these approaches, check out "Supervised vs. Unsupervised Learning: What's the Difference?"

By observing patterns in the data, a deep learning model can cluster inputs appropriately. Taking the same example from earlier, we could group pictures of pizzas, burgers, and tacos into their respective categories based on the similarities or differences identified in the images. With that said, a deep learning model would require more data points to improve its accuracy, whereas a machine learning model relies on less data given the underlying data structure. Deep learning is primarily leveraged for more complex use cases, like virtual assistants or fraud detection.

Finally, artificial intelligence (AI) is the broadest term used to classify machines that mimic human intelligence. It is used to predict, automate, and optimize tasks that humans have historically done, such as speech and facial recognition, decision making, and translation.

There are three main categories of AI: artificial narrow intelligence (ANI), artificial general intelligence (AGI) and artificial super intelligence (ASI).

ANI is considered weak AI, whereas the other two types are classified as strong AI. Weak AI is defined by its ability to complete a very specific task, like winning a chess game or identifying a specific individual in a series of photos. As we move into stronger forms of AI, like AGI and ASI, the incorporation of more human behaviors becomes more prominent, such as the ability to interpret tone and emotion. Chatbots and virtual assistants, like Siri, are scratching the surface of this, but they are still examples of ANI.

Strong AI is defined by its ability compared with that of humans. Artificial general intelligence (AGI) would perform on par with a human, while artificial super intelligence (ASI), also known as superintelligence, would surpass a human's intelligence and ability. Neither form of strong AI exists yet, but ongoing research in this field continues. Since this area of AI is still rapidly evolving, the best example I can offer of what this might look like is the character Dolores on the HBO show Westworld.

While all these areas of AI can help streamline areas of your business and improve your customer experience, achieving AI goals can be challenging because you'll first need to ensure that you have the right systems in place to manage your data for the construction of learning algorithms. Data management is arguably harder than building the actual models that you'll use for your business. You'll need a place to store your data and mechanisms for cleaning it and controlling for bias before you can start building anything. Take a look at some of IBM's product offerings to help you and your business get on the right track to prepare and manage your data at scale.

View original post here:
AI vs. Machine Learning vs. Deep Learning vs. Neural Networks ... - IBM

The Latest Google Research Shows how a Machine Learning ML Model that Provides a Weak Hint can Significantly Improve the Performance of an Algorithm in Bandit-like Settings – MarkTechPost

More here:
The Latest Google Research Shows how a Machine Learning ML Model that Provides a Weak Hint can Significantly Improve the Performance of an Algorithm...