Knowledge Is Power: US Global Dominance Was Based on Information Control – The Good Men Project

One company, for five decades, allowed the US unparalleled access to information on allies and enemies alike.

May 15, 2020 by Enrique Dans

It's the stuff of spy novels: "The intelligence coup of the century" is an exposé by Pulitzer Prize winner Greg Miller in the Washington Post about how a Swiss cryptography company, used by many countries to encrypt their communications, was secretly owned by the CIA, allowing the US government to spy on many of its allies and enemies for decades.

The story explains how 120 countries, from Iran to the Vatican, along with Latin American military juntas, India, Pakistan and many more, paid millions of dollars to Crypto AG, which was run by the CIA in association with its German counterpart. The operation, code-named first Thesaurus and later Rubicon, shows the extent to which US intelligence knew what was going on in the world, and how that knowledge enabled it to bring so many countries under its control: this was a poker game in which the United States could see everybody else's hands.

The operation spanned almost five decades, the so-called Pax Americana, and has been revealed thanks to a classified file to which journalists from the Washington Post and the German public broadcaster ZDF had access. During that time, the CIA was able to monitor communications and share them with some of its allies, from the hostage crisis in Iran to Argentine transmissions during the Falklands War. Neither Russia nor China, the global counterparts of the United States, ever used the services of Crypto AG, possibly because they were suspicious of its highly secretive shareholders.

The German spy agency, the BND, abandoned the operation in 1990 for fear that it would be discovered and exposed, but the United States simply acquired its share of the stock and continued to exploit access to communications until 2018, when it finally sold its shares and liquidated the company in Liechtenstein, a state whose laws allow the secrecy of such operations. Why divest such a strategic asset? By then, cryptography had become far more accessible, a technology that could be incorporated into virtually any app or non-specialist device, and countries increasingly began to use their own alternatives to Crypto AG. That would largely explain the interest the US government took in developing the spying and monitoring systems detailed in Edward Snowden's revelations in 2013.

The world's superpower based its supremacy on the control of information, on its capacity to secretly access the communications of other countries. It is tangible proof of the importance of information systems, and of how one nation was able to deceive many others for decades and play the game of geopolitics with marked cards.

Previously published on Medium.com and republished here with permission.


On George Jackson and Julian Assange – LA Progressive

In the summer of 1971, Julian Assange was born in Queensland, Australia, amidst surging protests against the Vietnam War. Weeks later, George Jackson was murdered by correctional officers inside San Quentin, California's oldest prison. Jackson spent his entire adult life behind bars, a political prisoner of the US Empire. Today, Assange faces a similar fate, a lifetime of imprisonment for daring to expose the Empire's war crimes. Incarceration is the inevitable outcome for those who dare to challenge imperial power, for as Jackson wrote presciently in Blood in My Eye: "The ultimate expression of law is not order; it's prison."

Because Jackson was a revolutionary Marxist who advocated armed revolutionary violence to take over the state, and Assange is a cypherpunk anarchist who advocates technology-supported non-violence to curtail state power, it may seem that the two activists have little in common. But by understanding Assange and WikiLeaks through the lens of George Jackson's revolutionary philosophy, we can better appreciate how both men dedicated themselves to challenging the US Empire in the name of self-determination for all peoples of the world.


Assange and Jackson met once in spirit. When Assange was twenty, he was arrested by the Australian Federal Police for working with his hacker group, the International Subversives, to gain access to the networks of Nortel, a Canadian telecommunications company. Though Nortel was not the first system Assange and the Subversives had entered (among their previous targets was the Pentagon), it would be the last, for the authorities traced one of his partners' modems and tracked down the whole group one by one.

"I was alone and sad when [the police] came," Assange explains in his Unauthorized Autobiography. "My wife and child had just left, and I had come to the end of my rope. My computer disks were strewn around the computer table. The squat was a mess, and I sat on the sofa reading, in a vision of things to come, the prison letters of George Jackson, kept in the toughest US prisons at the pleasure of the authorities. I was broken."

Fortunately for Assange, he would not have to experience what Jackson wrote of in Soledad Brother, for he was let off with a fine and probation because he didn't steal or destroy anything in the Nortel system. But it is easy to think that, during his seven years trapped inside the Ecuadorian embassy in London, Assange would come to understand Jackson's testimony from personal experience: "Few men would enjoy total isolation. To be alone constantly is torture to normal men."

To be sure, Jackson and Assange are quite different, ideologically speaking. As a Marxist-Leninist who took inspiration from Trotsky, Engels, Mao, and Fanon, Jackson always insisted that "As revolutionaries, it is our objective to move ourselves and the people into actions that will culminate in the seizure of state power. Our real purpose is to redeem not merely ourselves but the whole nation and the whole community of nations from colonial-community economic repression." Jackson never wrote a page without reminding his readers that capitalism was "the enemy's system" and that it must be destroyed along with the fascists who run it.

Jackson's revolutionary philosophy is captured in one quote from Mao: "Every Communist must grasp the truth, 'Political power grows out of the barrel of a gun.'"

By contrast, Assange is largely inspired by the cypherpunks, a movement of hackers who in the 1990s wrestled cryptography away from the US government and distributed it over the internet, making private digital communication possible for everyone in the world. At the core of the cypherpunk philosophy, one commentator explains, was the belief that the great question of politics in the age of the internet was "whether the state would strangle individual freedom and privacy through its capacity for electronic surveillance or whether autonomous individuals would eventually undermine and even destroy the state through their deployment of electronic weapons newly at hand." "We were anarchists," Assange says of the cypherpunks, "by temperament if not by political conviction."

Unlike Jackson, with his commitment to Mao's dictum, Assange largely advocates non-violent means of revolution by way of digital technology. In the future, he explains, power would come not from the barrel of a gun but from communications, and people would know themselves not by the imprimatur of a small and powerful coterie, but by the way they could disappear into a social network with huge political potential.

Notwithstanding these ideological differences, Jackson and Assange share three very important political positions.

The first political similarity between Jackson and Assange is that they both see the United States as the Empire, and they agree that it must be opposed. For Jackson, that "Yankee brigand" was "the greatest imperialism of all time." While formal colonialism was fading from history, neocolonialism was emerging in the wake of decolonization. Not one to mistake a change in the Empire for the end of the Empire, Jackson observed that all US client states in Africa and Latin America were sections of "Amerikan" imperial infrastructure.

Likewise, identifying the US as the world's sole remaining empire, Assange argues that nation-states of the Global South must protect themselves from the National Security Agency using encryption. "Cryptography can protect not just the civil liberties and rights of individuals, but the sovereignty and independence of whole countries, solidarity between groups with common cause, and the project of global emancipation," he writes. "It can be used to fight not just the tyranny of the state over the individual but the tyranny of the empire over smaller states."

The second political similarity between Jackson and Assange is that they both advocate self-determination for peoples around the world. Jackson was principally concerned with the self-determination of the Black Colony within the US, but he was also convinced that if the people of the Black Colony did not fight the Empire from within, then those who, like the Vietnamese, were subjected to the external violence and oppression of the US government would never be free. Jackson looked to India and China as model nations that had gained independence from Western Empire, and he sought to replicate such independence for colonized populations at home and around the world.

In a similar fashion, in 2014, Assange criticized The Intercept, which had acquiesced to US government demands not to tell an entire country of people that the NSA was recording the audio of every single phone call made in the country, for "protect[ing] US assets from arrest for the mass infringement of the rights of another nation's people." "By denying an entire population the knowledge of its own victimization, this act of censorship denies each individual in that country the opportunity to seek an effective remedy, whether in international courts, or elsewhere," Assange added. "Such censorship strips a nation of its right to self-determination on a matter which affects its whole population." The country was Afghanistan.

The third political similarity between Jackson and Assange is that they both believe science is the best means of opposing Empire and securing self-determination for the peoples of the world. Jackson's preferred method was guided by the "subtle scientific principles" of urban guerrilla warfare. Against the top-heavy intelligence bureaucracies of the US government, the revolutionary vanguard would use mobility (portable weapons), infiltration (moles in the police, military, and agencies), ambush (surprise attack), and camouflage ("Nothing ever appears outwardly as it is").

Likewise, Assange has argued that everything WikiLeaks does is based on science. Not only is WikiLeaks built using cryptography, the cypherpunks' tool, it also practices a form of what Assange calls "scientific journalism." "We work with other media outlets to bring people the news, but also to prove it is true," he explains. "Scientific journalism allows you to read a news story, then to click online to see the original document it is based on. That way you can judge for yourself: Is the story true? Did the journalist report it accurately?" Just as scientists provide their data to be checked by other scientists, WikiLeaks provides its documents to be checked by the global public.


Interestingly, there is a way in which the structure of WikiLeaks parallels Jackson's principles of guerrilla warfare. Its mobility is enabled by the use of computers and hidden servers in various jurisdictions around the world. Its infiltration is embodied by the whistleblowers who leak classified documents. Its ambushes are its publications, given that the Empire never knows when they are coming or where they are coming from. And its camouflage is cryptography, preventing the Empire from finding its sources or its servers.

WikiLeaks also achieves what Jackson called the first step into revolutionary consciousness: a forceful attack upon prestige. "Prestige must be destroyed," Jackson insisted. "People must see the venerated institutions and the omnipotent administrator actually under physical attack."

As a self-styled intelligence agency of the people, WikiLeaks has certainly attacked the prestige of the US government and its two-party duopoly with its publications. The Iraq and Afghanistan War Logs embarrassed the Pentagon, which, despite being supported by the most advanced technologies and funded with the largest budgets in world history, is losing a twenty-year-long war to loosely organized tribal factions in the hinterlands of central Asia. Cablegate embarrassed the US diplomatic apparatus by exposing the petty, conniving, backstabbing actions and statements of its best and brightest. And the DNC and Podesta emails of 2016 exposed a corrupt Democratic Party leadership to those who, for some reason, sincerely believed that the party actually has their best interests in mind. Like all emancipatory journalism, WikiLeaks pulls back the curtain that shields power from the public gaze, allowing the people to see exactly how the imperial pigs make their sausage.

On September 23, 1941, George Jackson was born in the US just as the second great war for colonial markets was beginning. By the time he was 18 years old, he would be in prison on a one-year-to-life sentence for stealing $70 from a gas station. The District Attorney told him that if he pleaded guilty, he would get a reduced sentence; instead, he spent the rest of his life behind bars. But Jackson knew that he was not imprisoned for theft, for his real crime was being a Black man living inside a racist Empire.

On April 11, 2019, Julian Assange was dragged out of the Ecuadorian embassy in London by United Kingdom police. The United States government has indicted Assange on seventeen counts under the Espionage Act and one count under the Computer Fraud and Abuse Act, for which he faces up to 175 years in a supermax prison. At 48 years old, being convicted on even one-quarter of that would mean Assange would likely die in prison. But we know that Assange is not being pursued for the theft of classified documents, for his real crime is doing the kind of journalism that exposes a racist Empire.

Patrick D. Anderson, Black Agenda Report

Patrick D. Anderson is a Visiting Assistant Professor of Philosophy at Grand Valley State University. His research focuses on the Anticolonial Tradition of Black Radical Thought and the connections between technology, ethics, and imperialism. He also contributes to Mint Press News. He can be reached at anderpat@gvsu.edu.



Artificial intelligence: How to invest – USA TODAY

The first big investment wave in tech was the personal computer. Then came software, the internet, smartphones, social media and cloud computing.

The next big thing is artificial intelligence, or AI, professional stock pickers say.

AI is the science-fiction-like technology in which computers are programmed to think and perform the tasks ordinarily done by humans.

The size of the global AI market is expected to grow to $202.6 billion by 2026, up from $20.7 billion in 2018, according to Fortune Business Insights. Funding of upstart AI companies by venture capitalists remains brisk. Last year, 956 deals valued at $13.5 billion took place through the third quarter, putting AI deal activity on pace for another record year, according to PitchBook-NVCA Venture Monitor.

Photo: Artificial intelligence may one day take the wheel. (metamorworks / Getty Images)


Mike Lippert, manager of the Baron Opportunity fund, says AI touches more than half of the 60-plus stock holdings in his mutual fund. Those stocks are all about innovation, transformation and disruption, three traits AI has in abundance.

"I won't claim AI is in every stock in the portfolio, but it's all over my portfolio," Lippert tells USA TODAY.

AI is creeping into every business, boosting productivity, customer service, sales, product innovation and operating efficiency. The technology is all about crunching reams of data from around the world, making sense of it and using the information to help businesses add services and operate more efficiently.

"AI applications can be found in virtually every industry today, from marketing to health care to finance," Xiaomin Mou, IFC's senior investment officer, wrote in a report.

It's paving the road to driverless cars, making decisions such as what lane to drive in and when to stop. It's behind the software that tells salespeople which client prospect to call first. It's the brains behind virtual assistants that can interpret voice commands and play songs or provide weather updates.

"There are not a lot of companies, especially if they are growing, that are not benefiting from AI in some ways," Lippert says.

The potential danger of AI, Lippert notes, is that advances such as autonomous driving and more sophisticated machine learning will take jobs from workers.

How can investors who want to get in early on the next Microsoft, Amazon, Apple or Facebook gain exposure to AI in a way that gives them the potential to profit over the long term without too much risk?

Investors must take a long-term approach and not just bet on one or two companies they think will emerge as big winners in AI, says Nidhi Gupta, technology sector leader at Fidelity Investments.

"Diversification is really important," Gupta says, adding that investing in AI exposes investors to a wide range of outcomes.

In searching for AI winners, look for three things to unlock value, Gupta says.

1. Rich data sets that help create the algorithms and apps that make people's lives better.

2. Scaled computing power, as big data centers with big servers are needed.

3. AI engineering talent to avoid brainpower bottlenecks.

Among the AI stocks to watch:

Big AI platforms: Leading AI players include well-known, large-cap tech stocks Google parent Alphabet (GOOGL), Amazon (AMZN) and Microsoft (MSFT). These three companies have the rich data sets, computing power and AI engineering talent that Gupta says are key to success.

Chipmakers: Nvidia's (NVDA) powerful and fast computer chips have been found effective for use in machine learning, AI training purposes, data centers and cloud-based computing. Another chipmaker with AI expertise is Xilinx (XLNX), says John Freeman, an analyst at Wall Street research firm CFRA.

Companies benefiting from AI: Many businesses, such as Salesforce (CRM), stand apart from their peers and competitors by integrating AI into their business, says Baron's Lippert. Salesforce's Einstein AI, for example, analyzes all types of customer data, ranging from emails to tweets, to better predict which sales leads will convert to new business, he says. Netflix (NFLX) uses AI to recommend shows and programming viewers might like. China's online retailer Alibaba (BABA) uses AI to crunch every customer interaction to make the online sales process smoother. Electric-car maker Tesla (TSLA) uses AI to enable the software that is the driving force behind autonomous cars.

Software makers: Other companies use AI to make software smarter and help solve business problems, Lippert says. Guidewire Software (GWRE), for example, uses AI to help insurers properly price policies, analyze risk, process submitted claims faster and identify insurance fraud. Adobe (ADBE) uses AI to analyze data to quickly identify cyberthreats. Datadog (DDOG) offers AI-inspired cloud monitoring services that let clients know if their web-based apps are behaving properly.

FICO (FICO) is best known for calculating consumer credit scores. It uses AI to make sense of financial data to help clients, such as banks, determine the creditworthiness of borrowers or help detect fraud, CFRA's Freeman says.

Investors who don't want to pick their own stocks can invest in a tech-focused mutual fund or an ETF that focuses specifically on AI. Some examples include iShares Robotics & Artificial Intelligence ETF (IRBO) and Global X Robotics & Artificial Intelligence ETF (BOTZ).

"I do think AI is as significant an investing opportunity as the first era of computers," Lippert says.

Investors should expect bumps in the road investing in AI, Freeman warns.

"This is a multi-decade trend," he says. "AI is going to go through some mini-bubbles as well as some very healthy cycles."


Artificial Intelligence | Computer Science

The name artificial intelligence covers a lot of disparate problem areas, united mainly by the fact that they involve complex inputs and outputs that are difficult to compute (or even check for correctness when supplied). One of the most interesting such areas is sensor-controlled behavior, in which a machine acts in the real world using information gathered from sensors such as sonars and cameras. This is a major focus of A.I. research at Yale.

The difference between sensor-controlled behavior and what computers usually do is that the input from a sensor is ambiguous. When a computer reads a record from a database, it can be certain what the record says. There may be philosophical doubt about whether an employee's social-security number really succeeds in referring to a flesh-and-blood employee, but such doubts don't affect how programs are written. As far as the computer system is concerned, the identifying number is the employee, and it will happily, and successfully, use it to access all relevant data as long as no internal inconsistency develops.

Contrast that with a computer controlling a soccer-playing robot, whose only sensor is a camera mounted above the field. The camera tells the computer, several times per second, the pattern of illumination it is receiving, encoded as an array of numbers. (Actually, it's three arrays: one for red, one for green, and one for blue.) The vision system must extract from this large set of numbers the locations of all the robots (on its team and the opponent's) plus the ball. What it extracts is not an exact description, but always noisy, and occasionally grossly wrong. In addition, by the time the description is available it is always slightly out of date. The computer must decide quickly how to alter the behavior of the robots, send them messages to accomplish that, and then process the next image.
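To make the "three arrays" concrete, here is a minimal sketch, invented for illustration (this is not Yale's actual vision system), of estimating one object's position from such arrays, assuming the ball shows up as a patch of strong red with weak green and blue:

```python
import numpy as np

def find_ball(red: np.ndarray, green: np.ndarray, blue: np.ndarray):
    """Return the (row, col) centroid of ball-colored pixels, or None.
    A real system must also cope with noise, occlusion, and stale frames."""
    # Crude color test: strong red, weak green and blue (invented thresholds).
    mask = (red > 180) & (green < 120) & (blue < 120)
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None                  # ball occluded or out of view
    return ys.mean(), xs.mean()      # noisy input yields a noisy estimate

# Example: a synthetic 480x640 frame with a fake "ball" patch.
h, w = 480, 640
r = np.random.randint(0, 80, (h, w))
g, b = r.copy(), r.copy()
r[200:220, 300:330] = 220            # bright red patch standing in for the ball
print(find_ball(r, g, b))            # approximately (209.5, 314.5)
```

Even this toy version hints at why the extracted description is noisy: any pixels that wrongly pass or fail the color test shift the estimated centroid, and the estimate is already slightly stale by the time it is computed.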

One might wonder why we choose to work in such a perversely difficult area. There are two obvious reasons. First, one ultimate goal of A.I. research is to understand how people are possible, i.e., how it is that an intelligent system can thrive in the real world. Our vision and other senses are so good that we can sometimes overlook the noise and errors they are prone to, when in fact we are faced with problems that are similar to the robot-soccer player's, but much worse. We will never understand human intelligence until we understand how the human brain extracts information from its environment and uses it to guide behavior.

Second, vision and robotics have many practical applications. Space exploration is more cost-effective when robots are the vanguard, as demonstrated dramatically by the Mars Rover mission of 1997. Closer to home, we are already seeing commercially viable applications of the technology. For instance, TV networks can now produce three-dimensional views of an athletic event by combining several two-dimensional views, in essentially the same way animals manage stereo vision. There is now a burgeoning robotic-toy industry, and we can expect robots to appear in more complex roles in our lives. So far, the behaviors these robots can exhibit are quite primitive. Kids are satisfied with a robot that can utter a few phrases or wag its tail when hugged. But it quickly becomes clear, even to a child, that today's toys are not really aware of what is going on around them. The main problem in making them aware is to provide them with better sensors, which means better algorithms for processing the outputs from the sensors.

Research in this area at Yale is carried out by the Center for Computational Vision and Control, a joint effort of the Departments of Computer Science, Electrical Engineering, and Radiology. We will describe three of the ongoing projects in this area.


What is AI? Artificial Intelligence Tutorial for Beginners

What is AI?

A machine with the ability to perform cognitive functions such as perceiving, learning, reasoning, and solving problems is deemed to have artificial intelligence.

Artificial intelligence exists when a machine has cognitive abilities. The benchmark for AI is human-level performance in reasoning, speech, and vision.


Nowadays, AI is used in almost all industries, giving a technological edge to all companies integrating AI at scale. According to McKinsey, AI has the potential to create $600 billion of value in retail and to bring 50 percent more incremental value in banking compared with other analytics techniques. In transport and logistics, the potential revenue jump is 89 percent more.

Concretely, if an organization uses AI for its marketing team, it can automate mundane and repetitive tasks, allowing the sales representatives to focus on tasks like relationship building and lead nurturing. A company named Gong provides a conversation intelligence service: each time a sales representative makes a phone call, the machine records, transcribes, and analyzes the conversation. The VP can then use AI analytics and recommendations to formulate a winning strategy.

In a nutshell, AI provides cutting-edge technology to deal with complex data that is impossible for a human being to handle. AI automates redundant jobs, allowing a worker to focus on high-level, value-added tasks. When AI is implemented at scale, it leads to cost reduction and revenue increase.

Artificial intelligence is a buzzword today, although the term is not new. In 1956, a group of avant-garde experts from different backgrounds decided to organize a summer research project on AI. Four bright minds led the project: John McCarthy (Dartmouth College), Marvin Minsky (Harvard University), Nathaniel Rochester (IBM), and Claude Shannon (Bell Telephone Laboratories).

The primary purpose of the research project was to tackle "every aspect of learning or any other feature of intelligence that can in principle be so precisely described, that a machine can be made to simulate it."


The project led to the idea that intelligent computers can be created. A new era began, full of hope: artificial intelligence.

Artificial intelligence is commonly divided into subfields; two of the most important are machine learning and deep learning.

Machine learning is the study of algorithms that learn from examples and experience.

Machine learning is based on the idea that there exist patterns in the data that can be identified and used for future predictions.

The difference from hardcoding rules is that the machine learns on its own to find such rules, as the sketch below illustrates.
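As a minimal sketch of that idea, assuming a toy spam-filtering task with invented features and labels, the snippet below lets a model infer a decision rule from labeled examples rather than having a programmer hardcode it:

```python
from sklearn.tree import DecisionTreeClassifier

# Toy features for each message: [contains the word "free", number of "!"]
X = [[1, 3], [1, 5], [0, 0], [0, 1], [1, 0], [0, 4]]
y = [1, 1, 0, 0, 0, 1]  # 1 = spam, 0 = not spam (invented labels)

model = DecisionTreeClassifier(random_state=0).fit(X, y)

# The decision rule was inferred from the examples, not written by hand.
print(model.predict([[1, 2], [0, 0]]))
```

The point is not the particular classifier; it is that the rule comes out of the examples instead of being typed in by a programmer.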

Deep learning is a sub-field of machine learning. Deep learning does not mean the machine learns more in-depth knowledge; it means the machine uses different layers to learn from the data. The depth of the model is represented by the number of layers. For instance, Google's GoogLeNet model for image recognition counts 22 layers.

In deep learning, the learning phase is done through a neural network. A neural network is an architecture where the layers are stacked on top of each other.
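A minimal sketch of "layers stacked on top of each other," using Keras; the layer sizes and input shape here are arbitrary choices for illustration, not anything prescribed by the tutorial:

```python
from tensorflow import keras

# Three layers stacked on top of each other; the "depth" is the layer count.
model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(20,)),  # layer 1
    keras.layers.Dense(32, activation="relu"),                     # layer 2
    keras.layers.Dense(1, activation="sigmoid"),                   # output layer
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()  # prints the stack of layers and their parameter counts
```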

Most of our smartphones, daily devices, and even the internet use artificial intelligence. Very often, AI and machine learning are used interchangeably by big companies that want to announce their latest innovation. However, machine learning and AI are different in some ways.

AI (artificial intelligence) is the science of training machines to perform human tasks. The term was coined in the 1950s, when scientists began exploring how computers could solve problems on their own.

Artificial intelligence is, in effect, a computer given human-like properties. Take our brain: it works effortlessly and seamlessly to make sense of the world around us. Artificial intelligence is the concept that a computer can do the same. It can be said that AI is the broad science that mimics human aptitudes.

Machine learning is a distinct subset of AI that trains a machine how to learn. Machine learning models look for patterns in data and try to draw conclusions. In a nutshell, the machine does not need to be explicitly programmed by people: the programmers give some examples, and the computer learns what to do from those samples.

AI has broad applications.

AI is used in all industries, from marketing to supply chain, finance, and the food-processing sector. According to a McKinsey survey, financial services and high-tech communications are leading the AI field.

Neural networks have been around since the nineties, with the seminal work of Yann LeCun. However, they started to become famous around the year 2012. Three critical factors explain their surge in popularity: hardware, data, and algorithms.

Machine learning is an experimental field, meaning it needs data to test new ideas or approaches. With the boom of the internet, data became more easily accessible. Besides, giant companies like NVIDIA and AMD have developed high-performance graphics chips for the gaming market.

Hardware

In the last twenty years, the power of the CPU has exploded, allowing a user to train a small deep-learning model on any laptop. However, to train a larger deep-learning model, such as one for computer vision, you need a more powerful machine. Thanks to the investment of NVIDIA and AMD, a new generation of GPUs (graphics processing units) is available. These chips allow parallel computations, meaning the machine can split the computations across several GPUs to speed up the calculations.

For instance, with an NVIDIA TITAN X, it takes two days to train a model on the ImageNet dataset, versus weeks for a traditional CPU. Besides, big companies use clusters of GPUs such as the NVIDIA Tesla K80 to train deep-learning models, because such clusters help reduce data center costs and provide better performance.
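As a hedged sketch of why this matters, assuming PyTorch is available (the model and batch sizes below are placeholders, nothing like a real ImageNet setup): the same code runs on a CPU or a GPU, but on a GPU the underlying matrix math executes in parallel.

```python
import torch
import torch.nn as nn

# Use the GPU when one is available; otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A placeholder model, invented for illustration.
model = nn.Sequential(nn.Linear(1000, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)

batch = torch.randn(512, 1000, device=device)  # one batch of fake inputs
out = model(batch)                             # matrix math runs in parallel on a GPU
print(out.shape, "computed on", device)
```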

Data

Deep learning provides the structure of the model, and the data is the fluid that makes it come alive. Data powers artificial intelligence; without data, nothing can be done. The latest technologies have pushed the boundaries of data storage, and it is easier than ever to store large amounts of data in a data center.

The internet revolution has made data collection and distribution available to feed machine-learning algorithms. If you are familiar with Flickr, Instagram, or any other app with images, you can guess their AI potential. There are millions of pictures with tags available on these websites. Those pictures can be used to train a neural network model to recognize objects in pictures without the need to collect and label the data manually.

Artificial intelligence combined with data is the new gold. Data is a unique competitive advantage that no firm should neglect; AI provides the best answers from your data. When all firms have access to the same technologies, the one with the best data will have a competitive advantage over the others. To give an idea of the scale, the world creates about 2.2 exabytes, or 2.2 billion gigabytes, of data every day.

A company needs exceptionally diverse data sources, in substantial volume, to be able to find patterns and learn.

Algorithm

Hardware is more powerful than ever and data is easily accessible, but one thing that has made neural networks more reliable is the development of more accurate algorithms. Early neural networks were simple matrix multiplications without in-depth statistical properties. Since 2010, remarkable discoveries have been made that improve neural networks.

Artificial intelligence uses progressive learning algorithms to let the data do the programming. This means the computer can teach itself how to perform different tasks, like finding anomalies or acting as a chatbot.

Summary

Artificial intelligence and machine learning are two easily confused terms. Artificial intelligence is the science of training machines to imitate or reproduce human tasks. A scientist can use different methods to train a machine. At the beginning of the AI age, programmers wrote hard-coded programs, typing out every logical possibility the machine could face and how to respond. When a system grows complex, it becomes difficult to manage such rules. To overcome this issue, the machine can use data to learn how to handle all the situations in a given environment.

The most important requirement for a powerful AI is enough data with considerable heterogeneity. For example, a machine can learn different languages as long as it has enough words to learn from.

AI is the new cutting-edge technology. Venture capitalists are investing billions of dollars in AI startups and projects, and McKinsey estimates AI can boost almost every industry by at least a double-digit growth rate.


4 Main Types of Artificial Intelligence – G2

Although AI is undoubtedly multifaceted, there are specific types of artificial intelligence under which extended categories fall.

What are the four types of artificial intelligence?

There is a plethora of terms and definitions in AI that can make it difficult to navigate the differences between categories, subsets, and types of artificial intelligence, and no, they're not all the same. Some subsets of AI include machine learning, big data, and natural language processing (NLP); however, this article covers the four main types of artificial intelligence: reactive machines, limited memory, theory of mind, and self-awareness.

These four types of artificial intelligence comprise smaller aspects of the general realm of AI.

Reactive machines are the most basic type of AI system. This means that they cannot form memories or use past experiences to influence present decisions; they can only react to currently existing situations, hence "reactive." An existing form of a reactive machine is Deep Blue, a chess-playing supercomputer developed by IBM in the late 1980s and 1990s.

Deep Blue was created to play chess against a human competitor, with intent to defeat the competitor. It was programmed with the ability to identify a chess board and its pieces while understanding the pieces' functions. Deep Blue could make predictions about what moves it should make and the moves its opponent might make, giving it an enhanced ability to predict, select, and win. In their 1997 rematch, following a first match in 1996, Deep Blue defeated Russian chess grandmaster Garry Kasparov 3½ to 2½, becoming the first computer program to defeat a reigning world champion in a match.

Deep Blue's unique skill of accurately and successfully playing chess matches highlights its reactive abilities. In the same vein, its reactive mind also means that it has no concept of past or future; it only comprehends and acts on the presently existing world and the components within it. To simplify, reactive machines are programmed for the here and now, but not the before and after.

Reactive machines have no broader concept of the world and therefore cannot function beyond the simple tasks for which they are programmed. A characteristic of reactive machines is that no matter the time or place, these machines will always behave the way they were programmed. There is no growth with reactive machines, only stagnation in recurring actions and behaviors. The toy sketch below illustrates the idea.
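Here is a toy sketch of what "reactive" means in code, invented for illustration (Deep Blue's real search was vastly deeper, but equally stateless between positions): the agent's choice is a pure function of the current board, with no memory of previous turns.

```python
# Tic-tac-toe board as a 9-character string; " " marks an empty square.
WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def score(board: str, me: str) -> int:
    """+1 if `me` has a completed line in this position, -1 if the opponent does."""
    for a, b, c in WIN_LINES:
        if board[a] == board[b] == board[c] != " ":
            return 1 if board[a] == me else -1
    return 0

def reactive_move(board: str, me: str) -> int:
    """One-ply lookahead over the current position only; no stored history."""
    moves = [i for i, cell in enumerate(board) if cell == " "]
    return max(moves, key=lambda i: score(board[:i] + me + board[i + 1:], me))

# 'X' can win immediately at index 2; the agent finds that from the present
# board alone, exactly as it would if handed this position cold.
print(reactive_move("XX OO    ", "X"))  # -> 2
```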

Limited memory comprises machine-learning models that derive knowledge from previously learned information, stored data, or events. Unlike reactive machines, limited memory learns from the past by observing actions or data fed to it in order to build experiential knowledge.

Although limited memory builds on observational data in conjunction with pre-programmed data the machines already contain, these sample pieces of information are fleeting. An existing form of limited memory is autonomous vehicles.

Autonomous vehicles, or self-driving cars, use the principle of limited memory in that they depend on a combination of observational and pre-programmed knowledge. To observe and understand how to properly drive and function among human-dependent vehicles, self-driving cars read their environment, detect patterns or changes in external factors, and adjust as necessary.

Not only do autonomous vehicles observe their environment, they also observe the movement of other vehicles and people in their line of vision. Previously, driverless cars without limited-memory AI took as long as 100 seconds to react and make judgments on external factors. Since the introduction of limited memory, reaction time on machine-based observations has dropped sharply, demonstrating the value of limited-memory AI. A toy sketch of the idea follows.
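By contrast with a reactive machine, a limited-memory agent consults a short, expiring window of past observations. The sketch below is invented for illustration and is not any vendor's driving stack; it shows a following-distance decision that cannot be made from the current frame alone.

```python
from collections import deque

class LimitedMemoryAgent:
    def __init__(self, window: int = 5):
        # Only the last `window` observations are retained; older ones expire,
        # which is what makes the memory "limited" rather than permanent.
        self.history = deque(maxlen=window)

    def observe(self, distance_to_car_ahead: float) -> str:
        self.history.append(distance_to_car_ahead)
        if len(self.history) < 2:
            return "hold"                        # not enough history yet
        # Decide from the trend over the remembered window, not one reading.
        closing = self.history[-1] < self.history[0]
        return "brake" if closing else "hold"

agent = LimitedMemoryAgent()
for d in [30.0, 28.5, 27.0, 25.0]:               # gap shrinking over time
    action = agent.observe(d)
print(action)  # -> "brake": the decision needed more than the current frame
```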


What constitutes theory of mind is decision-making ability equal to that of a human mind, but in a machine. While some machines currently exhibit humanlike capabilities (voice assistants, for instance), none is fully capable of holding conversations to human standards. One component of human conversation is emotional capacity: sounding and behaving like a person would in the standard conventions of conversation.

This future class of machine ability would include understanding that people have thoughts and emotions that affect behavioral output and thus influence a theory-of-mind machine's own thought process. Social interaction is a key facet of human interaction, so to make theory-of-mind machines tangible, the AI systems that control these now-hypothetical machines would have to identify, understand, retain, and remember emotional output and behaviors while knowing how to respond to them.

From this, theory-of-mind machines would have to be able to take the information derived from people and adapt it into their learning centers to know how to communicate with and treat different situations. Theory of mind is a highly advanced form of proposed artificial intelligence that would require machines to thoroughly acknowledge rapid shifts in emotional and behavioral patterns in humans, and also to understand that human behavior is fluid; thus, theory-of-mind machines would have to be able to learn rapidly at a moment's notice.

Some elements of theory of mind AI currently exist or have existed in the recent past. Two notable examples are the robots Kismet and Sophia, created in 2000 and 2016, respectively.

Kismet, developed by Professor Cynthia Breazeal, was capable of recognizing human facial signals (emotions) and could replicate those emotions with its own face, which was structured with human facial features: eyes, lips, ears, eyebrows, and eyelids.

Sophia, on the other hand, is a humanoid bot created by Hanson Robotics. What distinguishes her from previous robots is her physical likeness to a human being as well as her ability to see (via image recognition) and respond to interactions with appropriate facial expressions.


These two humanlike robots are examples of movement toward full theory-of-mind AI systems materializing in the near future. While neither fully holds the ability to have a full-blown human conversation with an actual person, both robots have aspects of emotive ability akin to that of their human counterparts, one step toward seamlessly assimilating into human society.

Self-aware AI involves machines that have human-level consciousness. This form of AI is not currently in existence, but would be considered the most advanced form of artificial intelligence known to man.

Facets of self-aware AI include the ability not only to recognize and replicate humanlike actions, but also to think for itself, have desires, and understand its own feelings. Self-aware AI, in essence, is an advancement and extension of theory-of-mind AI. Where theory of mind focuses only on comprehension and replication of human practices, self-aware AI takes it a step further by implying that it can and will have self-guided thoughts and reactions.

We are presently in tier three of the four types of artificial intelligence, so believing that we could potentially reach the fourth (and final?) tier of AI doesn't seem like a far-fetched idea.

But for now, it's important to focus on perfecting all aspects of types two and three in AI. Sloppily speeding through each AI tier could be detrimental to the future of artificial intelligence for generations to come.


A Brief History of Artificial Intelligence | Live Science

The idea of inanimate objects coming to life as intelligent beings has been around for a long time. The ancient Greeks had myths about robots, and Chinese and Egyptian engineers built automatons.

The beginnings of modern AI can be traced to classical philosophers' attempts to describe human thinking as a symbolic system. But the field of AI wasn't formally founded until 1956, at a conference at Dartmouth College, in Hanover, New Hampshire, where the term "artificial intelligence" was coined.

MIT cognitive scientist Marvin Minsky and others who attended the conference were extremely optimistic about AI's future. "Within a generation [...] the problem of creating 'artificial intelligence' will substantially be solved," Minsky is quoted as saying in the book "AI: The Tumultuous Search for Artificial Intelligence" (Basic Books, 1994).

But achieving an artificially intelligent being wasn't so simple. After several reports criticizing progress in AI, government funding and interest in the field dropped off during a period from 1974 to 1980 that became known as the "AI winter." The field later revived in the 1980s when the British government started funding it again, in part to compete with efforts by the Japanese.

The field experienced another major winter from 1987 to 1993, coinciding with the collapse of the market for some of the early general-purpose computers, and reduced government funding.

But research began to pick up again after that, and in 1997, IBM's Deep Blue became the first computer to beat a chess champion when it defeated Russian grandmaster Garry Kasparov. And in 2011, the computer giant's question-answering system Watson won the quiz show "Jeopardy!" by beating reigning champions Brad Rutter and Ken Jennings.

This year, the talking computer "chatbot" Eugene Goostman captured headlines for tricking judges into thinking he was a real flesh-and-blood human during a Turing test, a competition developed by British mathematician and computer scientist Alan Turing in 1950 as a way to assess whether a machine is intelligent.

But the accomplishment has been controversial, with artificial intelligence experts saying that only a third of the judges were fooled, and pointing out that the bot was able to dodge some questions by claiming it was an adolescent who spoke English as a second language.

Many experts now believe the Turing test isn't a good measure of artificial intelligence.

"The vast majority of people in AI who've thought about the matter, for the most part, think it's a very poor test, because it only looks at external behavior," Perlis told Live Science.

In fact, some scientists now plan to develop an updated version of the test. But the field of AI has become much broader than just the pursuit of true, humanlike intelligence.


Artificial Intelligence Quotes (391 quotes)

"Why give a robot an order to obey orders; why aren't the original orders enough? Why command a robot not to do harm; wouldn't it be easier never to command it to do harm in the first place? Does the universe contain a mysterious force pulling entities toward malevolence, so that a positronic brain must be programmed to withstand it? Do intelligent beings inevitably develop an attitude problem? [...] Now that computers really have become smarter and more powerful, the anxiety has waned. Today's ubiquitous, networked computers have an unprecedented ability to do mischief should they ever go to the bad. But the only mayhem comes from unpredictable chaos or from human malice in the form of viruses. We no longer worry about electronic serial killers or subversive silicon cabals because we are beginning to appreciate that malevolence, like vision, motor coordination, and common sense, does not come free with computation but has to be programmed in. [...] Aggression, like every other part of human behavior we take for granted, is a challenging engineering problem!" (Steven Pinker, How the Mind Works)


A New Way To Think About Artificial Intelligence With This ETF – MarketWatch

Among the myriad thematic exchange-traded funds on offer, artificial intelligence products are numerous, and some are catching on with investors.

Count the ROBO Global Artificial Intelligence ETF (THNQ) as the latest member of the artificial intelligence ETF fray. THNQ, which debuted earlier this week, comes from a good gene pool, as its stablemate, the Robo Global Robotics and Automation Index ETF (ROBO), was the original and remains one of the largest robotics ETFs.

That's relevant because artificial intelligence and robotics are themes that frequently intersect with each other. Home to 72 stocks, the new THNQ follows the ROBO Global Artificial Intelligence Index.

Adding to the case for A.I., even with a new product such as THNQ, is that the technology has hundreds, if not thousands, of applications supporting its growth.

Companies developing AV technology are mainly relying on machine learning or deep learning, or both, according to IHS Markit. A major difference between machine learning and deep learning is that, while deep learning can automatically discover the feature to be used for classification in unsupervised exercises, machine learning requires these features to be labeled manually with more rigid rulesets. In contrast to machine learning, deep learning requires significant computing power and training data to deliver more accurate results.

Like its stablemate ROBO, THNQ offers wide reach, with exposure to 11 sub-groups. Those include big data, cloud computing, cognitive computing, e-commerce and other consumer angles, and factory automation, among others. Of course, semiconductors are part of the THNQ fold, too.

"The exploding use of AI is ushering in a new era of semiconductor architectures and computing platforms that can handle the accelerated processing requirements of an AI-driven world," according to ROBO Global. "To tackle the challenge, semiconductor companies are creating new, more advanced AI chip engines using a whole new range of materials, equipment, and design methodologies."

While THNQ is a new ETF, investors may do well not to focus on that, but rather on the fact that the AI boom is in its nascent stages.

"Historically, the stock market tends to under-appreciate the scale of opportunity enjoyed by leading providers of new technologies during this phase of development," notes THNQ's issuer. "This fact creates a remarkable opportunity for investors who understand the scope of the AI revolution, and who take action at a time when AI is disrupting industry as we know it and forcing us to rethink the world around us."

The new ETF charges 0.68% per year, or $68 on a $10,000 investment. That's in line with rival funds.



Artificial Intelligence in Cancer: How Is It Used in Practice? – Cancer Therapy Advisor

Artificial intelligence (AI) comprises a type of computer science that develops entities, such as software programs, that can intelligently perform tasks or make decisions.1 The development and use of AI in health care is not new; the first ideas that created the foundation of AI were documented in 1956, and automated clinical tools that were developed between the 1970s and 1990s are now in routine use. These tools, such as the automated interpretation of electrocardiograms, may seem simple, but they are considered AI.

Today, AI is being harnessed to help with big problems in medicine, such as processing and interpreting large amounts of data in research and in clinical settings, including reading imaging or results from broad genetic-testing panels.1 In oncology, AI is not yet being used broadly, but its use is being studied in several areas.

Screening and Diagnosis

There are several AI platforms approved by the US Food and Drug Administration (FDA) to assist in the evaluation of medical imaging, including for identifying suspicious lesions that may be cancer.2 Some platforms help to visualize and manipulate images from magnetic resonance imaging (MRI) or computed tomography (CT) and flag suspicious areas. For example, there are several AI platforms for evaluating mammography images that, in some cases, help to diagnose breast abnormalities. There is also an AI platform that helps to analyze lung nodules in individuals who are being screened for lung cancer.1,3

AI is also being studied in other areas of cancer screening and diagnosis. In dermatology, skin lesions are biopsied based on a dermatologist's or primary care provider's assessment of the appearance of the lesion.1 Studies are evaluating the use of AI to either supplement or replace the work of the clinician, with the ultimate goal of making the overall process more efficient.

Big Data

As technology has improved, we have gained the ability to create vast amounts of data. This highlights a challenge: individuals have limited capability to assess large chunks of data and identify meaningful patterns. AI is being developed and used to help mine these data for important findings, process and condense the information the data represent, and look for meaningful patterns.

Such tools would be useful in the research setting, as scientists look for novel targets for new anticancer therapies or to further their understanding of underlying disease processes. AI would also be useful in the clinical setting, especially now that electronic health records are in use and real-world data are being generated from patients.
