Seizing Artificial Intelligence’s Opportunities in the 2020s – AiThority

Artificial Intelligence (AI) has made major progress in recent years. But even milestones like AlphaGo or the narrow AI used by big tech only scratch the surface of the seismic changes yet to come.

Modern AI holds the potential to upend entire professions while unleashing brand-new industries in the process. Old assumptions will no longer hold, and new realities will separate those who are swallowed by the tides of change from those able to anticipate and ride the AI wave headlong into a prosperous future.

Here's how businesses and employees can both leverage AI in the 2020s.

Like many emerging technologies, AI comes with a substantial learning curve. As a recent McKinsey report highlights, AI is a slow-burn technology that requires a heavy upfront investment, with returns only ramping up well down the road.

Because of this slow burn, an AI front-runner and an AI laggard may initially appear to be on equal footing. The front-runner may even be a bit behind during early growing pains. But as the effects of AI adoption kick in, the gap between the two widens dramatically and exponentially. McKinsey's models estimate that within around 10 years, the difference in cumulative net change in cash flow between front-runners and laggards could be as high as 145 percent.

The first lesson for any business hoping to seize new AI opportunities is to start making moves to do so right now.

Contrary to popular opinion, the coming AI wave will be mostly a net positive for employees. The World Economic Forum projects that by 2022, AI and machine learning will have created over 130 million new jobs. Though impressive, these gains will not be distributed evenly.

Jobs characterized by unskilled and repetitive tasks face an uncertain future, while jobs in need of greater social and creative problem-solving will spike. According to McKinsey, the coming decade could see a 10 percent fall in the share of low digital skill jobs, with a corresponding rise in the share of jobs requiring high digital skill.

So how can employees successfully navigate the coming future of work? One place to start is the past. Over half a century ago, the first ATM was installed outside Barclays Bank in London. In 1967, the survival of bank tellers after the introduction of automated teller machines seemed impossible: ATMs caught on like wildfire, cut into tellers' hours, offered unbeatable flexibility and convenience, and should have all but wiped tellers out.

But, in fact, exactly the opposite happened. No longer having to handle simple deposits freed tellers up to engage with the more complex and social facets of the business. They started advising customers on mortgages and loans, forging relationships and winning loyalty. Most remarkable of all, in the years following the ATM's introduction, the total number of tellers employed worldwide didn't fall off a cliff. In fact, it rose higher than ever.

Though AI could potentially threaten some types of jobs, many others will see rising demand. Increased reliance on automated systems for core business functions frees up valuable employee time, enabling employees to focus on areas where they can add even more value to the company.

As employees grow increasingly aware of the changing nature of work, they are also clamoring for avenues for development, aware that they need to hold a variety of skills to remain relevant in a dynamic job market. Companies will, therefore, need to provide employees with a wide range of experiences and the opportunity to continuously enhance their skillsets, or suffer high turnover. This is already a vital issue for businesses: the cost of losing an employee equates to 90-200% of their annual salary, costing each large enterprise an estimated $400 million a year. If employees feel their role is too restrictive or that their organization is lagging, their likelihood of leaving will climb.

The only way to capture the full value of AI for business is to retain the highly skilled employees capable of wielding it. Departmental silos and rigid job descriptions will have no place in the AI future.

For employees to maximize their chances of success in the face of rapid AI advancement, they must remain flexible and continuously acquire new skills. Both businesses and employees will need to realign their priorities in accordance with new realities. Workers will have to be open to novel ideas and perspectives, while employers will need to embrace the latest technological advancements.

Fortunately, the resources and avenues for ambitious employers to pursue continued growth for their employees are blossoming. Indeed, the very AI advancements prompting the need for accelerated career development paths are also powering technologies to maximize and optimize professional enrichment.

AI is truly unlocking an exciting new future of work. Smart algorithms now enable hyper-flexible workplaces to seamlessly shuffle and schedule employee travel, remote work, and mentorship opportunities. At the cutting edge, these technologies can even let employees divide their time between multiple departments across their organization. AI can also tailor training and reskilling programs to each employee's unique goals and pace.

The rise of AI holds the promise of great change, but if properly managed, it can be a change for the better.

The Ethical Upside to Artificial Intelligence – War on the Rocks

According to some, artificial intelligence (AI) is the new electricity. Like electricity, AI will transform every major industry and open opportunities that were never before possible. Unlike electricity, however, the ethics surrounding the development and use of AI remain controversial, and this is a significant constraint on AI's full potential.

The Defense Innovation Board (DIB) released a paper in October 2019 that recommends the ethical use of AI within the Defense Department. It described five principles of ethically used AI: responsible, equitable, traceable, reliable, and governable. The paper also identifies measures the Joint Artificial Intelligence Center, the Defense Advanced Research Projects Agency (DARPA), and the U.S. military branches are taking to study the ethical, moral, and legal implications of employing AI. While the paper primarily focused on the ethics surrounding the implementation and use of AI, it also argued that AI must have the ability to detect and avoid unintended harm. This article seeks to expand on that idea by exploring AI's ability to operate within the Defense Department using an ethical framework.

Designing an ethical framework (a set of principles that guides ethical choice) for AI, while difficult, offers a significant upside for the U.S. military. It can strengthen the military's shared moral system, enhance ethical considerations, and increase the speed of decision-making in a manner that provides decision superiority over adversaries.

AI Is Limited without an Ethical Framework

Technology is increasing the complexity and speed of war. AI, the use of computers to perform tasks normally requiring human intelligence, can be a means of speeding decision-making. Yet, fearing machines' inability to weigh ethics in decisions, organizations are limiting AI's scope to "data-supported decision-making": using AI to summarize data while keeping human judgment as the central processor. For example, leaders within the automotive industry received backlash for programming self-driving cars to make ethical judgments. Some professional driving organizations have demanded that these cars be banned from the roads for at least 50 years.

This backlash, while understandable, misses the substantial upside that AI can offer to ethical decision-making. AI reflects human input and operates on human-designed algorithms that set parameters for the collection and correlation of data to facilitate machine learning. As a result, it is possible to build an ethical framework that reflects a decision-maker's values. Of course, when the data that humans supply is biased, AI can mimic its trainers by discriminating on gender and race. Biased algorithms, to be sure, are a drawback. However, bias can be mitigated by techniques such as counterfactual fairness, Google AI's recommended practices, and algorithms such as those provided by IBM's AI Fairness 360 toolkit. Moreover, AI's processing power makes it a powerful aid for successfully navigating ethical dilemmas in a military setting, where complexity and time pressure often obscure underlying ethical tensions.
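None of the named toolkits is shown here, but the reweighting idea behind many bias-mitigation techniques can be sketched in a few lines of plain Python. The groups, labels, and counts below are invented for illustration; real pipelines work with far richer features.

```python
# Toy sketch of one common bias mitigation: reweight training examples so
# each demographic group contributes equal total weight during training.
# Groups, labels, and counts are invented for illustration only.

from collections import Counter

def reweight(examples):
    """Weight each example inversely to its group's share of the data."""
    counts = Counter(group for group, _label in examples)
    total, n_groups = len(examples), len(counts)
    return [(group, label, total / (n_groups * counts[group]))
            for group, label in examples]

# Imbalanced data: group "A" has four times as many examples as group "B"
data = [("A", 1)] * 8 + [("B", 1)] * 2
weighted = reweight(data)

# After reweighting, each group carries the same total weight
print(sum(w for g, _, w in weighted if g == "A"))   # 5.0
print(sum(w for g, _, w in weighted if g == "B"))   # 5.0
```

The point of the sketch is only that biased inputs are a correctable engineering problem, not an inherent property of the technology.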

A significant obstacle to building an ethical framework for AI is a fundamental element of war: the trade-off between human lives and other military objectives. While international humanitarian law provides a codification of actions, many of which have ethical implications, it does not answer all questions related to combat. It primarily focuses on defining combatants, the treatment of combatants and non-combatants, and acceptable weapons. It does not address how many civilian deaths are acceptable to kill a high-value target, or how many friendly lives are worth sacrificing to take control of a piece of territory. While, under international law, these are examples of military judgments, each remains an ethical decision for the responsible military leader.

Building ethical frameworks into AI will help the military comply with international humanitarian law and leverage new opportunities while predicting and preventing costly mistakes in four ways.

Four Military Benefits of an Ethical AI Framework

First, designing an ethical framework for AI will force military leaders to reexamine their existing ethical frameworks. To supply the benchmark data on which AI can learn, leaders will need to define, label, and score choice options in ethical dilemmas. In doing so, they have three primary theoretical frameworks to leverage for guidance: consequentialist, deontological, and virtue ethics. Consequentialist theories focus on the consequences of the decision (e.g., expected lives saved); deontological theories are concerned with compliance with a system of rules (refusing to lie, whatever the outcome, based on personal beliefs and values); virtue theories are concerned with instilling the right amount of a virtuous quality in a person (too little courage is cowardice; too much is rashness; the right amount is courage). A commonly cited obstacle to machine ethics is the lack of agreement on which theory, or combination of theories, to follow; leaders will have to overcome this obstacle. This introspection will help them better understand their ethical framework, clarify and strengthen the military's shared moral system, and enhance human agency.
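The "define, label, and score" step can be made concrete with a small sketch. Nothing here comes from the DIB paper; the option names, per-theory scores, and weights are invented purely to show how a leader's weighting of the three theories changes the recommended ranking.

```python
# Illustrative sketch: score choice options under the three ethical theories
# and combine the scores with leader-chosen weights. All names, scores, and
# weights are hypothetical assumptions, not real doctrine.

def rank_options(options, weights):
    """Rank options by the weighted sum of their per-theory scores (0-1 scale)."""
    def combined(opt):
        return sum(weights[theory] * opt["scores"][theory] for theory in weights)
    return sorted(options, key=combined, reverse=True)

options = [
    {"name": "strike_now",
     "scores": {"consequentialist": 0.8, "deontological": 0.4, "virtue": 0.5}},
    {"name": "wait_for_confirmation",
     "scores": {"consequentialist": 0.6, "deontological": 0.9, "virtue": 0.8}},
]

# A leader who weights rule-compliance heavily ranks waiting first
weights = {"consequentialist": 0.2, "deontological": 0.5, "virtue": 0.3}
print([o["name"] for o in rank_options(options, weights)])
# ['wait_for_confirmation', 'strike_now']
```

Changing the weights (say, toward consequentialism) flips the ranking, which is exactly the introspection the paragraph above describes: leaders must decide, explicitly, which theory or blend their framework encodes.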

Second, AI can recommend decisions that consistently reflect a leader's preferred ethical decision-making process. Even in high-stakes situations, human decision-making is prone to influence from factors that have little or nothing to do with the underlying choice. Poor nutrition, fatigue, and stress (all common in warfare) can lead to biased and inconsistent decision-making. Other influences, such as acting in one's self-interest or extreme emotional responses, can also contribute to military members making unethical decisions. AI, of course, does not become fatigued or emotional. This consistency allows AI to act as a moral adviser, supplying morally relevant data that leaders can rely on as their own judgment becomes impaired. Overall, this can increase the confidence of young decision-makers, a concern the commander of U.S. Army Training and Doctrine Command raised early last year.

Third, AI can help ensure that U.S. military leaders make the right ethical choice (however they define it) in high-pressure situations. Overwhelming the adversary is central to modern warfare. Simultaneous attacks and deception operations aim to confuse decision-makers to the point where they can no longer exercise good judgment. AI can process and correlate massive amounts of data to provide not only response options, but also probabilities that a given option will result in an ethically acceptable outcome. Collecting battlefield data, processing the information, and making an ethical decision is very difficult for humans in a wartime environment. Although the task would still be extremely difficult, AI can gather and process information more efficiently than humans, and this would be valuable for the military. For example, AI that receives and correlates information from sensors across the entire operating area could estimate non-combatant casualties, the proportionality of an attack, or the reactions of observing populations.

Finally, AI can extend the time available to make ethical decisions in warfare. A central concern in modern military fire support, for example, is outranging the opponent: being able to shoot without being shot. The race to extend weapon ranges past those of adversaries continues to increase the time between launch and impact. Future warfare will see weapons launched into areas so heavily degraded and contested that the weapon loses external communication with the decision-maker who fired it. Nevertheless, as the weapon moves closer to the target, it could gain situational awareness of the target area and identify changes pertinent to the ethics of striking the target. If equipped with onboard AI operating within an ethical framework, the weapon could continuously collect, correlate, and assess the situation throughout its flight against the parameters of its programmed framework. If it identified a change in civilian presence, or other information altering the legitimacy of the target, the weapon could divert to a secondary target, locate a safe area to self-detonate, or deactivate its fuse. This concept could apply to any semi- or fully autonomous air, ground, maritime, or space asset. The U.S. military cannot afford weapon systems that deactivate or return to base every time they lose communication with a human. If an AI-enabled weapon loses the ability to receive human input, for whatever reason, an ethical framework will allow the mission to continue in a manner that aligns the weapon's actions with the intent of the operator.
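Purely as a thought experiment, the divert/self-detonate/deactivate logic described above might reduce to a decision table like the following sketch. Every field name, threshold, and rule here is a hypothetical illustration, not a description of any real weapon system.

```python
# Hypothetical sketch of an onboard ethical-framework check for a weapon
# that has lost communication with its operator. All fields, thresholds,
# and rules are invented assumptions for illustration.

from dataclasses import dataclass

@dataclass
class TargetAssessment:
    estimated_civilians: int   # civilians now observed near the target
    target_still_valid: bool   # does the target still match its profile?

def decide_action(assessment: TargetAssessment, max_civilians: int = 0) -> str:
    """Return an action consistent with the operator's programmed intent."""
    if not assessment.target_still_valid:
        return "deactivate"              # target no longer legitimate
    if assessment.estimated_civilians > max_civilians:
        return "divert_to_secondary"     # civilian-presence threshold exceeded
    return "engage"

print(decide_action(TargetAssessment(0, True)))    # engage
print(decide_action(TargetAssessment(3, True)))    # divert_to_secondary
print(decide_action(TargetAssessment(0, False)))   # deactivate
```

The sketch makes the article's point in miniature: once the operator's intent is encoded as explicit parameters, the weapon can keep honoring that intent even after the communication link is gone.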

Conclusion

Building an ethical framework for AI will help clarify and strengthen the military's shared moral system. It will allow AI to act as a moral adviser and provide feedback as the judgment of decision-makers becomes impaired. Similarly, an ethical framework for AI will maximize the utility of its processing power to help ensure ethical decisions when human cognition is overwhelmed. Lastly, providing AI an ethical framework can extend the time available to make ethical decisions. Of course, AI is only as good as the data it is provided.

AI should not replace U.S. military leaders as ethical decision-makers. Instead, if correctly designed, AI should clarify and amplify the ethical frameworks that U.S. military leaders already bring to war. It should help leaders grapple with their own moral frameworks, and help bring those frameworks to bear by processing more data than any decision-maker could, in places where no decision-maker could go.

AI may create new programming challenges for the military, but not new ethical challenges. Grappling with the ethical implications of AI will help leaders better understand moral tradeoffs inherent in combat. This will unleash the full potential of AI, and allow it to increase the speed of U.S. decision-making to a rate that outpaces its adversaries.

Ray Reeves is a captain in the U.S. Air Force and a tactical air control party officer and joint terminal attack controller (JTAC) instructor and evaluator at the 13th Air Support Operations Squadron at Fort Carson, Colorado. He has multiple combat deployments and is a doctoral student at Indiana Wesleyan University, where he studies organizational leadership. The views expressed here are his alone and do not necessarily reflect those of the U.S. government or any part thereof.

Image: U.S. Marine Corps (Photo by Lance Cpl. Nathaniel Q. Hamilton)

How Automation and Artificial Intelligence Can Boost Cybersecurity – Robotics and Automation News

Cybercriminals are always evolving their efforts and coming up with more advanced ways to target their victims. And while there are many tools available to stop them, there is a lot of space for improvement. Especially if you take automation into account.

Machine learning and artificial intelligence are playing a significant role in cybersecurity. Automation tools can prevent, detect, and deal with tons of cyber threats far more efficiently and quickly than humans, and that role will continue to expand down the road. To that end, here's a quick look at the significant differences AI/ML technologies can make to corporate cybersecurity approaches.

Mitigating the risks posed by omnipresent technology

Technology has permeated every facet of our personal and working lives. Above all else, it has expanded companies' attack surface, and that has become a massive problem in recent years: businesses now have to account for a multitude of applications and devices.

The problem is, there aren't enough skilled security professionals to contend with all those risks, which often results in gaping vulnerabilities.

To add to that problem, many companies cannot afford the cybersecurity teams needed to secure their applications and systems. Startups, in particular, are at risk: they lack both established security operations and the funds to build them.

Companies need to automate at least some of the processes necessary to protect their systems and devices from outside attacks. Otherwise, they stay vulnerable.

Criminals are using every tool at their disposal to secure as many points of entry as possible. Not even firewalls can protect a system like they used to, as criminals keep inventing new ways to get around them.

There's no way to contend with this manually, because attackers are using automated methods to test the defenses of every connected device.

Better threat detection and management

The scale of attacks and the vast amounts of data available to analyze make keeping up with the latest threats a challenging task. Automated machine learning applications are much better suited to constant vigilance and systematic threat identification.

These systems learn all the time: they evolve alongside growing threat vectors to spot unusual behaviors, which allows them to identify and process sophisticated attack methods.

Yet most companies are not making use of these game-changing technologies. They continue to rely on outdated methods, and conventional tools and applications cannot keep up with ill-intentioned actors who keep leveraging more sophisticated capabilities in their attacks.

Cases such as the Outlaw cryptojacking attacks prove that hackers know how to use new technology to avoid detection, and they are quite successful in their endeavors. The only way to cope with such an onslaught of threats is through machine learning and artificial intelligence engines, which monitor systems and raise alerts about any suspicious or unusual behaviour.
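The core idea of flagging "unusual behaviour" can be illustrated with a deliberately simple baseline model. Real engines use far richer features and learned models; the traffic numbers and the three-sigma threshold below are assumptions for illustration only.

```python
# Minimal sketch of baseline-based anomaly detection: learn what "normal"
# traffic looks like, then flag values that deviate sharply from it.
# The data and the 3-sigma threshold are illustrative assumptions.

from statistics import mean, stdev

def fit_baseline(samples):
    """Learn a simple baseline (mean, stdev) from normal traffic volumes."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mu, sigma = baseline
    return abs(value - mu) > threshold * sigma

# Requests per minute observed during normal operation
normal_traffic = [98, 102, 101, 99, 100, 103, 97, 100, 101, 99]
baseline = fit_baseline(normal_traffic)

print(is_anomalous(100, baseline))   # False: within the normal range
print(is_anomalous(500, baseline))   # True: likely an attack spike
```

Production systems replace the z-score with trained models over many signals at once, but the principle is the same: the machine watches continuously and only escalates genuine deviations to humans.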

Automating mundane cybersecurity processes

Many tools exist to cover the security needs of businesses. For example, most companies ask their employees to use virtual private networks (VPNs). A VPN is a service that encrypts users' connections to the internet (https://nordvpn.com/what-is-a-vpn/), ensuring outsiders can't intercept any data a user transfers over the network.

But while that covers data in transit, there's still a risk that employees will fall for phishing emails or install ransomware by accident.

Security researchers cannot keep up with the overload of threat alert notifications, many of which are false positives. But you can't ignore them: criminals know how to hide in all that noise. This makes threat identification a monumental task for security operations teams.

Thus, providing information security specialists with automated tools is essential. It lets them focus their skills where they're most needed, since mundane everyday tasks take up so much of technicians' time.

Automation tools are capable of handling those tasks, freeing time for more valuable work that needs a human touch, such as threat hunting and attribution.

Considerable increase in risk

The world has grown to incorporate technology into almost every facet of daily life, and with that comes a considerable increase in risk. Machine learning and artificial intelligence have therefore become an indispensable part of cybersecurity.

They fulfil a vital role that human labor simply can't. Automation is the answer: it can help cybersecurity specialists tackle the sheer number of cyberthreats in corporate and personal applications.

M’sian courts to go digital and adopt artificial intelligence initiatives – The Star Online

KUALA LUMPUR (Bernama): The country's courts are not only going digital but are also adopting artificial intelligence (AI) initiatives to ensure easy access to justice.

Minister in the Prime Minister's Department Datuk Liew Vui Keong said the government was already pursuing an array of AI initiatives in digitalising the courts.

These include the introduction of e-bail and e-review, which reduce the need for lawyers and litigants to physically appear in court, saving time and costs for all parties, as well as digital voice-to-text court recording transcripts and digitally secured evidence.

"Through 2020, the government will continue to pursue and introduce additional AI initiatives to digitalise the courts and secure easy access to justice for all," he said in a statement here on Monday (Jan 20).

"The legal profession must embrace digitalisation, in which the Sabah and Sarawak Judiciary have (sic) led an exemplary path for legal practitioners across Malaysia to follow.

"I am delighted to hear the judiciary's support of the government's efforts to digitalise the courts through the use of AI and information technology (IT). Support from the nation's top judges was crucial. I therefore wish to record the government's sincerest appreciation for the tremendous support of the Chief Justice and the Chief Judge of Sabah and Sarawak for these initiatives," he said.

These digital initiatives would not only facilitate easy access to justice by removing the necessity for parties' physical presence in court, but would also be environmentally friendly, reducing paper usage and the carbon footprint of travel, Liew said. - Bernama

Creative storytelling with subtitles: Is artificial intelligence up for the task – ETBrandEquity.com

By Jyothi Nayak

To err is human, but just how true is this in the case of subtitling and captioning?

Recently a friend of mine asked me why we don't use automatic subtitling tools. Little did she know how excited I was when I heard about these tools a couple of years ago! After all, wouldn't it be wonderful to get machines to do all the hard work while we humans multi-task?

Let's take a step back and use a real-life scenario to analyze this. Platforms like YouTube have long offered automatic captions for videos, but they are notorious for delivering sentences studded with nonsensical or occasionally obscene phrases. For hearing-impaired viewers, however, this is no laughing matter, as they often depend on subtitles to decipher the spoken words in a video. To address this issue, social media campaigns like #NoMoreCRAPtions have emerged, focused on ditching automatic captions.

This article is about how subtitling is becoming increasingly relevant today, why it's imperative, and what role technology can play in the evolving industry landscape. A recent study in the UK showed that more than 63% of Gen Z, who are digital natives, end up using subtitles: they not only help them watch content on the move, but also aid comprehension.

Recent experiments in India and a few other developing countries have shown that Same Language Subtitles (SLS) improve reading literacy. SLS causes automatic, inescapable reading engagement even among weak readers, and over time has a bigger impact than conventional print media. Even developed countries plan to make SLS a default option for children's content, to help young viewers develop reading skills in their early years.

As the boom in the subtitling industry fuels new business opportunities, large volumes and tight deadlines are making content creators look toward AI-based solutions. As in most other industries, AI has penetrated the translation and localization space and unlocked exciting possibilities. Today, several AI-based solutions not only understand spoken words and convert them to text, but also translate them into a target language.

But the million-dollar question is: are these machine-generated results as good as human translation? No, not yet! While AI can assist in the overall process of subtitling, translation by humans is far more impactful for local audiences, because it is creatively generated by native speakers of the language.

AI tools, in my view, still have several limitations. When working on genres like mythology, or on content with considerable background noise, heavy accents, or high-context material (like sarcasm or humor), AI tools struggle and their results are hard to work with. Within text translation as well, complex sentences can produce gibberish. For example, when translating from Hindi to English, an experienced translator would render a reference to the romantic Indian duo Laila-Majnu as Romeo and Juliet, something a machine would manage only after considerable learning. Creativity plays an intrinsic part in translating content and generating impactful subtitles.

When it comes to subtitling, context is as important as content. While words like "mom" and "mother" can be used interchangeably, "mother" is more appropriate in the context of a religious mention, which a machine will not decipher automatically. Similarly, many common idioms and culture-sensitive languages (Arabic, for instance), when translated literally, yield hilarious and sometimes offensive results! AI tools tend to struggle with unclear contexts, new slang, and specialized subjects that require a lot of research.

So, does this mean the world of subtitling will remain human-driven even with the advent of AI? It certainly will not, as machines learn the nuances and grow in intelligence. There are many areas where automation can reduce manual effort and increase speed right away: time-code shifting, Quality Check (QC) workflows, and automatic checks for compliance issues (usage of restricted words, etc.) that can creep in through human error. The good news is that there's no need for an all-or-nothing approach. You can choose a hybrid workflow where machine transcription happens first, and QC is performed on this output by native translators, who correct all mistakes (and don't just laugh at them!). These corrections should ideally be fed back to the machine, so that it continues learning and eventually generates better-quality subtitles. It also helps to use advanced, end-to-end AI tools that not only create transcripts, but also sync them to the prescribed number of words per second or minute, as well as to the shot boundary. Such tools deliver far more accurate subtitles.
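The hybrid workflow (machine first, human QC second, corrections fed back) can be sketched roughly as follows. The subtitle lines, the corrections dictionary, and the correction log are invented examples standing in for a real transcription engine and review tool.

```python
# Sketch of a hybrid subtitling workflow: machine output is reviewed by a
# human QC pass, and every correction is logged so it can be fed back to
# the engine for retraining. All data here is invented for illustration.

machine_subtitles = [
    "their going to the market",       # machine output with an error
    "romeo and juliet of the east",    # machine output left unchanged
]

human_corrections = {
    "their going to the market": "They're going to the market",
}

correction_log = []   # pairs (machine_line, corrected_line) for retraining

def qc_pass(lines, corrections):
    """Apply human corrections and record them for the learning loop."""
    reviewed = []
    for line in lines:
        fixed = corrections.get(line, line)
        if fixed != line:
            correction_log.append((line, fixed))
        reviewed.append(fixed)
    return reviewed

final_subtitles = qc_pass(machine_subtitles, human_corrections)
print(final_subtitles)
print(correction_log)
```

In a real pipeline the corrections dictionary is a human reviewer working in a subtitle editor, and the correction log becomes training data, but the loop has the same shape: machine draft, human fix, machine learns.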

Another factor to consider: since most off-the-shelf subtitling tools have several limitations, vendors who deal in large volumes can build their own machine learning tools, trained on past data to fit a particular genre or style of subtitling. This can help generate high-quality results suited to specific needs. Alternatively, you could use specialized AI tools that go a step further, combining the output of multiple best-in-class engines and smartly extracting the best from each to deliver better results.

As you can see, there is a lot of potential for automating the subtitling process; it's just not completely foolproof yet. For now, an optimal mix of human talent and cutting-edge technology seems to be the best answer: AI-led automation augmented with the creativity of native speakers is the way to meet the speed and volume the subtitling industry demands today. Getting this blend right is the key to delivering multi-platform, multi-language content to worldwide audiences and increasing global market share.

The author is SVP, Global Localization, Prime Focus Technologies. Views expressed are personal.

The Role of Data Processing Organizations in Artificial Intelligence – Toolbox

As the use of personal computers (PCs) became more and more widespread, and now with the proliferation of cloud and smart devices, numerous turf battles have broken out. These involve such issues as:

1. Which part of the organization controls the selection and acquisition of these devices?
2. What procedures must be followed to control access to and modification of corporate databases?
3. How should these devices and their software be networked together?
4. Who is responsible for developing or acquiring new software?

Data processing and management information system (MIS) groups have found it necessary to modify some of their established procedures to deal with the challenges of PC technology. The intent of this modification is to support distributed processing on a network of small computers while retaining overall responsibility for ensuring that the organization's corporate resources are used most effectively. As AI technology is more widely used, how will the role of these data processing and MIS groups change? Will AI become just another part of data processing?

Numerous trade-offs are possible in assigning responsibility for developing or utilizing AI systems. Should the existing MIS group supervise the development of these systems, or should a new in-house AI group take over that responsibility? Factors to consider include:

1. The level of interaction needed between these systems and existing corporate databases
2. Familiarity with the organization's needs, procedures, and existing data-processing systems
3. The cost of equipping, training, and motivating a specialized AI staff
4. Built-in NIH ("not invented here") biases ("That's not our idea; just do it the same way we always have.")
5. Attitudes toward working closely with "nonprofessional" or "hands-on" experts such as those on the factory floor or in customer service
6. The requirement for new specialties
7. Distinctions between developing systems intended to improve internal operations and developing new products or services
8. The amount of EDP resources required to develop or run an AI application

The IT groups certainly have extensive experience in interfacing with many elements of the organization. However, they have not always been successful in fully understanding the needs of users or the methods used to accomplish specific tasks. Although they may be familiar with computer technology, some MIS personnel are not suited to the level of innovative development required by the current state of the art in artificial intelligence. Conversely, previous experience may have made them much more realistic about scheduling and cost requirements. Finally, motivations and priorities may favour the establishment of a specialized AI group.

One person spent several hours with the members of a consulting group that specialized in the design of large database systems. The purpose of the meeting was to explore the commonalities and differences between AI and "conventional" database-system practice. There were two interesting conclusions: first, that the AI community was just beginning to learn what the data-processing community had learned long ago; and second, that the major difference was one of focus. The designer of a database system must ruthlessly focus on commonality, suppressing individual differences. The designer of an AI system, on the other hand, places the greatest emphasis on the individual and his or her needs.

As distributed computing power becomes more ubiquitous, it may be possible to embed individual support systems within the common whole. But there is also an opportunity to build distributed support systems that span the globe far more easily and can concentrate their support wherever and whenever the need occurs.

View original post here:
The Role of Data Processing Organizations in Artificial Intelligence - Toolbox

Clearview AI: The company that might end privacy as we know it – ETtech.com

You take a picture of a person, upload it and get to see public photos of that person along with links to where those photos appeared. By Kashmir Hill

Until recently, Hoan Ton-That's greatest hit was an app that let people put Donald Trump's distinctive yellow hair on their own photos.

Then Ton-That did something momentous: He invented a tool that could end your ability to walk down the street anonymously and provided it to hundreds of law enforcement agencies.

His tiny company, Clearview AI, devised a groundbreaking facial recognition app. You take a picture of a person, upload it and get to see public photos of that person along with links to where those photos appeared.

Federal and state law enforcement officers said that while they had only limited knowledge of how Clearview works and who is behind it, they had used its app to help solve shoplifting, identity theft, credit card fraud, murder and child sexual exploitation cases.

Until now, technology that readily identifies everyone based on their faces has been taboo because of its radical erosion of privacy.

But without public scrutiny, more than 600 law enforcement agencies have started using Clearview in the past year, according to the company, which declined to provide a list. The computer code underlying its app, analyzed by The New York Times, includes programming language to pair it with augmented reality glasses; users would potentially be able to identify every person they saw.

Clearview has also licensed the app to at least a handful of companies for security purposes.

"The weaponization possibilities of this are endless," said Eric Goldman, co-director of the High Tech Law Institute at Santa Clara University. "Imagine a rogue law enforcement officer who wants to stalk potential romantic partners, or a foreign government using this to dig up secrets about people to blackmail them or throw them in jail."

Clearview has shrouded itself in secrecy, avoiding debate about its boundary-pushing technology. When I began looking into the company in November, its website was a bare page showing a nonexistent Manhattan address as its place of business. The company's one employee listed on LinkedIn, a sales manager named John Good, turned out to be Ton-That, using a fake name. For a month, people affiliated with the company would not return my emails or phone calls.

While the company was dodging me, it was also monitoring me. At my request, a number of police officers had run my photo through the Clearview app. They soon received phone calls from company representatives asking if they were talking to the media, a sign that Clearview has the ability and, in this case, the appetite to monitor whom law enforcement is searching for.

Facial recognition technology has always been controversial. Clearview's app carries extra risks because law enforcement agencies are uploading sensitive photos to the servers of a company whose ability to protect its data is untested.

The company eventually started answering my questions, saying that its earlier silence was typical of an early-stage startup in stealth mode. Ton-That acknowledged designing a prototype for use with augmented reality glasses but said the company had no plans to release it. And he said my photo had rung alarm bells because the app flags possible anomalous search behavior in order to prevent users from conducting what it deemed inappropriate searches.

In addition to Ton-That, Clearview was founded by Richard Schwartz, who was an aide to Rudy Giuliani when he was mayor of New York, and backed financially by Peter Thiel, a venture capitalist behind Facebook and Palantir.

Another early investor is a small firm called Kirenaga Partners. Its founder, David Scalzo, dismissed concerns about Clearview making the internet searchable by face, saying it's a valuable crime-solving tool.

"I've come to the conclusion that because information constantly increases, there's never going to be privacy," Scalzo said. "Laws have to determine what's legal, but you can't ban technology."

Addicted to AI

Ton-That, 31, grew up a long way from Silicon Valley, in his native Australia. In 2007, he dropped out of college and moved to San Francisco. The iPhone had just arrived, and his goal was to get in early on what he expected would be a vibrant market for social media apps.

In 2015, he spun up Trump Hair, which added Trump's distinctive coif to people in a photo, and a photo-sharing program. Both fizzled.

Ton-That moved to New York in 2016. He started reading academic papers on artificial intelligence, image recognition and machine learning.

Schwartz and Ton-That met in 2016 at a book event at the Manhattan Institute, a conservative think tank. Schwartz, now 61, had amassed an impressive Rolodex working for Giuliani in the 1990s. The two soon decided to go into the facial recognition business together: Ton-That would build the app, and Schwartz would use his contacts to drum up commercial interest.

Police departments have had access to facial recognition tools for almost 20 years, but they have historically been limited to searching government-provided images, such as mug shots and driver's license photos.

Ton-That wanted to go way beyond that. He began in 2016 by recruiting a couple of engineers. One helped design a program that can automatically collect images of people's faces from across the internet, such as employment sites and social networks. Representatives of those companies said their policies prohibit such scraping.

Another engineer was hired to perfect a facial recognition algorithm derived from academic papers. The result: a system that uses what Ton-That described as a state-of-the-art neural net to convert all the images into mathematical formulas, or vectors, based on facial geometry, such as how far apart a person's eyes are.

Clearview created a vast directory that clustered all the photos with similar vectors into "neighborhoods." When a user uploads a photo of a face into Clearview's system, it converts the face into a vector and then shows all the scraped photos stored in that vector's neighborhood, along with links to the sites from which those images came.
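The article describes the pipeline only at a high level: faces become vectors, similar vectors are grouped into "neighborhoods," and a query returns the photos and source links stored near its own vector. A rough sketch of that idea follows; Clearview's actual implementation is not public, so every name, the bucketing scheme and the similarity measure here are assumptions for illustration only.

```python
import math

def cosine(a, b):
    # Cosine similarity between two face vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def neighborhood_key(vec):
    # Coarse bucket: the sign pattern of the vector's components.
    # Nearby vectors tend to share a bucket, so a search can skip
    # most of the index instead of comparing against every photo.
    return tuple(x >= 0 for x in vec)

class FaceIndex:
    def __init__(self):
        self.buckets = {}  # neighborhood key -> list of (vector, source_url)

    def add(self, vec, url):
        self.buckets.setdefault(neighborhood_key(vec), []).append((vec, url))

    def search(self, query, top_k=3):
        # Look only inside the query's neighborhood, then rank by similarity.
        candidates = self.buckets.get(neighborhood_key(query), [])
        ranked = sorted(candidates, key=lambda p: cosine(query, p[0]), reverse=True)
        return [url for _, url in ranked[:top_k]]
```

A real system would use a learned embedding model to produce the vectors and an approximate-nearest-neighbor index rather than this toy sign-pattern bucketing, but the query flow (embed, find the neighborhood, rank, return source links) is the same shape the article describes.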

Clearview remains tiny, having raised $7 million from investors, according to Pitchbook, a website that tracks investments in startups. The company declined to confirm the amount.

Going Viral With Law Enforcement

In February, the Indiana State Police started experimenting with Clearview. They solved a case within 20 minutes of using the app. Two men had gotten into a fight in a park, and it ended when one shot the other in the stomach. A bystander recorded the crime on a phone, so police had a still of the gunman's face to run through Clearview's app.

They immediately got a match: The man appeared in a video that someone had posted on social media, and his name was included in a caption on the video. He did not have a driver's license and hadn't been arrested as an adult, so he wasn't in government databases, said Chuck Cohen, an Indiana State Police captain at the time.

The man was arrested and charged; Cohen said he probably wouldn't have been identified without the ability to search social media for his face. The Indiana State Police became Clearview's first paying customer, according to the company. (Police declined to comment beyond saying that they tested Clearview's app.)

The company's most effective sales technique was offering 30-day free trials to officers. Ton-That finally had his viral hit.

Federal law enforcement agencies, including the FBI and the Department of Homeland Security, are trying it, as are Canadian law enforcement authorities, according to the company and government officials.

Ton-That said the tool does not always work. Most of the photos in Clearview's database are taken at eye level. Much of the material that police upload is from surveillance cameras mounted on ceilings or high on walls.

Despite that, the company said, its tool finds matches up to 75% of the time.

One reason Clearview is catching on is that its service is unique. That's because Facebook and other social media sites prohibit people from scraping users' images; Clearview is violating those sites' terms of service.

Some law enforcement officials said they didn't realize the photos they uploaded were being sent to and stored on Clearview's servers. Clearview tries to preempt concerns with an FAQ document, given to would-be clients, that says its customer-support employees won't look at the photos that police upload.

Clearview also hired Paul Clement, a U.S. solicitor general under President George W. Bush, to assuage concerns about the apps legality.

In an August memo that Clearview provided to potential customers, including the Atlanta Police Department and the Pinellas County Sheriff's Office in Florida, Clement said law enforcement agencies do not violate the federal Constitution or relevant existing state biometric and privacy laws when using Clearview for its intended purpose.

Clement, now a partner at Kirkland & Ellis, wrote that authorities don't have to tell defendants that they were identified via Clearview as long as it isn't the sole basis for getting a warrant to arrest them. Clement did not respond to multiple requests for comment.

The memo appeared to be effective; the Atlanta police and Pinellas County Sheriff's Office soon started using Clearview.

Woodrow Hartzog, a professor of law and computer science at Northeastern University in Boston, sees Clearview as the latest proof that facial recognition should be banned in the United States.

"We've relied on industry efforts to self-police and not embrace such a risky technology, but now those dams are breaking because there is so much money on the table," Hartzog said. "I don't see a future where we harness the benefits of face recognition technology without the crippling abuse of the surveillance that comes with it. The only way to stop it is to ban it."


Artificial Intelligence – Scratch Wiki

Artificial Intelligence, commonly abbreviated as AI, has a popular connotation within Scratch relating to a computerized mind that consists entirely of programming code.

Its usage in Scratch, albeit somewhat misleading, is most common in projects in which a user can play a game against the computer.

Most projects that use AI rely on special techniques, such as using variables to store different values. Those values may be previous locations, user input, and so on. They help to calculate different actions that allow the computer to pose a good challenge to the player and succeed at its task.

A practical and optimal AI system will use recursion[citation needed] to try to adapt to the circumstances itself.

For a game, a recursive function that returns the best move, given a board and which player is to move, can be written with the following logic:

See the article on game trees for more on recursive functions and their use in constructing AI.
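The recursive best-move search referenced above is the classic minimax idea. A minimal sketch for tic-tac-toe follows; it is written in Python rather than Scratch blocks (Scratch is block-based), and all names are chosen here for illustration.

```python
def winner(board):
    # board: 9-element list of 'X', 'O', or None, row-major 3x3.
    lines = [(0,1,2), (3,4,5), (6,7,8),   # rows
             (0,3,6), (1,4,7), (2,5,8),   # columns
             (0,4,8), (2,4,6)]            # diagonals
    for a, b, c in lines:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def best_move(board, player):
    # Recursively score every empty cell and return (score, move) for `player`.
    # Score is +1 for a forced win, -1 for a forced loss, 0 for a draw.
    w = winner(board)
    if w is not None:
        return (1 if w == player else -1), None
    empties = [i for i, cell in enumerate(board) if cell is None]
    if not empties:
        return 0, None  # board full: draw
    opponent = 'O' if player == 'X' else 'X'
    best = (-2, None)
    for move in empties:
        board[move] = player
        opp_score, _ = best_move(board, opponent)
        board[move] = None  # undo the trial move
        score = -opp_score  # what is good for the opponent is bad for us
        if score > best[0]:
            best = (score, move)
    return best
```

For example, with X on squares 0 and 1 and O on squares 3 and 4, `best_move(board, 'X')` returns square 2 with score +1, completing the top row. The long chains of `if` blocks that the wiki says crash Scratch are exactly this tree of trial moves written out by hand.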

There is also another class of AI that depends upon only one factor. Such AIs are a lot simpler and, in many cases, effective; however, they do not fulfill the true requirements of an AI. For example, in the project Agent White, the AI moves along a given path and only tries to shoot at the player. Only the user's position matters to this AI; it rotates so that its gun turns towards the user. In the project Broomsticks, the AI only changes its position with respect to the ball.

AI that can take external stimuli and decide upon the best way to use them is called a learning AI, or an AI that uses machine learning. A learning AI is able to learn from its present and past experiences. One popular way of making a learning AI is with a neural network. Another is by keeping a list of known inputs and a list of replies for each one (which can be done in Scratch, although with some difficulty, as 2D arrays are not easily implemented).
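The paired lists of inputs and replies described above can be sketched as follows; this is Python rather than Scratch blocks, and a dictionary stands in for the 2D array the wiki says is hard to build in Scratch. The class and method names are invented for this example.

```python
import random

class ReplyBot:
    # A toy "learning" chatbot: it keeps a list of known inputs and,
    # for each one, the list of replies it has been taught.
    def __init__(self):
        self.replies = {}  # input text -> list of taught replies

    def teach(self, prompt, reply):
        # "Learning" here is just recording a new prompt/reply pair.
        self.replies.setdefault(prompt.lower(), []).append(reply)

    def respond(self, prompt):
        options = self.replies.get(prompt.lower())
        if not options:
            return "I don't know that one yet."
        return random.choice(options)
```

Teaching the bot `("hello", "hi there")` and then asking it "Hello" returns "hi there"; an unknown prompt falls through to the default reply. In Scratch the same effect is usually achieved with two parallel lists kept in lockstep, one of prompts and one of replies.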

Another type of AI is used in a remix of Agent White found here. In this remix, the AI picks a random path and follows it. It uses math to pick future x and y positions based on the current position of the character you control, then slowly moves toward that new position until it either reaches its destination or hits a wall. In this case, rather than artificial intelligence, it is more like artificial randomness, because it never uses intelligence beyond detecting when it runs into walls.

One of the biggest limitations AI has been facing is speed. Scratch is a rather slow programming language; hence most AIs on Scratch are slow because their scripts are too long.

Complexity has also been a major problem, as AI programs are very large and complicated; the scripts may become too long and laggy to build without crashing Scratch. For example, a simple game of Tic-Tac-Toe with AI will have a script running to multiple pages due to the many conditions in if blocks, and sometimes an attempt will be made to speed it up by making it Single Frame.

The complicated scripts also make remixing a problem. Because of all this, most AI projects see few improvements, causing the AI to remain glitchy. AIs may make mistakes that users could easily avoid, and mistakes like these are humorously known as artificial stupidity.

These projects use AI in the truest practical sense:


High-Technology Discovered in Classical Mythology Reveals …

For the last 70 years science fiction writers and Hollywood movie directors have explored the place of robots and artificial intelligence (AI) in the future of humankind. But automated technologies with greater than human intelligence were first conceptualized in the imaginations of people in ancient societies and were woven into their folkloric systems, according to a highly-original new book.

Titled Gods and Robots: Myths, Machines, and Ancient Dreams of Technology, the book's author, Dr Adrienne Mayor of Stanford University, is, according to the university website, an independent folklorist/historian of science investigating natural knowledge contained in pre-scientific myths and oral traditions. In a nutshell, Dr Mayor can be described as a force of mythological and folkloric understanding, and her previous works have been featured on NPR, BBC, History Channel, Smithsonian and National Geographic. Now, this new book offers readers comparisons between the legendary figures of ancient myths and the AI-driven robots of today, which are building tomorrow's world.

Vulcan (Hephaestus). Engraving by E. Jeaurat, 1716. ( CC BY 4.0 )

While the corridors of universities and academic institutions are teeming with thousands of professors skilled in powerful oratory in classic teaching environments, Dr Mayor has a quality that must be the envy of her peer group: that rare skill of original storytelling in written academic form. Not only does her book carefully analyze classic myths in a way the lay reader can easily digest, but throughout, her methodology and observational stances adhere to the scientific method of investigation. However, where so many academic writers deliver dry facts and figures with no real-world context, Dr Mayor subtly prompts readers to project the archetypal messages of timeworn stories into our modern zeitgeist, as we build a new world with AI at the fore.

Medeia and Talus by Sybil Tawse. ( Public Domain )

According to a report about the new book in The Daily Mail, Dr Mayor said ancient people envisioned many of the technology trends we grapple with today, including killer androids, driverless technology, GPS and AI-powered helper robots. Illustrating her hypothesis, the creations of Hephaestus, the god of metalworking and an inventor in Homer's Iliad, were predictions of the rise of humanoid robots. An article about Mayor's research in Greek Reporter said AI-powered helper robots and killer androids, according to Dr Mayor, appear in tales about Jason and the Argonauts, Medea, Daedalus and Prometheus, as well as the 'bronze killer-robot' Talos, who guarded the island of Crete. Furthermore, the legendary Pandora, whom Dr Mayor describes as a 'wicked AI fembot' like the replicant in the blockbuster movie Blade Runner, had been programmed to release eternal suffering upon humanity. "Though the Greeks did not know how technology would work, they could foreshadow its rise in society," said Mayor.

Pandora trying to close the box that she had opened out of curiosity. ( Public Domain )

A book review on Science Mag, by Sarah Olson, softly criticized Dr Mayor, saying: "Despite her extensive knowledge of ancient mythology, Mayor does little to demonstrate an understanding of modern AI, neural networks, and machine learning; the chatbots are among only a handful of examples of modern technology she explores." While Olson's observation is valid, looking at it another way, isn't this actually a veiled credit to the author? So often modern authors, especially scientific writers, speculate into complicated fields beyond their core understanding, which dilutes the heart of their research. Contrary to this, it would appear Dr Mayor realized her speculations about future technologies, including AI, would only ever be speculations, and rather than opening herself up to the scathing reviews of Silicon Valley tech geeks, she loyally focused her research on her specialist subject, which is quite clearly classical mythology.

The Science Mag article also criticized Mayor for not having added a few sentences to explain the difference between, say, machine learning and AI, which the reviewer claims makes it difficult for readers to identify the book's intended audience. Again, this is possibly another credit to the author. Here's why: in our hyper-commercialized world, seldom do authors write honest books simply because they believe a story needs to be written. Because Dr Mayor's book was not written for a defined audience, it will be remembered as a brave scientific sentinel that will undoubtedly find, or make, its readership organically over time.

When you read this book, the ultimate takeaway is that the observations are un-skewed and non-sensationalized; neither are they dumbed down to fit a publisher's predetermined audience. And when a book delivers more suggestions and questions than answers, as this one does, it immediately becomes a refreshingly non-egotistical trip through classic mythology. What is more, the author has left sufficient space for readers to indulge their own ideas and conclusions based on their understanding of technology. Thus, what has actually been published is more than a book; the work marks a new generation of psychologically interactive mythological learning. You finish the story Dr Mayor began.

An article about Dr Mayor's book on News said the author is urging leading tech bosses to closely analyze the stories and characters of Greek mythology as we close in on a future dominated by automated technologies. Gods and Robots offers optimistic insights with cautionary twists while warning of the potential risks of uncontrolled future technologies, and it is clear that Dr Mayor herself believes AI might one day deliver the mythological worlds our ancient ancestors imagined and immortalized in their folk stories.

Top image: Was artificial intelligence predicted by the Greeks? Source: pict rider via Fotolia

By Ashley Cowie


It's going to be a Happy New Year for Artificial …

MUMBAI | NEW DELHI: Artificial intelligence (AI) is the buzz in the jobs bazaar as machine learning and the Internet of Things (IoT) increasingly influence business strategies and analytics. Human resource and search experts estimate a 50-60% higher demand for AI and robotics professionals in 2018 even as machines take over repetitive manual work.

Machines are taking over repetitive tasks. Robotics, AI, big data and analytics will be competencies that will be in great demand, said Shakun Khanna, senior director at Oracle for the Asia-Pacific region.

Organisations are being pushed to become even more efficient as jobs turn predictable, said Rishabh Kaul, cofounder of recruitment startup Belong, which helps clients search for and hire AI professionals. There is a significant increase in the adoption of AI and automation across enterprises, leading to a skyrocketing of demand for professionals in these fields, he said.

Jobs in the IoT ecosystem have grown fourfold in the last three years, according to estimates by Belong. These relate to engagement technologies and data capture, among other areas. Demand for AI professionals in the realm of data analysis, including data scientists, has grown by almost 76% in the past few years.

The demand is at the entry level as well as the middle to senior ranks, across sectors such as banking, financial services and insurance (BFSI), ecommerce, startups, business process outsourcing (BPO), information technology (IT), pharmaceuticals, healthcare and retail. Robotics is required by process-oriented companies for a better customer experience. It helps in cutting down cost and improves efficiency, said Thammaiah BN, managing director, Kelly Services India.

AI is helping companies enter spaces not previously thought of. Organisations can accomplish new things, and create new products and services, through AI.

Companies want to mine the data they have accumulated over the years, said Sinosh Panicker, partner, Hunt Partners. AI helps them predict and position their products better and push out new things, he said. However, there's an acute demand-supply mismatch for AI talent across industries, experts said. Candidates for AI roles related to natural language processing (NLP), deep learning and machine learning are thin on the ground, according to the Belong Talent Supply Index. The ratio of the number of available people to jobs in deep learning is 0.53, while for machine learning it is 0.63 and for NLP it is 0.71.

Only 4% of AI professionals in India have worked on cutting-edge technologies such as deep learning and neural networks, the key ingredients in building advanced AI-related solutions, said Kaul.

Roles in data science and data engineering (which are different facets of the AI family of skills) are at the intersection of math, statistics and programming, he said. This isn't typically taught at Indian colleges as part of formal learning.

A few academic institutions such as the Indian Institutes of Technology (IITs) in Kharagpur and Kanpur, the Indian Institute of Information Technology (IIIT) in Hyderabad and the Indian Institute of Science (IISc) in Bengaluru have specialised disciplines or centres for artificial intelligence and machine learning. In fact, according to our internal research, less than 2% of professionals who call themselves data scientists or data engineers have a PhD in AI-related technologies, said Kaul.

Such is the need for talent that it is prompting top business schools, including the Indian Institutes of Management (IIMs), to include AI and machine learning in their curriculum and expose students to the full ecosystem of IoT. The IIMs in Bangalore and Kozhikode and premier B-Schools like the SP Jain Institute of Management & Research (SPJIMR) are offering courses on AI, robotics and IoT that can be connected to business strategy to enhance performance, output and customer experience.

Some are learning these skills through various other courses, including online ones. People who keep themselves abreast of new-age technologies and have the right set of skills are in high demand, said ABC Consultants director Ratna Gupta.
