8 Examples of Artificial Intelligence in our Everyday Lives

Main Examples of Artificial Intelligence Takeaways:

The term artificial intelligence may seem like a far-off concept that has nothing to do with us. But the truth is that we encounter several examples of artificial intelligence in our daily lives.

From Netflix's movie recommendations to Amazon's Alexa, we now rely on various AI models without knowing it. In this post, we'll consider eight examples of how we're already using artificial intelligence.

Artificial intelligence is an expansive branch of computer science that focuses on building smart machines. Thanks to AI, these machines can learn from experience, adjust to new inputs, and perform human-like tasks. For example, chess-playing computers and self-driving cars rely heavily on natural language processing and deep learning to function.

American computer scientist John McCarthy coined the term artificial intelligence back in 1956. At the time, McCarthy only created the term to distinguish the AI field from cybernetics.

However, AI is more popular than ever today.

Hollywood movies tend to depict artificial intelligence as a villainous technology that is destined to take over the world.

One example is the artificial superintelligence system Skynet from the film franchise Terminator. There's also VIKI, an AI supercomputer from the movie I, Robot, which deemed that humans can't be trusted with their own survival.

Hollywood has also depicted AI as superintelligent robots, as in the movies I Am Mother and Ex Machina.

However, current AI technologies are neither as sinister nor as advanced. That said, these depictions raise an essential question: is artificial intelligence the same as robotics?

No, not exactly. Artificial intelligence and robotics are two entirely separate fields. Robotics is a branch of technology that deals with physical robots: programmable machines designed to perform a series of tasks. On the other hand, AI involves developing programs to complete tasks that would otherwise require human intelligence. However, the two fields can overlap to create artificially intelligent robots.

Most robots are not artificially intelligent. For example, industrial robots are usually programmed to perform the same repetitive tasks. As a result, they typically have limited functionality.

However, introducing an AI algorithm to an industrial robot can enable it to perform more complex tasks. For instance, it can use a path-finding algorithm to navigate around a warehouse autonomously.
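The path-finding idea mentioned above can be sketched with a simple breadth-first search over a grid. The warehouse layout, start, and goal below are invented for illustration; real robots use richer maps and stronger algorithms such as A*:

```python
from collections import deque

def find_path(grid, start, goal):
    """Breadth-first search on a warehouse grid.

    grid: 2D list where 0 is open floor and 1 is an obstacle (e.g. shelving).
    Returns the list of (row, col) cells from start to goal, or None.
    """
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Reconstruct the path by walking back to the start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and nxt not in came_from:
                came_from[nxt] = cell
                queue.append(nxt)
    return None  # goal unreachable

warehouse = [
    [0, 0, 0],
    [1, 1, 0],  # a row of shelving blocks the direct route
    [0, 0, 0],
]
print(find_path(warehouse, (0, 0), (2, 0)))
```

Because BFS explores cells in order of distance, the first path it finds around the shelving is also a shortest one.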

To understand how that's possible, we must address another question: what are the types of artificial intelligence?

The four artificial intelligence types are reactive machines, limited memory, Theory of Mind, and self-aware. These types form a hierarchy, where the simplest level requires only basic functioning and the most advanced level is, well, all-knowing. Related fields and subsets of AI include machine learning, natural language processing, and big data.

The simplest types of AI systems are reactive. They can neither learn from experience nor form memories. Instead, reactive machines respond to a given input with a corresponding output.

Examples of artificial intelligence machines in this category include Google's AlphaGo and IBM's chess-playing supercomputer, Deep Blue.

Deep Blue can identify chess pieces and knows how each of them moves. While the machine can choose the optimal move from several possibilities, it can't predict the opponent's moves.

A reactive machine doesn't rely on an internal concept of the world. Instead, it perceives the world directly and acts on what it sees.

Limited memory refers to an AIs ability to store previous data and use it to make better predictions. In other words, these types of artificial intelligence can look at the recent past to make immediate decisions.

Note that limited memory is required to train every machine learning model. However, once trained, a model can be deployed as a reactive machine.
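The contrast between the two types can be sketched in a few lines. The distance thresholds and "braking" policy below are invented for illustration; the point is only that the reactive policy sees the current input alone, while the limited-memory agent also consults a short window of recent readings:

```python
from collections import deque

def reactive_policy(distance_ahead):
    """Reactive machine: the output depends only on the current input."""
    return "brake" if distance_ahead < 10 else "cruise"

class LimitedMemoryPolicy:
    """Limited-memory agent: keeps a short window of recent readings."""
    def __init__(self, window=3):
        self.history = deque(maxlen=window)

    def decide(self, distance_ahead):
        self.history.append(distance_ahead)
        # If the gap ahead is steadily shrinking, slow down early,
        # even though the current reading alone looks safe.
        recent = list(self.history)
        if len(recent) == self.history.maxlen and \
                all(a > b for a, b in zip(recent, recent[1:])):
            return "brake"
        return reactive_policy(distance_ahead)

agent = LimitedMemoryPolicy()
print([agent.decide(d) for d in (40, 30, 20)])
```

With readings of 40, 30, then 20, the purely reactive policy would still say "cruise" at 20, but the limited-memory agent notices the closing gap and brakes.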

A significant example of artificial intelligence in this category is the self-driving car.

Self-driving cars are limited memory AIs that make immediate decisions using data from the recent past.

For example, self-driving cars use sensors to identify steep roads, traffic signals, and pedestrians crossing the street. The vehicles can then use this information to make better driving decisions and avoid accidents.

In psychology, theory of mind refers to the ability to attribute mental states (beliefs, intent, desires, emotions, knowledge) to oneself and others. It's the fundamental reason we can have social interactions.

Unfortunately, we're yet to reach the Theory of Mind artificial intelligence type. Although voice assistants exhibit some such capabilities, it's still a one-way relationship.

For example, you could yell angrily at Google Maps to take you in another direction. However, it'll neither show concern for your distress nor offer emotional support. Instead, the map application will return the same traffic report and ETA.

An AI system with Theory of Mind would understand that humans have thoughts, feelings, and expectations about how they should be treated. That way, it could adjust its responses accordingly.

The final step of AI development is to build self-aware machines that can form representations of themselves. It's an extension and advancement of Theory of Mind AI.

A self-aware machine would have human-level consciousness, with the ability to think, desire, and understand its own feelings. At the moment, these types of artificial intelligence exist only in movies and on comic book pages; self-aware machines do not exist.

Although self-aware machines are still decades away, several artificial intelligence examples already exist in our everyday lives.

Several examples of artificial intelligence impact our lives today. These include FaceID on iPhones, the search algorithm on Google, and the recommendation algorithm on Netflix. You'll also find AI at work on social media, in digital assistants like Alexa, and in ride-hailing apps such as Uber.

Virtual filters on Snapchat and the FaceID unlock on iPhones are two examples of AI applications today. While the former uses face detection technology to identify any face, the latter relies on face recognition.

So, how does it work?

The TrueDepth camera on Apple devices projects over 30,000 invisible dots to create a depth map of your face. It also captures an infrared image of the user's face.

After that, a machine learning algorithm compares the scan of your face with the previously enrolled facial data. That way, it can determine whether to unlock the device.
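The compare-against-enrolled-data step can be illustrated with a toy embedding comparison. The vectors, threshold, and function names below are invented for illustration; Apple's actual pipeline is proprietary and far more sophisticated:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def matches_enrolled(scan_embedding, enrolled_embedding, threshold=0.9):
    """Unlock only if the fresh scan is close enough to the enrolled template."""
    return cosine_similarity(scan_embedding, enrolled_embedding) >= threshold

enrolled = [0.12, 0.80, 0.55]   # embedding captured at enrollment (made up)
fresh = [0.10, 0.82, 0.52]      # embedding from the current scan (made up)
print(matches_enrolled(fresh, enrolled))
```

A slightly different scan of the same face yields a nearby embedding and passes the threshold, while an unrelated face produces a distant embedding and is rejected.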

According to Apple, FaceID automatically adapts to changes in the user's appearance, such as wearing cosmetic makeup, growing facial hair, or wearing hats, glasses, or contact lenses.

The Cupertino-based tech giant also stated that the chance of fooling FaceID is one in a million.

Several text editors today rely on artificial intelligence to provide the best writing experience.

For example, document editors use an NLP algorithm to identify incorrect grammar usage and suggest corrections. Besides auto-correction, some writing tools also provide readability and plagiarism grades.
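One building block of such correction features can be sketched with a classic edit-distance check. This is a simplified stand-in for what production NLP tools do, and the small dictionary below is invented for illustration:

```python
def edit_distance(a, b):
    """Levenshtein distance between two words, via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def suggest(word, dictionary):
    """Suggest the dictionary word closest to the (possibly misspelled) input."""
    return min(dictionary, key=lambda w: edit_distance(word, w))

print(suggest("recieve", ["receive", "because", "wednesday"]))
```

Real editors combine signals like this with language models and context, but the core idea of ranking candidate corrections by distance to the typed word is the same.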

However, editors such as INK take AI a step further to provide specialized functions, using artificial intelligence to offer smart web content optimization recommendations.

INK recently released a study showing how its AI-powered writing platform can improve content relevance and help drive traffic to sites.

Social media platforms such as Facebook, Twitter, and Instagram rely heavily on artificial intelligence for various tasks.

Currently, these social media platforms use AI to personalize what you see in your feed. The model identifies users' interests and recommends similar content to keep them engaged.

Also, researchers trained AI models to recognize hate keywords, phrases, and symbols in different languages. That way, the algorithm can swiftly take down social media posts that contain hate speech.

Other applications of artificial intelligence in social media are still emerging.

Social media platforms also plan to use artificial intelligence to identify mental health problems. For example, an algorithm could analyze the content a user posts and consumes to detect suicidal tendencies.

Getting answers to queries directly from a customer service representative can be very time-consuming. That's where artificial intelligence comes in.

Computer scientists use natural language processing to train chat robots, or chatbots, to imitate the conversational style of customer service representatives.

Chatbots can now answer questions that require a detailed response rather than a simple yes or no. What's more, the bots can learn from previous bad ratings to ensure customer satisfaction.

As a result, machines now perform basic tasks such as answering FAQs or taking and tracking orders.
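A minimal FAQ-answering bot can be sketched with simple word-overlap matching; the FAQ entries below are invented for illustration, and production chatbots use trained NLP models rather than raw word counts:

```python
FAQ = {
    "what are your opening hours": "We're open 9 a.m. to 5 p.m., Monday through Friday.",
    "how do i track my order": "You can track your order from the Orders page in your account.",
    "how do i reset my password": "Use the 'Forgot password' link on the login screen.",
}

def answer(question):
    """Return the FAQ answer whose question shares the most words with the input."""
    q_words = set(question.lower().replace("?", "").split())
    best = max(FAQ, key=lambda entry: len(q_words & set(entry.split())))
    return FAQ[best]

print(answer("Where can I track my order?"))
```

Even this crude matcher routes a paraphrased question ("Where can I track my order?") to the right canned answer, which is the essence of FAQ-style chatbot routing.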

Media streaming platforms such as Netflix, YouTube, and Spotify rely on smart recommendation systems that are powered by AI.

First, the system collects data on users' interests and behavior from various online activities. After that, machine learning and deep learning algorithms analyze the data to predict preferences.

That's why you'll always find movies you're likely to enjoy among Netflix's recommendations, without having to search any further.
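The core collaborative-filtering idea can be sketched in a few lines: find the user whose viewing history most overlaps with yours, then suggest what they watched that you haven't. The titles and histories below are invented for illustration, and real recommenders use learned embeddings rather than raw set overlap:

```python
def recommend(target_history, other_users):
    """Suggest titles watched by the user whose history overlaps most with ours."""
    neighbor = max(other_users,
                   key=lambda history: len(set(history) & set(target_history)))
    # Recommend what the most similar user watched that we haven't.
    return sorted(set(neighbor) - set(target_history))

me = ["Stranger Things", "Dark", "Black Mirror"]
others = [
    ["Stranger Things", "Dark", "The OA"],  # two titles in common with us
    ["The Crown", "Bridgerton"],            # nothing in common
]
print(recommend(me, others))
```

The viewer who shares two titles with us is the nearest neighbor, so their remaining title becomes the recommendation.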

Search algorithms ensure that the top results on the search engine results page (SERP) have the answers to our queries. But how does this happen?

Search companies usually include some type of quality-control algorithm to recognize high-quality content. The engine then returns a list of search results that best answer the query and offer the best user experience.

Since search engines are made entirely of code, they rely on natural language processing (NLP) technology to understand queries.
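The simplest form of query-to-document matching can be sketched as counting query terms in each document. The documents below are invented for illustration, and real engines layer hundreds of ranking signals (and models like BERT, discussed next) on top of ideas like this:

```python
def rank(query, documents):
    """Order documents by how many query terms each contains (a crude relevance score)."""
    terms = query.lower().split()

    def score(doc):
        words = doc.lower().split()
        return sum(words.count(term) for term in terms)

    return sorted(documents, key=score, reverse=True)

docs = [
    "A page about cooking pasta at home",
    "Search engines rank pages for every query",
    "How search engines use query terms to rank pages",
]
print(rank("how search engines rank pages", docs)[0])
```

The document containing the most query terms rises to the top, which is the intuition behind classical term-frequency ranking schemes.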

Last year, Google announced Bidirectional Encoder Representations from Transformers (BERT), an NLP pre-training technique. Now, the technology powers almost every English-based query on Google Search.

In October 2011, Apple's Siri became the first digital assistant to come standard on a smartphone. However, voice assistants have come a long way since then.

Today, Google Assistant incorporates advanced NLP and ML to become well-versed in human language. Not only does it understand complex commands, but it also provides satisfactory outputs.

Also, digital assistants now have adaptive capabilities for analyzing user preferences, habits, and schedules. That way, they can organize and plan actions such as reminders, prompts, and schedules.

Various smart home devices now use AI applications to conserve energy.

For example, smart thermostats such as Nest use our daily habits and heating/cooling preferences to adjust home temperatures. Likewise, smart refrigerators can create shopping lists based on what's missing from the fridge's shelves.

The way we use artificial intelligence at home is still evolving. More AI solutions now analyze human behavior and function accordingly.

We encounter AI daily, whether we're surfing the internet or listening to music on Spotify.

Other examples of artificial intelligence are visible in smart email apps, e-commerce, smart keyboard apps, as well as banking and finance. Artificial intelligence now plays a significant role in our decisions and lifestyle.

The media may have portrayed AI as a competitor to human workers or a concept that'll eventually take over the world. But that's not the case.

Instead, artificial intelligence is helping humans become more productive and live better lives.


Former Pentagon official says China has won artificial intelligence battle | TheHill – The Hill

The Pentagon's former software chief resigned, saying that China is headed toward global dominance in artificial intelligence due to the relatively slow pace of innovation in the United States.

"We have no competing fighting chance against China in 15 to 20 years. Right now, it's already a done deal; it is already over in my opinion," the Pentagon's former software chief, Nick Chaillan, told the Financial Times, adding that some of the U.S.'s cyber defense systems were at "kindergarten level."

Chaillan announced his resignation last month as an act of protest against the United States' slow pace of tech development. Chaillan said America's failure to aggressively pursue AI capacity was putting the nation at risk, according to Reuters.

Western intelligence reports predict that in the next decade, China will dominate many emerging technologies, including AI, synthetic biology, and genetics, Reuters reported.

Chaillan also attributed the sluggish pace to companies like Google hesitating to work with the government on AI and ongoing debates about AI ethics in the U.S., while China pushes forward without consideration for the potential ethical consequences.

"Google is proud to work with the U.S. government, and we have many projects underway today, including with the Department of Defense, Department of Energy, and the NIH," a Google Cloud spokesperson said in a statement to The Hill. "We are committed to continuing to partner with the U.S. government, including the military, both on specific projects and on broader policy around AI that are consistent with our principles."

Meanwhile, Secretary of Defense Lloyd J. Austin III in July recognized that "China is our pacing challenge" when it comes to AI development.

"We're going to compete to win, but we're going to do it the right way," Austin said. "We're not going to cut corners on safety, security, or ethics."

In a LinkedIn post announcing his departure on Sept. 2, Chaillan insisted that the U.S. could not "afford to be behind."

"If the US can't match the booming, hardworking population in China, then we have to win by being smarter, more efficient, and forward-leaning through agility, rapid prototyping and innovation. We have to be ahead and lead."

Chaillan was also critical of the Department of Defense and its decisions to put people with limited IT experience in leadership roles over software programs.

"The DoD should stop pretending they want industry folks to come and help if they are not going to let them do the work. While we wasted time in bureaucracy, our adversaries moved further ahead," Chaillan said.

"I will always feel some guilt or regret in leaving. I have this sinking feeling that I am letting our warfighters, the teams, and my children down by not continuing to fight for a better outcome 20 years from now," Chaillan added of his departure.



AI in Robotics: Robotics and Artificial Intelligence 2021 – Datamation

Artificial intelligence (AI) is driving the robotics market into new areas, including mobile robots on the factory floor, robots that can perform a large number of tasks rather than specializing in one, and robots that can keep track of inventory levels while fetching orders for delivery.

Such advanced functionality has raised the complexity of robotics. Hence the need for AI.

Artificial intelligence provides the ability to monitor many parameters in real time and make decisions. For example, an inventory robot has to know its own location, the location and levels of all stock, and the sequence in which to retrieve items for orders; it must also know the location of other robots on the floor, navigate the site, change course when a human is near, take deliveries to shipping, keep track of everything, and more.

The mobile robot also has to interoperate with various shop floor systems, computer numerical control (CNC) equipment, and other industrial systems. AI helps all those disparate systems work together seamlessly by being able to process their various inputs in real-time and coordinate action.

The autonomous robotic market alone is worth around $103 billion this year, according to Rob Enderle, an analyst at Enderle Group. He predicts that it will more than double by 2025 to $210 billion.

"It will only go vertical from there," Enderle said.

Thats only one portion of the market. Another hot area is robotic process automation (RPA). It, too, is being integrated with AI to deal with high-volume, repeatable tasks. By handing these tasks over to robots, labor costs are reduced, workflows can be streamlined, and assembly processes are accelerated. Software can be written, for example, to take care of routine queries, calculations, and record keeping.

Historically, two different teams were needed: one for robotics and another for factory automation. The robotics team consists of specialized technicians with their own programming language to deal with the complex kinematics of multi-axis robots. Factory automation engineers, on the other hand, use programmable logic controllers (PLCs) and shop floor systems that utilize different programming languages. But software is now on the market that brings these two worlds together.

Further, better software and more sophisticated hardware has opened the door to a whole new breed of robot. While basic models operate on two axes, the latest breed of robotic machine with AI is capable of movement on six axes. They can be programmed to either carry out one task, over and over with high accuracy and speed, or execute complex tasks, such as coating or machining intricate components.


Honda's ASIMO has become something of a celebrity. This advanced humanoid robot has been programmed to walk like a human, maintain balance, and do backflips.

But now AI is being used to advance its capabilities with an eventual view toward autonomous motion.

"The difficulty is no longer building the robot but training it to deal with unstructured environments, like roads, open areas, and building interiors," Enderle said. "They are complex systems with massive numbers of actuators and sensors to move and perceive what is around them."

Sight Machine, the developer of a manufacturing data platform, has partnered with Nissan to use AI to perform anomaly detection on 300 robots working on an automated final assembly process.

This system provides predictions and root-cause analysis for downtime.


Siemens and AUTOParkit have formed a partnership to bring parking into the 21st century.

Using Siemens automation controls with AI, the AUTOParkit solution provides a safe valet service without the valet.

This fully automated parking solution can achieve 2:1 efficiency over a conventional parking approach, AUTOParkit says. It reduces parking-related fuel consumption by 83% and carbon emissions by 82%.

In such a complex system, specialized vehicle-specific hardware and software work together to provide a smooth and seamless parking experience that is far faster than traditional parking. Siemens controls use AI to pull it all together.

Kawasaki has a large offering of robots that are primarily used in fixed installations. But now it is working on robotic mobility and that takes AI.

For stationary robots to work seamlessly with mobile robots, it is essential that they can exchange information accurately and without failure, said Samir Patel, senior director of robotics engineering, Kawasaki Robotics USA.

To meet such integration requirements, Kawasaki robot controllers offer numerous options, including EtherNet TCP/IP, EtherNet IP, EtherCat, PROFIBUS, PROFINET and DeviceNet. These options not only allow our robots to communicate with mobile robots, but also allow communication to supervisory servers, PLCs, vision systems, sensors, and other devices.

With so many data sources to communicate with and instantaneous response needed to provide operational efficiency and maintain safety, AI is needed.

"Over time, each robot accumulates data, such as joint load, speed, temperature, and cycle count, which periodically gets transferred to the network server," Patel said. In turn, the server, running an application such as Kawasaki's Trend Manager, can analyze the data for performance and failure prediction.

Sight Machine, in close cooperation with Komatsu, has developed a system that can rapidly analyze 500 million data points from 600 welding robots.

The AI-based system can provide early warning of potential downtime and other welding faults.



The AI-Enabled Telco Takes Shape: Why Telcos Are Using Artificial Intelligence To Rollout Their 5G Services – Woburn Daily Times

BOSTON, Oct. 11, 2021 /PRNewswire/ -- Over the next five years, Bain & Company expects 5G to enter the mainstream, gaining popularity through accelerated deployment by telcos, affordable handsets, and other major uses for the technology. According to the firm's analysis, the adoption of 5G is expected to be faster in its first seven years (2018 to 2025) than the adoption of 4G in the seven years following its market debut in 2009.

Bain & Company's research shows that the number of 5G connections worldwide will triple from less than 700 million today to more than 2.1 billion by 2025. This strong momentum reflects heavy operator investment in 5G infrastructure, a gradual expansion of 5G use cases, and a global hunger for data connectivity, which has surged during the pandemic. Yet despite this momentum, many telcos still struggle to reap the full rewards that 5G has to offer. In Bain's new report, AI = ROI: How Artificial Intelligence Is (Already) Solving the 5G Equation, the firm explores how operators are using artificial intelligence to accrue a better return on investment (ROI) from 5G deployment.

"Artificial intelligence is already being used by leading telcos to gain a strategic advantage in 5G," said Herbert Blum, head of Bain & Company's Global Communications, Media & Entertainment practice. "But being AI-native requires more than an optimization of existing business processes or workflow overlays. It demands that the role of employees across all functions evolves in partnership with the technology as well."

Bain's new research shows how a telco that uses AI tools in its 5G rollout could develop a differentiated capability for putting the right infrastructure in the right place, with surgical precision and at dizzying scale. For instance, one major ROI challenge with 5G stems from the spectrum bands that the technology uses. 5G's higher-frequency signals do not travel as far or penetrate buildings as well as the lower-frequency signals used by 4G, requiring operators to deploy as many as 100 times the number of cells used by 4G for their 5G services. AI can help solve this engineering conundrum, and one of the sector's toughest challenges, by accelerating decisions from months and weeks to days and minutes, with a precision and scale that exceeds what is humanly possible.

"Even digitally native telcos are not immune to the complexities brought by 5G adoption, particularly if they still rely on a labor-intensive workflow," said Darryn Lowe, a leader in Bain & Company's Communications, Media and Entertainment practice. "In the coming years, winning telcos will be operators that use 5G, and other high-stakes business areas, as a proving ground for the deeper AI capabilities they'll need to gain to remain competitive."

Editor's Note: To arrange an interview, contact Katie Ware at katie.ware@bain.com or +1 646 562 8102.

About Bain & Company

Bain & Company is a global consultancy that helps the world's most ambitious change makers define the future.

Across 63 offices in 38 countries, we work alongside our clients as one team with a shared ambition to achieve extraordinary results, outperform the competition, and redefine industries. We complement our tailored, integrated expertise with a vibrant ecosystem of digital innovators to deliver better, faster, and more enduring outcomes. Our 10-year commitment to invest more than $1 billion in pro bono services brings our talent, expertise, and insight to organizations tackling today's urgent challenges in education, racial equity, social justice, economic development, and the environment. We earned a gold rating from EcoVadis, the leading platform for environmental, social, and ethical performance ratings for global supply chains, putting us in the top 2% of all companies. Since our founding in 1973, we have measured our success by the success of our clients, and we proudly maintain the highest level of client advocacy in the industry.

Media Contacts:

Katie Ware

Bain & Company

Tel: +1 646 562 8107

katie.ware@bain.com

View original content to download multimedia:https://www.prnewswire.com/news-releases/the-ai-enabled-telco-takes-shape-why-telcos-are-using-artificial-intelligence-to-rollout-their-5g-services-301397181.html

SOURCE Bain & Company


IBM and Raytheon Technologies to Collaborate on Artificial Intelligence, Cryptography and Quantum Technologies – HPCwire

ARMONK, N.Y., Oct. 11, 2021 – IBM and Raytheon Technologies will jointly develop advanced artificial intelligence, cryptographic and quantum solutions for the aerospace, defense and intelligence industries, including the federal government, as part of a strategic collaboration agreement the companies announced today.

Artificial intelligence and quantum technologies give aerospace and government customers the ability to design systems more quickly, better secure their communications networks and improve decision-making processes. By combining IBM's breakthrough commercial research with Raytheon Technologies' own research, plus aerospace and defense expertise, the companies will be able to crack once-unsolvable challenges.

"The rapid advancement of quantum computing and its exponential capabilities has spawned one of the greatest technological races in recent history, one that demands unprecedented agility and speed," said Dario Gil, senior vice president, IBM, and director of Research. "Our new collaboration with Raytheon Technologies will be a catalyst in advancing these state-of-the-art technologies, combining their expertise in aerospace, defense and intelligence with IBM's next-generation technologies to make discovery faster, and the scope of that discovery larger than ever."

In addition to artificial intelligence and quantum, the companies will jointly research and develop advanced cryptographic technologies that lie at the heart of some of the toughest problems faced by the aerospace industry and government agencies.

"Take something as fundamental as encrypted communications," said Mark E. Russell, Raytheon Technologies' chief technology officer. "As computing and quantum technologies advance, existing cybersecurity and cryptography methods are at risk of becoming vulnerable. IBM and Raytheon Technologies will now be able to collaboratively help customers maintain secure communications and defend their networks better than previously possible."

The companies are building a technical collaboration team to quickly insert IBM's commercial technologies into active aerospace, defense and intelligence programs. The same team will also identify promising technologies for jointly developing long-term system solutions by investing research dollars and talent.

About IBM

IBM is a leading global hybrid cloud, AI, and business services provider, helping clients in more than 175 countries capitalize on insights from their data, streamline business processes, reduce costs and gain the competitive edge in their industries. Nearly 3,000 government and corporate entities in critical infrastructure areas such as financial services, telecommunications and healthcare rely on IBM's hybrid cloud platform and Red Hat OpenShift to effect their digital transformations quickly, efficiently, and securely. IBM's breakthrough innovations in AI, quantum computing, industry-specific cloud solutions and business services deliver open and flexible options to our clients. All of this is backed by IBM's commitment to trust, transparency, responsibility, inclusivity, and service. For more information, visit www.ibm.com.

About Raytheon Technologies

Raytheon Technologies Corporation is an aerospace and defense company that provides advanced systems and services for commercial, military and government customers worldwide. With four industry-leading businesses (Collins Aerospace Systems, Pratt & Whitney, Raytheon Intelligence & Space and Raytheon Missiles & Defense), the company delivers solutions that push the boundaries in avionics, cybersecurity, directed energy, electric propulsion, hypersonics, and quantum physics. The company, formed in 2020 through the combination of Raytheon Company and the United Technologies Corporation aerospace businesses, is headquartered in Waltham, Massachusetts.

Source: IBM


This Week in Washington IP: Ethics in Artificial Intelligence, Challenges with Carbon Removal and the USPTO Hosts the 2021 Hispanic Innovation and…

This week in Washington IP news, Congress is largely quiet except for a hearing of the House Artificial Intelligence Task Force regarding ethical frameworks for developing artificial intelligence (AI) applications in various industries. Elsewhere in D.C., the Center for Data Innovation explores data-driven approaches to addressing e-commerce counterfeits, The Brookings Institution hosts a conversation with Susteon's Shantanu Agarwal on the challenges of carbon removal tech, and the U.S. Patent and Trademark Office kicks off the 2021 Hispanic Innovation and Entrepreneurship Program with multiple fireside chats and a panel on building networks and resources available to the community of Hispanic innovators.

U.S. Patent and Trademark Office

Trademark Basics Boot Camp, Module 2: Registration Process Overview

At 2:00 PM on Tuesday, online video webinar.

This workshop, the second in the USPTO's eight-part Trademark Basics Boot Camp series, is designed to teach small business owners and entrepreneurs about different aspects of the trademark registration process. Topics covered in this workshop include trademark basics, application workflow, timeline overview and post-registration workflow overview.

House Task Force on Artificial Intelligence

Beyond I, Robot: Ethics, Artificial Intelligence, and the Digital Age

At 12:00 PM on Wednesday, online video webinar.

Ethics in robotics and artificial intelligence systems draws much of its foundation from the three laws of robotics developed by famed science fiction writer Isaac Asimov, which are predicated on the idea that AI systems are always meant to serve humans and never to harm them. With the advent of many AI technologies now upon us, several organizations have been developing ethical frameworks for AI applications that rely upon constant evaluation by human decision-makers and great transparency about the underlying goals guiding the development of particular algorithms. The witness panel for this hearing will include Meredith Broussard, Associate Professor, Arthur L. Carter Journalism Institute, New York University; Meg King, Director, Science and Technology Innovation Program, The Wilson Center; Miriam Vogel, President and CEO, EqualAI; Jeffrey Yong, Principal Advisor, Financial Stability Institute, Bank for International Settlements; and Aaron Cooper, Vice President for Global Policy, BSA The Software Alliance.

U.S. Patent and Trademark Office

2021 Hispanic Innovation and Entrepreneurship Program

At 1:00 PM on Wednesday, online video webinar.

This event features leaders from the Hispanic community in innovation and entrepreneurship and offers an overview of the innovation resources available to that community. It will include a pair of fireside chats with Alejandra Y. Castillo, Assistant Secretary of Commerce for Economic Development; Nestor Ramirez, Technology Center Director, USPTO; Leandro Margulis, Inventor of Durable Radio-Frequency Identification (RFID) Device; and Marivelisse Santiago-Cordero, Senior Advisor to the Deputy Commissioner for Patents, USPTO. The event will also feature a discussion about building networks and finding mentors with a panel including Jennifer Garcia, COO, Latin Business Action Network, Stanford Latino Entrepreneurship Initiative; Olga Carmargo, CEO and Founder, FARO Associates LLC, and Board Chair, Hispanic Alliance for Career Enhancement; Susana G. Baumann, President, CEO, and Editor-in-Chief, Latinas in Business Inc.; and Tito Leal, CFO, Prosperity Lab; moderated by Juan Valentin, Education Program Advisor, Office of Education, USPTO.

The Brookings Institution

Carbon Removal Innovations and Their Challenges: A Conversation With Susteon President Shantanu Agarwal

At 2:00 PM on Wednesday, online video webinar.

Carbon removal technologies that can sequester airborne sources of carbon have the potential to play a critical role in mitigating climate change, but several promising carbon removal innovations remain stuck in basic research phases far from the commercialization pipeline. This event, part of The Brookings Institution's Reimagining Modern-Day Markets and Regulations series, will feature a fireside chat with Shantanu Agarwal, Co-Founder and President of climate impact technology firm Susteon Inc. Moderating the discussion with Agarwal will be Sanjay Patnaik, Director, Center on Regulations and Markets, and the Bernard L. Schwartz Chair in Economic Policy Development, Fellow, Economic Studies.

Center for Data Innovation

A Data-Driven Approach to Combatting Counterfeit Goods in E-Commerce

At 1:00 PM on Thursday, online video webinar.

E-commerce has proved to be a boon to counterfeiters looking to exploit popular brands and fool American consumers into purchasing knockoff goods. This event will explore a new report issued by the National Intellectual Property Rights Center discussing the marketplace response to best practices developed by public and private entities looking to stem the tide of counterfeits sold via online platforms. This event will feature a discussion with a panel including Matthew C. Allen, Director, National Intellectual Property Rights Coordination Center; Christa Brozowski, Senior Manager of Public Policy, Amazon; Sara Decker, Director of Federal Government Affairs, Walmart; Piotr Stryszowski, Senior Economist, OECD; and moderated by Daniel Castro, Director, Center for Data Innovation.

U.S. Patent and Trademark Office

The Path to a Patent, Part II: Drafting Provisional Patent Applications

At 2:00 PM on Thursday, online video webinar.

This workshop, the second in the USPTO's eight-part Path to a Patent series, is designed to teach prospective patent applicants about the key differences between provisional and nonprovisional patent applications. Topics covered include filing requirements, fees, and different ways to file a provisional patent application.

Hudson Institute

Powering Innovation: Advanced Batteries and Critical Supply Chains

At 2:30 PM on Thursday, online video webinar.

Both the United States and China have been taking action to secure supply chains for products and components that are critical to national security, advanced batteries being one of the sectors identified by both nations as a supply chain priority. Advanced battery technologies have potential applications in electric vehicles, which many governments have been subsidizing to meet climate and emissions goals, as well as in national defense by enabling distributed operations in battlefield scenarios. The first panel for this event, discussing distributed operations and advanced batteries, will include Heather Penny, Senior Fellow, Mitchell Institute for Aerospace Studies; LTG Eric Wesley (Ret.), Former Deputy Commanding General, Army Futures Command, and Director, Futures and Concepts Center; Bryan Clark, Senior Fellow and Director, Center for Defense Concepts and Technology, Hudson Institute; and moderated by Nadia Schadlow, Senior Fellow, Hudson Institute. The second panel, discussing the U.S. government's role in promoting innovation, will include the Honorable Ellen Lord, Former Undersecretary of Defense for Acquisition and Sustainment; the Honorable Kimberly Reed, Former Chairman of the Board of Directors, President and CEO, U.S. Export-Import Bank; Mike Brown, Director, Defense Innovation Unit, U.S. Department of Defense; and moderated by Arthur Herman, Senior Fellow and Director, Quantum Alliance Initiative, Hudson Institute. The third panel, discussing China, supply chains and economic coercion, will include Anthony Vinci, Adjunct Senior Fellow, CNAS; Pavneet Singh, Non-Resident Senior Fellow, The Brookings Institution; John Lee, Senior Fellow, Hudson Institute; and moderated by Nadia Schadlow, Senior Fellow, Hudson Institute.

Information Technology & Innovation Foundation

Can GDPR's Automated Decision Opt-Out Be Improved Without Harming Users?

At 10:00 AM on Friday, online video webinar.

In the nearly two years that have elapsed since the UK government completed its Brexit transition out of the European Union, the country has been charting its own course on legal matters, and in recent weeks the UK government has been eyeing changes to Article 22 of the country's General Data Protection Regulation (GDPR). Article 22 of the GDPR governs restrictions on the automated processing of decisions for a data subject, and the UK government's moves have opened a discussion on the feasibility of changing protections against automated decision-making processes. This event will feature a discussion with a panel including Omar Tene, Former Vice President, International Association of Privacy Professionals; Isabelle de Pauw, Head of Data Rights, Domestic Data Protection and Data Rights Team, Department for Digital, Culture, Media and Sport; Chris Elwell-Sutton, Senior Privacy Counsel and Data Protection Officer, CIBC Capital Markets; Andrew Orlowski, Technology Commentator, Daily Telegraph; Kristian Stout, Director of Innovation Policy, International Center for Law & Economics; and moderated by Benjamin Mueller, Senior Policy Analyst, Center for Data Innovation.

U.S. Patent and Trademark Office

Attend the Trademark Public Advisory Committee Quarterly Meeting

At 10:00 AM on Friday, online video webinar.

On Friday morning, the Trademark Public Advisory Committee (TPAC) of the USPTO will convene its quarterly meeting to discuss issues related to the agency's trademark activities, including a review of policies, goals, budget, performance and user fees.


Read the original post:
This Week in Washington IP: Ethics in Artificial Intelligence, Challenges with Carbon Removal and the USPTO Hosts the 2021 Hispanic Innovation and...

NASA to Use Artificial Intelligence to Discover Rogue Exoplanets Wandering the Galaxy – Newsweek

Researchers have developed a new method to detect rogue planets outside the solar system, worlds that wander their galaxies alone without a parent star.

The technique, devised by NASA Goddard Space Flight Center scientist Richard K. Barry, unites astronomy's future (in the form of the soon-to-launch Nancy Grace Roman Space Telescope) with its past: a method used by 19th-century astronomers to measure distances.

The Contemporaneous LEnsing Parallax and Autonomous TRansient Assay (CLEoPATRA) mission will use parallax to measure distances, but the method will be bolstered by artificial intelligence (AI) developed by Dr. Greg Olmschenk.

Olmschenk's program, RApid Machine learnEd Triage (RAMjET), will learn patterns through provided examples, filtering out useless information and ensuring that, of the millions of stars observed by CLEoPATRA per hour, only useful information is transmitted back to Earth.

Recent research published in The Astronomical Journal suggests that exoplanets that exist in the Universe without a parent star could be more common than stars themselves, but until now spotting them has been difficult.

"The difficulty with detecting rogue planets is that they emit essentially no light. Since detecting light from an object is the main tool astronomers use to find objects, rogue planets have been elusive," the author of that paper and Thomas Jefferson professor for Discovery and Space Exploration at Ohio State University, Scott Gaudi, told Newsweek.

The most powerful method of spotting exoplanets (planets outside the solar system) is through the dips in light they cause as they pass in front of their parent stars. This transit method has resulted in the discovery of thousands of worlds added to the exoplanet catalog, but it doesn't work for planets that don't have host stars.
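
For intuition about how small these dips are, here is a generic sketch (not any mission's actual pipeline): the fraction of starlight blocked during a transit equals the ratio of the planet's disk area to the star's, so the depth is simply the squared radius ratio.

```python
def transit_depth(planet_radius_km: float, star_radius_km: float) -> float:
    """Fractional drop in observed stellar flux during a transit:
    depth = (R_planet / R_star) ** 2, the ratio of the two disk areas."""
    return (planet_radius_km / star_radius_km) ** 2

# A Jupiter-sized planet (~71,500 km) crossing a Sun-like star (~696,000 km)
depth = transit_depth(71_500, 696_000)
print(f"{depth:.4f}")  # 0.0106 -- roughly a 1% dip in brightness
```

An Earth-sized planet blocks far less light (under 0.01%), which is why transit surveys demand extremely precise photometry; and, as the article notes, a planet with no host star produces no transit at all.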

One way to spot rogue exoplanets is to wait until they cross between a distant Milky Way star and our telescopes here on Earth, intercepting the light from that star. When this happens, a phenomenon called gravitational lensing (the bending of light caused by a massive object) actually causes the light from that star to brighten.

CLEoPATRA will exploit this brightening, which is called microlensing when it involves a lensing object of small mass like a planet, and use parallax to measure the distance to these rogue worlds.
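
The brightening has a standard closed form for a single point lens. As an illustration of the effect the article describes (using the well-known point-lens magnification formula, not mission code), the magnification depends only on how closely the lens and background star align:

```python
import math

def microlens_magnification(u: float) -> float:
    """Point-source, point-lens magnification, where u is the lens-source
    separation in units of the Einstein radius: A = (u^2 + 2) / (u * sqrt(u^2 + 4)).
    Smaller u (closer alignment) means a brighter background star."""
    return (u * u + 2) / (u * math.sqrt(u * u + 4))

# The background star brightens sharply as the alignment tightens
for u in (1.0, 0.5, 0.1):
    print(u, round(microlens_magnification(u), 2))
```

Running this shows the magnification climbing from about 1.34 at one Einstein radius to roughly 10 at a tenth of it, which is why even a dark, starless planet can announce itself through a brief spike in a background star's light.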

"Roman [Space Telescope] will use a technique called gravitational microlensing to find rogue planets, which relies only on the gravity and thus the mass of the planet, and doesn't require detecting any light from the planet," Gaudi said

As microlensing events are both unpredictable and exceedingly rare, a telescope must monitor hundreds of millions of stars nearly continuously to spot them. And that takes a wide-field space telescope like the Nancy Grace Roman Space Telescope.

Parallax is the apparent shift in the position of an object when it is observed from different positions. The most familiar example of this is holding a finger close to our face and looking at it with one eye, and then switching to the other. The finger will look like it has moved.

Astronomers in the 19th century used this phenomenon to measure the distances to close stars by observing how their positions shifted according to the background of more distant stellar objects.
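
That 19th-century relation is simple enough to state in a few lines. As a sketch of the classic method (not CLEoPATRA's actual processing), a star's distance in parsecs is the reciprocal of its parallax angle in arcseconds:

```python
def distance_parsecs(parallax_arcsec: float) -> float:
    """Classic trigonometric parallax: a star whose apparent position shifts
    by a parallax angle of p arcseconds (measured across Earth's orbital
    baseline) lies at a distance of 1/p parsecs."""
    return 1.0 / parallax_arcsec

# Proxima Centauri, the nearest star, has a parallax of about 0.768 arcseconds
print(round(distance_parsecs(0.768), 2))  # 1.3 parsecs (~4.2 light-years)
```

The tiny size of these angles (well under a second of arc even for the nearest stars) is why the method historically only reached nearby stars, and why the microlensing variant described next swaps angular shifts for timing differences between widely separated observers.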

Using parallax in conjunction with microlensing events works slightly differently, with separated observers relying on precisely synchronized clocks to measure the differences in time between their observations of the event. This time delay then allows observers to calculate the distance to the lensing exoplanet as well as its mass and size.

"CLEoPATRA would be at a great distance from the principal observatory, either Roman or a telescope on Earth," Barry said in a NASA press release. "The parallax signal should then permit us to calculate quite precise masses for these objects, thereby increasing scientific return."

The benefit in spotting rogue exoplanets isn't just increasing the already burgeoning exoplanet catalog. Exploring these worlds could also teach us more about how the planets in our solar system, including Earth, formed and evolved.

"We want to find multiple free-floating planets and try to obtain information about their masses, so we can understand what is common or not common at all," research assistant at Goddard and Ph.D. student at the Catholic University of America in Washington, Stela Ishitani Silva, said. "Obtaining the mass is important to understanding their planetary development."

If all goes according to plan, CLEoPATRA will launch aboard a Mars mission around the same time as the launch of the Nancy Grace Roman Space Telescope, currently set for the mid-2020s.

"CLEoPATRA will permit us to estimate many high-precision masses for new planets detected by Roman and PRIME," said Barry. "And it may allow us to capture or estimate the actual mass of a free-floating planet for the first timenever been done before. So cool, and so exciting. Really, it's a new golden age for astronomy right now, and I'm just very excited about it."

Go here to see the original:
NASA to Use Artificial Intelligence to Discover Rogue Exoplanets Wandering the Galaxy - Newsweek

Artificial intelligence is now part of our everyday lives, and its growing power is a double-edged sword – Stuff Magazines

A major new report on the state of artificial intelligence (AI) has just been released. Think of it as the AI equivalent of an Intergovernmental Panel on Climate Change report, in that it identifies where AI is at today, and the promise and perils in view.

From language generation and molecular medicine to disinformation and algorithmic bias, AI has begun to permeate every aspect of our lives.

The report argues that we are at an inflection point where researchers and governments must think and act carefully to contain the risks AI presents and make the most of its benefits.

The report comes out of the AI100 project, which aims to study and anticipate the effects of AI rippling out through our lives over the course of the next 100 years.

AI100 produces a new report every five years: the first was published in 2016, and this is the second. As two points define a line, this second report lets us see the direction AI is taking us in.

One of us (Liz Sonenberg) is a member of the standing committee overseeing the AI100 project, and the other (Toby Walsh) was on the study panel that wrote this particular report. Members of the panel came from across the world, with backgrounds in computer science, engineering, law, political science, policy, sociology and economics.

The report highlights the remarkable progress made in AI over the past five years. AI is leaving the laboratory and has entered our lives, having a real-world impact on people, institutions, and culture. Read the news on any given day and you're likely to find multiple stories about some new advance in AI or some new use of AI.

For example, in natural language processing (NLP), computers can now analyse and even generate realistic human language. To demonstrate, we asked OpenAI's GPT-3 system, one of the largest neural networks ever built, to summarise the AI100 report for you. It did a pretty good job, even if the summary confronts our sense of self by being written in the first person:

In the coming decade, I expect that AI will play an increasingly prominent role in the lives of people everywhere. AI-infused services will become more common, and AI will become increasingly embedded in the daily lives of people across the world.

I believe that this will bring with it great economic and societal benefits, but that it will also require us to address the many challenges to ensure that the benefits are broadly shared and that people are not marginalised by these new technologies.

A key insight of AI research is that it is easier to build things than to understand why they work. However, defining what success looks like for an AI application is not straightforward.

For example, the AI systems that are used in healthcare to analyse symptoms, recommend diagnoses, or choose treatments are often far better than anything that could be built by a human, but their success is hard to quantify.

As a second example of the recent and remarkable progress in AI, consider the latest breakthrough from Google's DeepMind. AlphaFold is an AI program that provides a huge step forward in our ability to predict how proteins fold.

This will likely lead to major advances in life sciences and medicine, accelerating efforts to understand the building blocks of life and enabling quicker and more sophisticated drug discovery. Most of the planet now knows to their cost how the unique shape of the spike proteins in the SARS-CoV-2 virus are key to its ability to invade our cells, and also to the vaccines developed to combat its deadly progress.

The AI100 report argues that worries about super-intelligent machines and wide-scale job loss from automation are still premature, requiring AI that is far more capable than available today. The main concern the report raises is not malevolent machines of superior intelligence to humans, but incompetent machines of inferior intelligence.

Once again, it's easy to find in the news real-life stories of risks and threats to our democratic discourse and mental health posed by AI-powered tools. For instance, Facebook uses machine learning to sort its news feed and give each of its 2 billion users a unique but often inflammatory view of the world.

It's clear we're at an inflection point: we need to think seriously and urgently about the downsides and risks the increasing application of AI is revealing. The ever-improving capabilities of AI are a double-edged sword. Harms may be intentional, like deepfake videos, or unintended, like algorithms that reinforce racial and other biases.

AI research has traditionally been undertaken by computer and cognitive scientists. But the challenges being raised by AI today are not just technical. All areas of human inquiry, and especially the social sciences, need to be included in a broad conversation about the future of the field. Minimising negative impacts on society and enhancing the positives requires consideration from across academia and with societal input.

Governments also have a crucial role to play in shaping the development and application of AI. Indeed, governments around the world have begun to consider and address the opportunities and challenges posed by AI. But they remain behind the curve.

A greater investment of time and resources is needed to meet the challenges posed by the rapidly evolving technologies of AI and associated fields. In addition to regulation, governments also need to educate. In an AI-enabled world, our citizens, from the youngest to the oldest, need to be literate in these new digital technologies.

At the end of the day, the success of AI research will be measured by how it has empowered all people, helping tackle the many wicked problems facing the planet, from the climate emergency to increasing inequality within and between countries.

AI will have failed if it harms or devalues the very people we are trying to help.

Original post:
Artificial intelligence is now part of our everyday lives, and its growing power is a double-edged sword - Stuff Magazines

Bias in AI: Algorithm Bias in AI Systems to Know About 2021 – Datamation

The link between artificial intelligence (AI) and bias is alarming.

As AI evolves to become more human-like, it's becoming clear that human bias is impacting technology in negative, potentially dangerous ways.

Here, we explore how AI and bias are linked and what's being done to reduce the impact of bias in AI applications:

See more: The Ethics of Artificial Intelligence (AI)

Using AI in decision-making processes has become commonplace, mostly because predictive analytics algorithms can perform the work of humans at a much faster and often more accurate rate. Decisions are being made by AI on small matters, like restaurant preferences, and critical issues, like determining which patient should receive an organ donation.

While the stakes may differ, whether human bias is playing a role in AI decisions is sure to impact outcomes. Bad product recommendations impact retailer profit, and medical decisions can directly impact individual patient lives.

Vincent C. Müller takes a look at AI and bias in his research paper, Ethics of Artificial Intelligence and Robotics, included in the Summer 2021 edition of The Stanford Encyclopedia of Philosophy. Fairness in policing is a primary concern, Müller says, noting that human bias exists in the data sets used by police to decide, for example, where to focus patrols or which prisoners are likely to re-offend.

This kind of predictive policing, Müller says, relies heavily on data influenced by cognitive biases, especially confirmation bias, even when the bias is implicit and unknown to human programmers.

Christina Pazzanese refers to the work of political philosopher Michael Sandel, a professor of government, in her article, Great promise but potential for peril, in The Harvard Gazette.

Part of the appeal of algorithmic decision-making is that it seems to offer an objective way of overcoming human subjectivity, bias, and prejudice, Sandel says. But we are discovering that many of the algorithms that decide who should get parole, for example, or who should be presented with employment opportunities or housing replicate and embed the biases that already exist in our society.

See more: Artificial Intelligence: Current and Future Trends

To figure out how to remove or at least reduce bias in AI decision-making platforms, we have to consider why it exists in the first place.

Take the story of Microsoft's AI chatbot in 2016. The chatbot was set up to hold conversations on Twitter, interacting with users through tweets and direct messaging. In other words, the general public had a large part in determining the chatbot's personality. Within a few hours of its release, the chatbot was replying to users with offensive and racist messages, having been trained on anonymous public data, which was immediately co-opted by a group of people.

The chatbot was heavily influenced in a conscious way, but it's often not so clear-cut. In their joint article, What Do We Do About the Biases in AI, in the Harvard Business Review, James Manyika, Jake Silberg, and Brittany Presten say that implicit human biases, those which people don't realize they hold, can significantly impact AI.

Bias can creep into algorithms in several ways, the article says. It can include biased human decisions or reflect historical or social inequities, even if sensitive variables such as gender, race, or sexual orientation are removed. As an example, the researchers point to Amazon, which stopped using a hiring algorithm after finding it favored applicants based on words like "executed" or "captured," which were more commonly included on men's resumes.
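
A toy illustration of why deleting the sensitive variable is not enough (entirely hypothetical data, not Amazon's actual system): a feature that correlates with the removed attribute can act as a proxy for it, so a model that never sees the attribute can still discriminate by it.

```python
# Hypothetical resume records: the model never receives 'gender' as an input,
# but the keyword happens to appear mostly on one group's resumes, so any
# scoring rule that rewards the keyword favors that group by proxy.
resumes = [
    {"gender": "M", "has_keyword": True},
    {"gender": "M", "has_keyword": True},
    {"gender": "M", "has_keyword": False},
    {"gender": "F", "has_keyword": False},
    {"gender": "F", "has_keyword": False},
    {"gender": "F", "has_keyword": True},
]

def keyword_rate(group: str) -> float:
    """Fraction of a group's resumes containing the favored keyword."""
    rows = [r for r in resumes if r["gender"] == group]
    return sum(r["has_keyword"] for r in rows) / len(rows)

# Rewarding the keyword favors group M even with 'gender' stripped out
print(round(keyword_rate("M"), 2), round(keyword_rate("F"), 2))  # 0.67 0.33
```

This is the mechanism the article describes: historical or social patterns baked into the training text reintroduce the bias through correlated features.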

Flawed data sampling is another concern, the trio writes, when groups are overrepresented or underrepresented in the training data that teaches AI algorithms to make decisions. For example, facial analysis technologies analyzed by MIT researchers Joy Buolamwini and Timnit Gebru had higher error rates for minorities, especially minority women, potentially due to underrepresented training data.
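
One simple audit suggested by this kind of finding (a generic sketch, not Buolamwini and Gebru's actual methodology) is to compare a model's error rate per demographic group and flag any group it serves noticeably worse:

```python
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted, actual) tuples.
    Returns {group: fraction of wrong predictions}, exposing groups the
    model serves worse, e.g. due to underrepresented training data."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

sample = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 1),  # 1 error in 4
    ("B", 0, 1), ("B", 1, 0), ("B", 1, 1), ("B", 0, 1),  # 3 errors in 4
]
print(error_rates_by_group(sample))  # {'A': 0.25, 'B': 0.75}
```

A large gap between groups, as in this made-up sample, is exactly the signal the MIT facial-analysis study surfaced: overall accuracy can look acceptable while one group bears most of the mistakes.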

In the McKinsey Global Institute article, Tackling bias in artificial intelligence (and in humans), Jake Silberg and James Manyika lay out six guidelines AI creators can follow to reduce bias in AI.

The researchers acknowledge that these guidelines wont eliminate bias altogether, but when applied consistently, they have the potential to significantly improve on the situation.

See more: Top Performing Artificial Intelligence Companies

Read the original:
Bias in AI: Algorithm Bias in AI Systems to Know About 2021 - Datamation
