Category Archives: Artificial Intelligence

Industry Voices: Why the COVID-19 pandemic was a watershed moment for machine learning – FierceHealthcare

Posted: September 1, 2021 at 12:28 am

Times of crisis spark innovation and creativity, as evidenced in the way organizations have come together to innovate for the greater good during the COVID-19 pandemic.

Liquor distilleries started producing hand sanitizer, 3D printing companies made face shields and nasal swabs to meet massive demands, and auto companies shifted gears to make ventilators.

Machine learning (ML), computer systems that learn and adapt autonomously by using algorithms and statistical models to analyze and draw inferences from patterns in data to inform and automate processes, has also played an important role, supporting practically every aspect of healthcare. Amazon Web Services has supported customers as they enable remote patient care, develop predictive surge planning to help manage inpatient/ICU bed capacity, and tackle the unprecedented feat of developing a messenger ribonucleic acid (mRNA)-based COVID-19 vaccine in under a year.

We now have the opportunity to build on our lessons from the past year to apply ML to help address several underlying problems that plague the healthcare and life sciences communities.

Telehealth was on the rise before COVID-19, but it revealed its true potential during the pandemic. Telehealth is often viewed simply as patients and providers interacting online via video platforms but has proven capable of doing much more. Applying ML to telehealth provides a unique opportunity to innovate, scale and offer more personalized experiences for patients and ensure they have access to the resources and care they need, no matter where they're located.

ML-based telehealth tools such as patient service chatbots, call center interactions to better triage and direct patients to the information and care they require, and online self-service prescreenings are helping optimize patient experiences and streamline provider assessments and diagnostics.

For example, GovChat, South Africa's largest citizen engagement platform, launched a COVID-19 chatbot in less than two weeks using an artificial intelligence (AI) service for building conversational interfaces into any application using voice and text. The chatbot provides health advice and recommendations on whether to get a test for COVID-19, information on the nearest COVID-19 testing facility, the ability to receive test results, and the option for citizens to report COVID-19 symptoms for themselves, their family members, or other household members.
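
The article does not describe how GovChat's bot is built; as a rough, hypothetical sketch of the intent-routing idea behind such conversational interfaces, the Python snippet below matches a user message to an intent by keyword overlap. The intent names, keywords, and responses are invented for illustration, and a production service would use trained natural-language-understanding models rather than keyword matching.

```python
# Hypothetical sketch of intent routing for a health-triage chatbot.
# Intent names, keywords, and responses are invented for illustration;
# a production service would use trained NLU models, not keyword overlap.

INTENTS = {
    "symptom_report": {
        "keywords": {"cough", "fever", "symptom", "sick"},
        "response": "Based on what you describe, a COVID-19 test is recommended.",
    },
    "find_test_site": {
        "keywords": {"test", "testing", "where", "facility"},
        "response": "Your nearest testing facility can be looked up by postcode.",
    },
}

def route(message: str) -> str:
    """Match a user message to the intent with the largest keyword overlap."""
    words = set(message.lower().split())
    name, intent = max(INTENTS.items(), key=lambda kv: len(words & kv[1]["keywords"]))
    if not words & intent["keywords"]:
        return "Sorry, I didn't understand that. Could you rephrase?"
    return intent["response"]

if __name__ == "__main__":
    print(route("I have a fever and a cough"))
    print(route("Where can I get a test near me?"))
```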

In addition, early in the COVID-19 crisis, New York City-based MetroPlusHealth identified approximately 85,000 at-risk individuals (e.g., those with comorbid heart or lung disease, or the immunocompromised) who would require additional support services while sheltering in place. In order to engage and address the needs of this high-risk population, MetroPlusHealth developed ML-enabled solutions including an SMS-based chatbot that guides people through self-screening and registration processes, SMS notification campaigns to provide alerts and updated pandemic information, and a community-based organization referral platform, called NowPow, to connect each individual with the right resource to ensure their specific needs were met.

By providing an easy way for patients to access the care, recommendations, and support they need, ML has given providers the ability to innovate and scale their telehealth platforms to support diverse and continuously changing community needs. Agile, scalable, and accessible telehealth continues to be important as providers look for ways to reach and engage patients in hard-to-reach or rural areas and those with mobility issues. Organizations and policymakers globally need to make telehealth and easy access to care a priority now and going forward in order to close critical gaps in care.

Beyond the unprecedented shifts in the approach to engaging, supporting and treating patients, COVID-19 has dictated clear direction for the future of patient care: precision medicine.

Guidelines for patient care planning have shifted from statistically significant outcomes gathered from a general population to outcomes based on the individual. This gives clinicians the ability to understand what type of patient is most prone to have a disease, not just what sort of disease a specific patient has. Being able to predict the probability of contracting a disease far in advance of its onset is important to determining and initiating preventative, intervening, and corrective measures that can be tailored to each individual's characteristics.
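
As a hedged illustration of this kind of individual-level risk prediction (not any specific clinical model), the sketch below fits a logistic regression on synthetic patient features and returns a probability for one hypothetical individual; the features, coefficients, and data are placeholders.

```python
# Synthetic sketch of individual-level disease-risk prediction.
# Features, coefficients, and data are placeholders, not a clinical model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.normal(55, 12, n),   # age (years)
    rng.normal(27, 4, n),    # BMI
    rng.integers(0, 2, n),   # smoker flag (0/1)
])
# Synthetic outcome whose risk rises with age, BMI, and smoking.
logit = 0.04 * (X[:, 0] - 55) + 0.08 * (X[:, 1] - 27) + 0.9 * X[:, 2] - 1.0
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression(max_iter=1000).fit(X, y)

# Estimated probability for one hypothetical individual, well before onset.
patient = np.array([[62, 31, 1]])
print(f"Predicted risk: {model.predict_proba(patient)[0, 1]:.2f}")
```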

One of the best examples of how ML is enabling precision medicine is biotech company Moderna's ability to accelerate every step of the process in developing an mRNA vaccine for COVID-19. Moderna began work on its vaccine the moment the novel coronavirus's genetic sequence was published. Within days, the company had finalized the sequence for its mRNA vaccine in partnership with the National Institutes of Health.

Moderna was able to begin manufacturing the first clinical-grade batch of the vaccine within two months of completing the sequencing, a process that historically has taken up to 10 years.

Personalized health isn't only about treating disease; it's about providing access to resources and information specific to a patient's needs. ML is playing a key role in curating content that can help to educate and support patients, caregivers and their families.

Breastcancer.org allows individuals with breast cancer to upload their pathology report to a private and secure personal account. The organization uses ML-based natural language processing to analyze and understand the report and create personalized information for the patient based on their specific pathology.
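
Breastcancer.org's actual pipeline is not described here; the sketch below only illustrates the general step of turning free-text pathology findings into structured fields that can drive personalized content, using simple pattern matching. The report text, field names, and patterns are hypothetical, and a real system would rely on trained clinical NLP models.

```python
# Illustrative-only parsing of a pathology report into structured fields.
# Report text, field names, and patterns are hypothetical; a real system
# would rely on trained clinical NLP models rather than regex rules.
import re

REPORT = """
Invasive ductal carcinoma, grade 2.
Estrogen receptor (ER): positive. Progesterone receptor (PR): negative.
HER2: negative. Tumor size: 1.8 cm.
"""

def parse_report(text: str) -> dict:
    patterns = {
        "er_status": r"\(ER\):\s*(positive|negative)",
        "pr_status": r"\(PR\):\s*(positive|negative)",
        "her2_status": r"HER2:\s*(positive|negative)",
        "tumor_size_cm": r"Tumor size:\s*([\d.]+)\s*cm",
    }
    results = {}
    for field, pattern in patterns.items():
        match = re.search(pattern, text, re.IGNORECASE)
        results[field] = match.group(1).lower() if match else None
    return results

structured = parse_report(REPORT)
print(structured)
# Structured fields like these can then select patient-facing explanations,
# e.g. an ER-positive result maps to educational content on hormone therapy.
```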

For the last decade, organizations have focused on digitizing healthcare. Today, making sense of the data being captured will provide the biggest opportunity to transform care. Successful transformation will depend on enabling data to flow where it needs to be at the right time while ensuring that all data exchange is secure.

Interoperability is by far one of the most important topics in this discussion. Today, most healthcare data is stored in disparate formats (e.g., medical histories, physician notes and medical imaging reports), which makes extracting information challenging. ML models trained to support healthcare and life sciences organizations help solve this problem by automatically normalizing, indexing, structuring and analyzing data.

ML has the potential to bring data together in a way that creates a more complete view of a patient's medical history, making it easier for providers to understand relationships in the data and compare specific data to the rest of the population. Better data management and analysis leads to better insights, which lead to smarter decisions. The net result is increased operational efficiency for improved care delivery and management, and most importantly, improved patient experiences and health outcomes.

Looking ahead, imagine a time when pernicious medical conditions like cancer and diabetes can be treated with tailored medicines and care plans enabled by AI and ML. The pandemic was a turning point for how ML can be applied to tackle some of the toughest challenges in the healthcare industry, though we've only just scratched the surface of what it can accomplish.

Taha Kass-Hout is the director of machine learning for Amazon Web Services.

The Need of A Real-World Artificial Intelligence in The Pandemic Era – BBN Times

Posted: at 12:28 am

The Covid-19 pandemic has accelerated the development of artificial intelligence across the globe.

Organizations are using artificial intelligence to increase the productivity of remote workers, enhance the virtual shopping experience, drive the digital transformation process and speed up the development of important drugs to end this ongoing pandemic.

Real artificial intelligence is creating value by making humans more efficient, not redundant.

There are several levels of knowledge, research, education, theory, practice, and technology:

Specialization: Narrow AI, specialists, scientists, the Learned Ignoramus, which divides, specializes, and thinks in special categories.

Disciplinarity: Analytical science and traditionally fragmented disciplines.

Interdisciplinarity: It integrates information, data, techniques, tools, concepts, and/or theories from within two or more disciplines.

Interdisciplinarity is about the interactions between specialised fields and cooperation among special disciplines to solve a specific problem. It concerns the transfer of methods and concepts from one discipline to another, allowing research to spill over disciplinary boundaries while still staying within the framework of disciplinary research.

Transdisciplinarity: Synthetic science and technology and society, the idea of a unified science, technology, and human society, universal knowledge, the synthesis and integration of all knowledge, the total convergence of knowledge, technology, and people: Trans-AI = Narrow AI, ML, DL + Symbolic AI + Human Intelligence.

Transdisciplinarity is radically distinct from interdisciplinarity, multidisciplinarity and mono-disciplinarity.

Transdisciplinarity analyzes, synthesizes, and harmonizes links between disciplines into a coordinated and coherent whole, a global system where all interdisciplinary boundaries dissolve.

It is about addressing the world's most pressing issues and seeing the world in a systemic, consistent, and holistic way at three levels:

(1) theoretical, (2) phenomenological, and (3) experimental (which is based on existing data in a diversity of fields, such as experimental science and technology, business, education, art, and literature).

Transdisciplinarity integrates the natural, social, and engineering sciences in a unifying context, a whole that is greater than the sum of its parts and transcends their traditional boundaries.

Transdisciplinarity connotes a research strategy that crosses many disciplinary boundaries to create a holistic approach.

Transdisciplinary research integrates information, data, concepts, theories, techniques, tools, technologies, people, organizations, policies, and environments, as all sides of real-world problems.

Transdisciplinarity takes this integration of disciplines to the highest level. It is a holistic approach, placing these interactions in an integral system. It thus builds a total network of individual disciplines, with a view to understanding the world in terms of integrity, unity, and discovery.

Monodisciplinarity: It involves a single academic discipline. It refers to a single discipline or body of specialized knowledge.

Multidisciplinarity: It draws on knowledge from different disciplines but stays within their boundaries. In multidisciplinarity, two or more disciplines work together on a common problem, but without altering their disciplinary approaches or developing a common conceptual framework.

In the context of the unprecedented worldwide pandemic-enhanced crises, transdisciplinarity appears as a sustainable way of solving complex real-world problems, pursuing a general search for a unity of knowledge, or Real-World AI.

The Trans-AI paradigm means that the classic studies of Plato, Aristotle, and Kant, Leibniz's Logic as Calculation, and Boole's Logic as Algebra, together with modern ontological, scientific, mathematical, and statistical research of reality/knowledge/intelligence/data formalization/computing/automation, are a key to [Real] AI.

For example, the conception of AI was inherently implied in Aristotle's Analytics, Prior and Posterior, Metaphysics/Ontology, and Categories.

Without the reality/category theory, as the mind theory for human minds, and prior data analytics, no deep AI/ML/DL classifiers with effective classification algorithms are possible, where classes are targets, labels, or categories. ML/DL predictive modeling is NOT just the task of approximating a mapping function (f) from input variables (X) to output variables (y). Therefore, it is widely recognized that the lack of reality with causality is the black hole of current machine learning systems.
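
A small synthetic example can make the causality point concrete: a model that merely approximates y = f(X) from observational data may learn a spurious correlate and fail once the world is intervened on. The variables below are invented purely for illustration.

```python
# Synthetic illustration: a model that only approximates y = f(X) from
# observational data can learn a spurious correlate and fail under intervention.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 5000
cause = rng.normal(size=n)
proxy = cause + 0.1 * rng.normal(size=n)      # correlated with the cause, no effect on y
y = 2.0 * cause + rng.normal(scale=0.1, size=n)

# Trained on the proxy alone, the model fits the observational data well...
model = LinearRegression().fit(proxy.reshape(-1, 1), y)
print("R^2 on observational data:", round(model.score(proxy.reshape(-1, 1), y), 3))

# ...but once an intervention breaks the proxy-cause link, prediction collapses.
proxy_intervened = rng.normal(size=n)         # proxy now set independently of the cause
y_new = 2.0 * cause + rng.normal(scale=0.1, size=n)
print("R^2 after intervention:", round(model.score(proxy_intervened.reshape(-1, 1), y_new), 3))
```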

The Trans-AI is about real-world data ontology, causality, real intelligence, science, computer models, semantics, syntax, and pragmatics, and universal knowledge/data synthesis vs. expert knowledge/data analytics, thus enabling a comprehensive machine understanding of data points, elements, sets, patterns, and relationships.

Without comprehensive causal world models integrating disciplinary, inter-, multi-, and trans-disciplinary knowledge, there is no real-world AI. A holistic research strategy integrating the world's knowledge into a meaningful whole is the systematic way of building the General Human-AI Platform as an Integrative General-Purpose Technology.

The current disciplinary approach to AI/ML/DL and robotics is, at best or worst for humanity, ending up with superhuman narrow human-mimicking AI applications, integrated into our smart networks, devices, processes, and services.

Some, who limit AI to augmenting or substituting biological intelligence with machine intelligence, believe transdisciplinarity is a way to human-level AI.

The mono-disciplinary narrow AI of machine deep learning is blooming today, bringing its stakeholders unprecedented profits. The five top-performing tech stocks in the market, namely Facebook, Amazon, Apple, Microsoft, and Alphabet's Google (FAAMG), represent the U.S.'s narrow AI technology leaders, whose products span machine learning and deep learning or data analytics cloud platforms, mobile and desktop systems, hosting services, online operations, and software products. The five FAAMG companies had a joint market capitalization of around $4.5 trillion a year ago, which now exceeds $7.6 trillion, with all five among the top 10 companies in the US. According to Gartner's modest predictions, the total narrow AI (NAI)-derived business value is forecast to reach $3.9 trillion in 2022.

The future superhuman narrow AI applications are here, within us, in our smart networks, devices, processes, and services.

Specially designed automated intelligence outperforms humans in strategic games, chess/Go playing, video gaming, self-driving mobility, stock trading, financial transactions, medical diagnosis, NLP, language translation, pattern/object/face recognition, manufacturing processes, etc.

And yet these are only narrow AI/ML/DL fragmented applications designed for narrow human-like tasks and jobs, as more efficient and effective than human labor, mental or menial.

The existential question is: When will robots/machines/computers emerge as a general-purpose Real-World AI?

But most people are still blind to the disruptive, fundamental force of AI technology and its critical impact on our future.

Our company is proud to announce that EIS Encyclopedic Intelligent Systems LTD has completed studying, modeling, and designing Real-World AI as Causal Machine Intelligence and Learning, trademarked as the Causal Artificial Superintelligence (CASI) GPT Platform, complementing human intelligence, collective and individual.

It is still only narrow, anthropomorphic and anthropocentric AI/ML/DL fragmented applications designed for narrow human-like tasks and jobs. Many scientists are trying to move the field of AI beyond data analytics, predictions, and pattern-matching towards machines that could solve real-world problems. "Some people think it might be enough to take what we have and just grow the size of the dataset, the model sizes, computer speed, to just get a bigger brain" (Yoshua Bengio, Conference on Neural Information Processing Systems, NeurIPS 2019).

Still, the existential question is open: What if robots/machines/computers were to outsmart humans in all special respects?

To address the moral and existential issues of disciplinary AI/ML/DL and robotics fragmentation, as in Europe's Responsible and Trustworthy AI, we have developed a transdisciplinary Real AI model that does not compete with, but complements, human intelligence.

Transdisciplinary AI conferences are now emerging, but are still considered an interdisciplinary collection of academic research themes:

Transdisciplinary AI 2021 (TransAI 2021) is technically sponsored by the IEEE Computer Society.

Trans-AI aims to integrate disciplinary AIs, symbolic/logical or statistical/data-driven, such as ML algorithms (DL, ANNs), which are designed to substitute biological intelligence with machine intelligence.

Trans-AI is developed as a Man-Machine Global AI (GAI) Platform to integrate Human Intelligence with Narrow AI, ML, DL, Human-level AI, or Superhuman AI, all as Neural Information Processing Systems. It relies on fundamental scientific world knowledge, cybernetics, computer science, mathematics, statistics, data science, computing ontologies, robotics, psychology, linguistics, semantics, and philosophy.

The Trans-AI model is mapped as an interdependent, mutually reinforcing, transdisciplinary quadrivium of the world's knowledge, depicted by the global knowledge graph (see the extended version).

The Trans-AI is a systematic, holistic, and analytical means of obtaining knowledge about the world.

The Trans-AI is technologically designed as a Causal Machine Intelligence and Learning Platform, to serve as Artificial Intelligence for Everybody and Everything (AI4EE).

The Trans-AI technology could become the most disruptive general-purpose technology of the 21st century, given an effective ecosystem of innovative business, government, policymakers, NGOs, international organizations, civil society, academia, media, and the arts.

The Trans-AI, as a Human-AI Global Platform, is designed to extract knowledge from massive digital data to create breakthroughs in all parts of human life, from government to industry to education to healthcare to global security.

It aims to process structured and unstructured digital data within unifying world-intelligence-data models and causal algorithms, shifting from supervised to self-supervised real learning. Making breakthroughs in these areas will be a matter of life or death for the future of humanity.

Why Trans-AI could be the disruptive discovery, innovation, and unifying general-purpose technology, and the best smart investment

The Trans-AI could be the most disruptive research and breakthrough discovery, innovation, and technology, meeting the AI founding fathers' dream to make machines use language, form abstractions and concepts; Google's mission to organize the world's information and make it universally accessible and useful; and the best human ambitions for a unified knowledge of the world.

Among other disruptive changes, the Trans-AI enriches, updates, and scales up the disciplinary AIs, as proposed by the EC's High-Level Expert Group on Artificial Intelligence:

Artificial intelligence (AI) refers to systems that display intelligent behaviour by analysing their environment and taking actions with some degree of autonomy to achieve specific goals. AI-based systems can be purely software-based, acting in the virtual world (e.g. voice assistants, image analysis software, search engines, speech and face recognition systems) or AI can be embedded in hardware devices (e.g. advanced robots, autonomous cars, drones or Internet of Things applications).

Humanity's greatest concern must be the current accelerated growth of Big Tech's narrow and weak AI of machine learning, ANNs, and deep learning, as a Non-Real AI vs. Real-World AI. It is fast emerging as narrow-minded automated superintelligences outperforming humans in narrow cognitive tasks, implemented as LAWS or military AI, ML/DL drones, killer robots, humanoid robots, self-driving transportation, smart manufacturing machines, RPAs, cyborgs, trading algorithms, smart government decision-makers, recommendation engines, medical AI systems, etc.

The whole idea of Anthropomorphic and Anthropocentric AI (AAAI), whether narrow or general, aimed at simulating human intelligence, cognitive skills, capacities, capabilities, and functions, as well as intelligent behavior and actions, in computing machines raises a number of undecidable social, moral, ethical, and legal dilemmas.

The narrow and weak deep-learning AI programs classify tremendous amounts of data without any understanding of the world or the meaning of their inputs and outputs (e.g., a recommendation to treat, a risk score, or behaviour changes).

These consequences could be much worse than human cloning, which is prohibited in most countries, and massive technological unemployment without any compensation effects is just the beginning of the end.

This is what good minds have forewarned humanity about regarding the possibilities and perils of AAAI, the mimicking of human learning and reasoning by machines and humanoid robots:

"The development of full artificial intelligence could spell the end of the human race... It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded." (Stephen Hawking, to the BBC)

"I visualise a time when we will be to robots what dogs are to humans, and I'm rooting for the machines." (Claude Shannon)

"I'm increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don't do something very foolish. I mean with artificial intelligence we're summoning the demon." (Elon Musk, at MIT's AeroAstro Centennial Symposium)

All that we need is a radically new kind of AI: Real and True MI, Real-World AI, the Trans-AI, which is to simulate and understand, or compute, reality, causality, and mentality in digital reality machines.

This is becoming clear even to profit-seeking industrialists such as E. Musk, who understands that without Real-World AI no really intelligent machine is possible: "Self-driving requires solving a major part of real-world AI, so it's an insanely hard problem, but Tesla is getting it done. AI Day will be great. Nothing has more degrees of freedom than reality."

The rise of real artificial intelligence will create and destroy jobs, improve healthcare, disrupt smart cities, and minimize the impact of the next pandemic. Despite concerns about the dark side of artificial intelligence, we are still far away from super artificial intelligence.

Why ethics is essential in the creation of artificial intelligence – IT Brief Australia

Posted: at 12:28 am

Article by ManageEngine director of research Ramprakash Ramamoorthy.

Artificial intelligence (AI) has long been a feature of modern technology and is becoming increasingly common in workplace technologies. According to ManageEngine's recent 2021 Digital Readiness Survey, more than 86% of organisations in Australia and New Zealand reported increasing their use of AI even as recently as two years ago.

But despite an increased uptake across organisations in the A/NZ region, only 25% said their confidence in the technology had significantly increased.

One possible reason for the lack of overall confidence in AI is the potential for unethical biases to work their way into developing AI technologies. While it may be true that nobody sets out to build an unethical AI model, it may only take a few cases for disproportionate or accidental weighting to be applied to certain data types over others, creating unintentional biases.

Demographic data, names, years of experience, known anomalies, and other types of personally identifiable information are the types of data that can skew AI and lead to biased decisions. In essence, if AI is not properly designed to work with data, or the data provided is not clean, this can lead to the AI model generating predictions that could raise ethical concerns.

The rising use of AI across industries subsequently increases the need for AI models that aren't subject to unintentional biases, even if this occurs as a by-product of how the models are developed.

Fortunately, there are several ways developers can ensure their AI models are designed as fairly as possible to reduce the potential for unintentional biases. Two of the most effective steps developers can take are:

Adopting a fairness-first mindset

Embedding fairness into every stage of AI development is a crucial step to take when developing ethical AI models. However, fairness principles are not always uniformly applied and can differ depending on the intended use for AI models, creating a challenge for developers.

All AI models should have the same fairness principles at their core. Educating data scientists on the need to build AI models with a fairness-first mindset will lead to significant changes in how the models are designed.

Remaining involved

While one of the benefits of AI is its ability to reduce the pressure on human workers to spend time and energy on smaller, repetitive tasks, and many models are designed to make their own predictions, humans need to remain involved with AI at least in some capacity.

This needs to be factored in throughout the development phase of an AI model and its application within the workplace. In many cases, this may involve the use of shadow AI, where both humans and AI models work on the same task before comparing the results to identify the effectiveness of the AI model.
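
As a minimal sketch of this shadow-mode comparison (the records, decision labels, and rollout threshold below are hypothetical), the snippet logs human and model decisions on the same cases, measures agreement, and flags disagreements for review.

```python
# Hypothetical "shadow AI" log: the model runs alongside human decisions on the
# same cases; agreement is measured before the model is trusted on its own.
# Records, decision labels, and the rollout threshold are all assumptions.

shadow_log = [
    # (case_id, human_decision, model_decision)
    ("t-001", "approve", "approve"),
    ("t-002", "escalate", "approve"),
    ("t-003", "approve", "approve"),
    ("t-004", "reject", "reject"),
    ("t-005", "escalate", "escalate"),
]

agreement = sum(human == model for _, human, model in shadow_log) / len(shadow_log)
disagreements = [(cid, h, m) for cid, h, m in shadow_log if h != m]

print(f"Human-model agreement: {agreement:.0%}")
print("Cases for human review:", disagreements)

# Assumed rollout rule: the model acts autonomously only once agreement stays
# above a threshold, and disagreements keep being routed back to humans.
ready_for_autonomy = agreement >= 0.95
print("Ready for autonomy:", ready_for_autonomy)
```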

Alternatively, developers may choose to keep human workers within the operating model of the AI technology, particularly in cases where an AI model doesn't have enough experience, letting them guide the AI.

The use of AI will likely only continue to increase as organisations across A/NZ, and the world, continue to digitally transform. As such, it's becoming increasingly clear that AI developments will need to become even more reliable than they currently are to reduce the potential for unintentional biases and increase user confidence in the technology.

Emory students advance artificial intelligence with a bot that aims to serve humanity – SaportaReport

Posted: August 28, 2021 at 12:12 pm

A team of six Emory computer science students is helping to usher in a new era in artificial intelligence. They've developed a chatbot capable of making logical inferences that aims to hold deeper, more nuanced conversations with humans than have previously been possible. They've christened their chatbot Emora, because it sounds like a feminine version of Emory and is similar to a Hebrew word for an eloquent sage.

The team is now refining their new approach to conversational AI: a logic-based framework for dialogue management that can be scaled to conduct real-life conversations. Their longer-term goal is to use Emora to assist first-year college students, helping them to navigate a new way of life, deal with day-to-day issues and guide them to proper human contacts and other resources when needed.

Eventually, they hope to further refine their chatbot, developed during the era of COVID-19 with the philosophy "Emora cares for you," to assist people dealing with social isolation and other issues, including anxiety and depression.

The Emory team is headed by graduate students Sarah Finch and James Finch, along with faculty advisor Jinho Choi, associate professor in the Department of Computer Sciences. The team also includes graduate student Han He and undergraduates Sophy Huang, Daniil Huryn and Mack Hutsell. All the students are members of Choi's Natural Language Processing Research Laboratory.

We're taking advantage of established technology while introducing a new approach in how we combine and execute dialogue management so a computer can make logical inferences while conversing with a human, Sarah Finch says.

We believe that Emora represents a groundbreaking moment for conversational artificial intelligence, Choi adds. The experience that users have with our chatbot will be largely different than chatbots based on traditional, state-machine approaches to AI.

Last year, Choi and Sarah and James Finch headed a team of 14 Emory students that took first place in Amazon's Alexa Prize Socialbot Grand Challenge, winning $500,000 for their Emora chatbot. The annual Alexa Prize challenges university students to make breakthroughs in the design of chatbots, also known as socialbots: software apps that simplify interactions between humans and computers by allowing them to talk with one another.

This year, they developed a completely new version of Emora with the new team of six students.

They made the bold decision to start from scratch, instead of building on the state-machine platform they developed in 2020 for Emora. We realized there was an upper limit to how far we could push the quality of the system we developed last year, Sarah Finch says. We wanted to do something much more advanced, with the potential to transform the field of artificial intelligence.

They based the current Emora on three types of frameworks: core natural language processing technology, computational symbolic structures, and probabilistic reasoning for dialogue management.

They worked around the clock, making it into the Alexa Prize finals in June. They did not complete most of the new system, however, until just a few days before they had to submit Emora to the judges for the final round of the competition.

That gave the team no time to make finishing touches to the new system, work out the bugs, and flesh out the range of topics that it could deeply engage in with a human. While they did not win this year's Alexa Prize, the strategy led them to develop a system that holds more potential to open new doors of possibilities for AI.

In the run-up to the finals, users of Amazon's virtual assistant, known as Alexa, volunteered to test out the competing chatbots, which were not identified by their names or universities. A chatbot's success was gauged by user ratings.

The competition is extremely valuable because it gave us access to a high volume of people talking to our bot from all over the world, James Finch says. When we wanted to try something new, we didn't have to wait long to see whether it worked. We immediately got this deluge of feedback so that we could make any needed adjustments. One of the biggest things we learned is that what people really want to talk about is their personal experiences.

Sarah and James Finch, who married in 2019, are the ultimate computer power couple. They met at age 13 in a math class in their hometown of Grand Blanc, Michigan. They were dating by high school, bonding over a shared love of computer programming. As undergraduates at Michigan State University, they worked together on a joint passion for programming computers to speak more naturally with humans.

If we can create more flexible and robust dialogue capability in machines, Sarah Finch explains, a more natural, conversational interface could replace pointing, clicking and hours of learning a new software interface. Everyone would be on a more equal footing because using technology would become easier.

She hopes to pursue a career in enhancing computer dialogue capabilities with private industry after receiving her PhD.

James Finch is most passionate about the intellectual aspects of solving problems and is leaning towards a career in academia after receiving his PhD.

The Alexa Prize deadlines required the couple to work many 60-hour-plus weeks on developing Emora's framework, but they didn't consider it a grind. I've enjoyed every day, James Finch says. Doing this kind of dialogue research is our dream and we're living it. We are making something new that will hopefully be useful to the world.

They chose to come to Emory for graduate school because of Choi, an expert in natural language processing, and Eugene Agichtein, professor in the Department of Computer Science and an expert in information retrieval.

Emora was designed not just to answer questions, but as a social companion.

A caring chatbot was an essential requirement for Choi. At the end of every team meeting, he asks one member to say something about how the others have inspired them. When someone sees a bright side in us, and shares it with others, everyone sees that side and that makes it even brighter, he says.

Choi's enthusiasm is also infectious.

Growing up in Seoul, South Korea, he knew by the age of six that he wanted to design robots. I remember telling my mom that I wanted to make a robot that would do homework for me so I could play outside all day, he recalls. It has been my dream ever since. I later realized that it was not the physical robot, but the intelligence behind the robot that really attracted me.

The original Emora was built on a behavioral mathematical model similar to a flowchart and equipped with several natural language processing models. Depending on what people said to the chatbot, the machine made a choice about what path of a conversation to go down. While the system was good at chit chat, the longer a conversation went on, the more chances that the system would miss a social-linguistic nuance and the conversation would go off the rails, diverting from the logical thread.

This year, the Emory team designed Emora so that she could go beyond a script and make logical inferences. Rather than a flowchart, the new system breaks a conversation down into concepts and represents them using a symbolic graph. A logical inference engine allows Emora to connect the graph of an ongoing conversation into other symbolic graphs that represent a bank of knowledge and common sense. The longer the conversations continue, the more its ability to make logical inferences grows.
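
The following is not Emora's actual implementation, only a toy sketch of the idea described above: the conversation is held as (subject, relation, object) triples, and a simple rule joins them with background-knowledge triples to derive new user-specific facts the bot could raise.

```python
# Toy sketch (not Emora's actual code): conversation and background knowledge
# are both sets of (subject, relation, object) triples, and a simple join rule
# derives new user-specific facts the chatbot could bring up.

conversation_graph = {
    ("user", "has_pet", "dog"),
    ("user", "lives_in", "apartment"),
}

knowledge_graph = {
    ("dog", "is_a", "pet"),
    ("dog", "needs", "daily walks"),
    ("pet", "provides", "companionship"),
}

def infer(conversation, knowledge):
    """Chain conversation triples into knowledge triples to derive new facts."""
    derived = set()
    for subj, rel, obj in conversation:
        for subj2, rel2, obj2 in knowledge:
            if subj2 == obj:   # e.g. user -has_pet-> dog, dog -needs-> daily walks
                derived.add((subj, f"{rel}_{rel2}", obj2))
    return derived

for fact in sorted(infer(conversation_graph, knowledge_graph)):
    print(fact)
# ('user', 'has_pet_needs', 'daily walks') could ground a follow-up such as
# "Do you get to take your dog on daily walks near your apartment?"
```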

Sarah and James Finch worked on the engineering of the new Emora system, as well as designing logic structures and implementing related algorithms. Undergraduates Sophy Huang, Daniil Huryn and Mack Hutsell focused on developing dialogue content and conversational scripts for integrating within the chatbot. Graduate student Han He focused on structure parsing, including recent advances in the technology.

A computer cannot deal with ambiguity, it can only deal with structure, Han He explains. Our parser turns the grammar of a sentence into a graph, a structure like a tree, that describes what a chatbot user is saying to the computer.
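
The team's parser itself is not shown in the article; as a stand-in, the sketch below uses the off-the-shelf spaCy library (an assumption, not the team's toolchain) to show what turning a sentence's grammar into a tree-like graph of labeled dependencies looks like in practice.

```python
# Stand-in example using the off-the-shelf spaCy library (an assumption, not
# the team's own parser) to turn a sentence's grammar into a tree-like graph:
# each word is linked to its grammatical head by a labeled dependency edge.
import spacy

nlp = spacy.load("en_core_web_sm")  # requires: python -m spacy download en_core_web_sm
doc = nlp("My favorite food is pizza with extra cheese.")

for token in doc:
    print(f"{token.text:<10} --{token.dep_:<8}--> {token.head.text}")
```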

He is passionate about language. Growing up in a small city in central China, he studied Japanese with the goal of becoming a linguist. His family was low income so he taught himself computer programming and picked up odd programmer jobs to help support himself. In college, he found a new passion in the field of natural language processing, or using computers to process human language.

His linguistic background enhances his technological expertise. When you learn a foreign language, you get new insights into the role of grammar and word order, He says. And those insights can help you to develop better algorithms and programs to teach computers how to understand language. Unfortunately, many people working in natural language processing focus primarily on mathematics without realizing the importance of grammar.

After getting his master's at the University of Houston, He chose to come to Emory for a PhD to work with Choi, who also emphasizes linguistics in his approach to natural language processing. He hopes to make a career in using artificial intelligence as an educational tool that can help give low-income children an equal opportunity to learn.

A love of language also brought senior Mack Hutsell into the fold. A native of Houston, he came to Emory's Oxford College to study English literature. His second love is computer programming and coding. When Hutsell discovered the digital humanities, using computational methods to study literary texts, he decided on a double major in English and computer science.

I enjoy thinking about language, especially language in the context of computers, he says.

Choi's Natural Language Processing Lab and the Emora project were a natural fit for him.

Like the other undergraduates on the team, Hutsell did miscellaneous tasks for the project while also creating content that could be injected into Emora's real-world knowledge graph. On the topic of movies, for instance, he started with an IMDB dataset. The team had to combine concepts from possible conversations about the movie data in ways that would fit into the knowledge graph template and generate unique responses from the chatbot. Thinking about how to turn metadata and numbers into something that sounds human is a lot of fun, Hutsell says.
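
As a toy illustration of that content-creation step (the record fields and templates are hypothetical, merely in the spirit of an IMDB-style entry), the sketch below maps movie metadata to knowledge-graph triples and a natural-sounding response.

```python
# Toy sketch of the content-creation step: an IMDB-style record (fields are
# hypothetical) is mapped to knowledge-graph triples plus a templated,
# natural-sounding utterance the chatbot could use.

movie = {
    "title": "Inception",
    "year": 2010,
    "genre": "science fiction",
    "director": "Christopher Nolan",
}

def to_triples(record: dict) -> list:
    """Turn metadata fields into (subject, relation, object) facts."""
    return [
        (record["title"], "released_in", record["year"]),
        (record["title"], "has_genre", record["genre"]),
        (record["title"], "directed_by", record["director"]),
    ]

def to_utterance(record: dict) -> str:
    """Render the same facts as something that sounds human."""
    return (f"Oh, I love {record['genre']} movies! {record['title']} came out in "
            f"{record['year']} and was directed by {record['director']}. "
            f"What did you think of it?")

print(to_triples(movie))
print(to_utterance(movie))
```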

Language was also a key draw for senior Danii Huryn. He was born in Belarus, moved to California with his family when he was four, and then returned to Belarus when he was 10, staying until he completed high school. He speaks English, Belarusian and Russian fluently and is studying German.

In Belarus, I helped translate at my church, he says. That got me thinking about how different languages work differently and that some are better at saying different things.

Huryn excelled in computer programming and astronomy in his studies in Belarus. His interests also include reading science fiction and playing video games. He began his Emory career on the Oxford campus, and eventually decided to major in computer science and minor in physics.

For the Emora project, he developed conversations about technology, including an AI component, and another on how people were adapting to life during the pandemic.

The experience was great, Huryn says. I helped develop features for the bot while I was taking a course in natural language processing. I could see how some of the things I was learning about were coming together into one package to actually work.

Team member Sophy Huang, also a senior, grew up in Shanghai and came to Emory planning to go down a pre-med track. She soon realized, however, that she did not have a strong enough interest in biology and decided on a dual major of applied mathematics and statistics and psychology. Working on the Emora project also taps into her passions for computer programming and developing applications that help people.

Psychology plays a big role in natural language processing, Huang says. It's really about investigating how people think, talk and interact and how those processes can be integrated into a computer.

Food was one of the topics Huang developed for Emora to discuss. The strategy was first to connect with users by showing understanding, she says.

For instance, if someone says pizza is their favorite food, Emora would acknowledge their interest and ask what it is about pizza that they like so much.

By continuously acknowledging and connecting with the user, asking for their opinions and perspectives and sharing her own, Emora shows that she understands and cares, Huang explains. That encourages them to become more engaged and involved in the conversation.

The Emora team members are still at work putting the finishing touches on their chatbot.

We created most of the system that has the capability to do logical thinking, essentially the brain for Emora, Choi says. The brain just doesn't know that much about the world right now and needs more information to make deeper inferences. You can think of it like a toddler. Now we're going to focus on teaching the brain so it will be on the level of an adult.

The team is confident that their system works and that they can complete full development and integration to launch beta testing sometime next spring.

Choi is most excited about the potential to use Emora to support first-year college students, answering questions about their day-to-day needs and directing them to the proper human staff or professor as appropriate. For larger issues, such as common conflicts that arise in group projects, Emora could also serve as a starting point by sharing how other students have overcome similar issues.

Choi also has a longer-term vision that the technology underlying Emora may one day be capable of assisting people dealing with loneliness, anxiety or depression. I don't believe that socialbots can ever replace humans as social companions, he says. But I do think there is potential for a socialbot to sympathize with someone who is feeling down, and to encourage them to get help from other people, so that they can get back to the cheerful life that they deserve.

Frontier Development Lab Transforms Space and Earth Science for NASA with Google Cloud Artificial Intelligence and Machine Learning Technology – SETI…

Posted: at 12:12 pm

August 26, 2021, Mountain View, Calif.: Frontier Development Lab (FDL), in partnership with the SETI Institute, NASA and private sector partners including Google Cloud, is transforming space and Earth science through the application of industry-leading artificial intelligence (AI) and machine learning (ML) tools.

FDL tackles knowledge gaps in space science by pairing ML experts with researchers in physics, astronomy, astrobiology, planetary science, space medicine and Earth science. These researchers have utilized Google Cloud compute resources and expertise since 2018, specifically AI/ML technology, to address research challenges in areas like astronaut health, lunar exploration, exoplanets, heliophysics, climate change and disaster response.

With access to compute resources provided by Google Cloud, FDL has been able to increase the typical ML pipeline by more than 700 times in the last five years, facilitating new discoveries and improved understanding of our planet, solar system and the universe. Throughout this period, Google Cloud's Office of the CTO (OCTO) has provided ongoing strategic guidance to FDL researchers on how to optimize AI/ML and how to use compute resources most efficiently.

With Google Cloud's investment, recent FDL achievements include:

"Unfettered on-demand access to massive super-compute resources has transformed the FDL program, enabling researchers to address highly complex challenges across a wide range of science domains, advancing new knowledge, new discoveries and improved understandings in previously unimaginable timeframes, said Bill Diamond, president and CEO, SETI Institute.This program, and the extraordinary results it achieves, would not be possible without the resources generously provided by Google Cloud.

"When I first met Bill Diamond and James Parr in 2017, they asked me a simple question: What could happen if we marry the best of Silicon Valley and the minds of NASA?" said Scott Penberthy, director of Applied AI at Google Cloud. "That was an irresistible challenge. We at Google Cloud simply shared some of our AI tricks and tools, one engineer to another, and they ran with it. I'm delighted to see what we've been able to accomplish together, and I am inspired by what we can achieve in the future. The possibilities are endless."

FDL leverages AI technologies to push the frontiers of science research and develop new tools to help solve some of humanity's biggest challenges. FDL teams are comprised of doctoral and post-doctoral researchers who use AI / ML to tackle ground-breaking challenges. Cloud-based super-computer resources mean that FDL teams achieve results in eight-week research sprints that would not be possible in even year-long programs with conventional compute capabilities.

"High-performance computing is normally constrained due to the large amount of time, limited availability and cost of running AI experiments," said James Parr, director of FDL. "You're always in a queue. Having a common platform to integrate unstructured data and train neural networks in the cloud allows our FDL researchers from different backgrounds to work together on hugely complex problems with enormous data requirements, no matter where they are located."

Better integrating science and ML is the founding rationale and future north star of FDL's partnership with Google Cloud. ML is particularly powerful for space science when paired with a physical understanding of a problem space. The gap between what we know so far and what we collect as data is an exciting frontier for discovery and something AI/ML and cloud technology is poised to transform.

You can learn more about FDL's 2021 program here.

In addition to Google Cloud, FDL is supported by partners including Lockheed Martin, Intel, Luxembourg Space Agency, MIT Portugal, Lawrence Berkeley National Lab, USGS, Microsoft, NVIDIA, Mayo Clinic, Planet and IBM.

About the SETI Institute

Founded in 1984, the SETI Institute is a non-profit, multidisciplinary research and education organization whose mission is to lead humanity's quest to understand the origins and prevalence of life and intelligence in the universe and share that knowledge with the world. Our research encompasses the physical and biological sciences and leverages expertise in data analytics, machine learning and advanced signal detection technologies. The SETI Institute is a distinguished research partner for industry, academia and government agencies, including NASA and NSF.

Contact Information: Rebecca McDonald, Director of Communications, SETI Institute, rmcdonald@SETI.org

Embedding Gender in International Humanitarian Law: Is Artificial Intelligence Up to the Task? – Just Security

Posted: at 12:12 pm

During armed conflict, unequal power relations and structural disadvantages derived from gender dynamics are exacerbated. There has been increased recognition of these dynamics during the last several decades, particularly in the context of sexual and gender-based violence in conflict, as exemplified in United Nations Security Council Resolution 1325 on Women, Peace, and Security. Though initiatives like this resolution are a positive advancement towards the recognition of discrimination against women and the structural disadvantages that they suffer from during armed conflict, other aspects of armed conflict, including, notably, the use of artificial intelligence (AI) for targeting purposes, have remained resistant to insights related to gender. This is particularly problematic in the operational aspect of international humanitarian law (IHL), which contains rules on targeting in armed conflict.

The Gender Dimensions of Distinction and Proportionality

Some gendered dimensions of the application of IHL have long been recognized, especially in the context of rape and other categories of sexual violence against women occurring during armed conflict. Therefore, a great deal of attention has been paid in relation to ensuring accountability for crimes of sexual violence during times of armed conflict, while other aspects of conflict, such as the operational aspect of IHL, have remained overlooked.

In applying the principle of distinction, which requires distinguishing civilians from combatants (only the latter of which may be the target of a lawful attack), gendered assumptions of who is a threat have often played an important role. In modern warfare, often characterized by asymmetry and urban conflict and where combatants can blend in with the civilian population, some militaries and armed groups have struggled to reliably distinguish civilians. Due to gendered stereotypes of expected behavior of women and men, gender has operated as a de facto qualified identity that supplements the category of civilian. In practice this can mean that, for women to be targeted, IHL requirements are rigorously applied. Yet, in the case of young civilian males, the bar seems to be lower gender considerations, coupled with other factors such as geographical location, expose them to a greater risk of being targeted.

An illustrative example of this application of the principle of distinction is in so-called signature strikes, a subset of drone strikes adopted by the United States outside what it considers to be areas of active hostilities. Signature strikes target persons who are not on traditional battlefields without individually identifying them, but rather based only on patterns of life. According to reports on these strikes, it is sufficient that the persons targeted fit into the category military-aged males, who live in regions where terrorists operate, and whose behavior is assessed to be similar enough to those of terrorists to mark them for death. However, as the organization Article 36 notes, due to the lack of transparency around the use of armed drones in signature strikes, it is difficult to determine in more detail what standards are used by the U.S. government to classify certain individuals as legal targets. According to a New York Times report from May 2012, in counting casualties from armed drone strikes, the U.S. government reportedly recorded all military-age males in a strike zone as combatants [] unless there is explicit intelligence posthumously proving them innocent.

However, once a target is assessed as a valid military objective, the impact of gender is reversed in conducting a proportionality assessment. The principle of proportionality requires ensuring the anticipated harm to civilians and civilian objects is not excessive compared to the anticipated military advantage of an attack. But in assessing the anticipated advantage and anticipated civilian harms, the calculated military advantage can include the expected reduction of the commanders own combatant casualties as an advantage in other words, the actual loss of civilian lives can be offset by the avoidance of prospective military casualties. This creates the de facto result that the lives of combatants, the vast majority of whom are men, are weighed as more important than those of civilians who in a battlefield context, are often disproportionately women. Taking these applications of IHL into account, we can conclude that a gendered dimension is present in the operational aspect of this branch of law.

AI Application of IHL Principles

New technologies, particularly AI, have been increasingly deployed to assist commanders in their targeting decisions. Specifically, machine-learning algorithms are being used to process massive amounts of data to identify rules or patterns, drawing conclusions about individual pieces of information based on these patterns. In warfare, AI already supports targeting decisions in various forms. For instance, AI algorithms can estimate collateral damage, thereby helping commanders undertake the proportionality analysis. Likewise, some drones have been outfitted with AI to conduct image-recognition and are currently being trained to scan urban environments to find hidden attackers in other words, to distinguish between civilians and combatants as required by the principle of distinction.

Indeed, in modern warfare, the use of AI is expanding. For example, in March 2021 the National Security Commission on AI, a U.S. congressionally-mandated commission, released a report highlighting how, in the future, AI-enabled technologies are going to permeate every facet of warfighting. It also urged the Department of Defense to integrate AI into critical functions and existing systems in order to become an AI-ready force by 2025. As Neil Davison and Jonathan Horowitz note, as the use of AI grows, it is crucial to ensure that its development and deployment (especially when coupled with the use of autonomous weapons) complies with civilian protection.

Yet even if IHL principles can be translated faithfully into the programming of AI-assisted military technologies (a big and doubtful if), such translation will reproduce or even magnify the disparate, gendered impacts of IHL application identified previously. As the case of drones used to undertake signature strikes demonstrates, the integration of new technologies in warfare risks importing, and in the case of AI tech, potentially magnifying and cementing, the gendered injustices already embodied in the application of existing law.

Gendering Artificial Intelligence-Assisted Warfare

There are several reasons that AI may end up reifying and magnifying gender inequities. First, the algorithms are only as good as their inputs and those underlying data are problematic. To properly work, AI needs massive amounts of data. However, neither the collection nor selection of these data are neutral. In less deadly application domains, such as in mortgage loan decisions or predictive policing, there have been demonstrated instances of gender (and other) biases of both the programmers and the individuals tasked with classifying data samples, or even the data sets themselves (which often contain more data on white, male subjects).

Perhaps even more difficult to identify and correct than individuals biases are instances of machine learning that replicate and reinforce historical patterns of injustice merely because those patterns appear, to the AI, to provide useful information rather than undesirable noise. As Noel Sharkey notes, the societal push towards greater fairness and justice is being held back by historical values about poverty, gender and ethnicity that are ossified in big data. There is no reason to believe that bias in targeting data would be any different or any easier to find.

This means that historical human biases can and do lead to incomplete or unrepresentative training data. For example, a predictive algorithm used to apply the principle of distinction on the basis of target profiles, together with other intelligence, surveillance, and reconnaissance tools, will be gender biased if the data inserted equate military-aged men with combatants and disregard other factors. As the practice of signature drone strikes has demonstrated, automatically classifying men as combatants and women as vulnerable has led to mistakes in targeting. As the use of machine learning in targeting expands, these biases will be amplified if not corrected for with each strike providing increasingly biased data.

To mitigate this result, it is critical to ensure that the data collected are diverse, accurate, and disaggregated, and that algorithm designers reflect on how the principles of distinction and proportionality can be applied in gender-biased ways. High-quality data collection means, among other things, ensuring that the data are disaggregated by gender; otherwise, it will be impossible to learn what biases are operating behind the assumptions used, what works to counter those biases, and what does not.
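
As a minimal sketch of what gender-disaggregated evaluation can look like in practice (the columns, labels, and values below are synthetic placeholders, not targeting data), the snippet reports false-positive and false-negative rates separately per group so that disparities are visible rather than hidden in an aggregate score.

```python
# Synthetic sketch of gender-disaggregated model auditing: error rates are
# reported per group so disparities are visible, not hidden in one aggregate
# score. Columns, labels, and values are placeholders.
import pandas as pd

df = pd.DataFrame({
    "gender":     ["male", "male", "female", "female", "male", "female"],
    "true_label": [0, 1, 0, 0, 0, 1],   # ground-truth class
    "predicted":  [1, 1, 0, 0, 1, 0],   # model output
})

rows = []
for gender, group in df.groupby("gender"):
    negatives = (group.true_label == 0).sum()
    positives = (group.true_label == 1).sum()
    rows.append({
        "gender": gender,
        "n": len(group),
        "false_positive_rate":
            ((group.predicted == 1) & (group.true_label == 0)).sum() / max(negatives, 1),
        "false_negative_rate":
            ((group.predicted == 0) & (group.true_label == 1)).sum() / max(positives, 1),
    })

audit = pd.DataFrame(rows).set_index("gender")
print(audit)
# Large gaps between groups signal that the data or labels encode the kind of
# bias discussed above and need correction before any further use.
```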

Ensuring high quality data also requires collecting more and different types of data, including data on women. In addition, because AI tools reflect the biases of those who build them, ensuring that female employees hold technical roles and that male employees are fully trained to understand gender and other biases is also crucial to mitigate data biases. Incorporating gender advisors would also be a positive step to ensure that the design of the algorithm, and the interpretation of what the algorithm recommends or suggests, considers gender biases and dynamics.

However, issues of data quality are subsidiary to larger questions about the possibility of translating IHL into code and, even if this translation is possible, the further difficulty of incorporating gender considerations into that code. Encoding gender considerations into AI is challenging, to say the least, because gender is both a societal and an individual construction. Likewise, the process of developing AI is not neutral: it has politics and ethics embedded in it, as documented incidents of AI encoding bias demonstrate. Finally, the very rules and principles of modern IHL were drafted when structural discrimination against women was not acknowledged, or was viewed as natural or even beneficial. As a result, when considering how to translate IHL into code, it is essential to incorporate critical gender perspectives into the interpretation of the norms and laws related to armed conflict.

Gendering IHL: An Early Attempt and Work to be Done

An example of the kind of critical engagement with IHL that will be required is provided by the updated International Committee of the Red Cross (ICRC) Commentary on the Third Geneva Convention. Through the incorporation of particular considerations of gender-specific risks and needs (para. 1747), the updated commentary has reconsidered outdated baseline gender assumptions, such as the idea that women have non-combatant status by default, or that women must receive special consideration because they have less resilience, agency or capacity (para. 1682). This shift has demonstrated that it is not only desirable, but also possible to include a gender perspective in the interpretation of the rules of warfare. This shift also underscores the urgent need to revisit IHL targeting principles of distinction and proportionality to assess how their application impacts genders differently, so that any algorithms developed to execute IHL principles incorporate these insights from the start.

As a first cut at this reexamination, it is essential to reassert that principles of non-discrimination also apply to IHL and must be incorporated into any algorithmic version of these rules. In particular, the principle of distinction allows commanders to lawfully target only those identified as combatants or those who directly participate in hostilities. Article 50 of Additional Protocol I to the Geneva Conventions defines civilians negatively, as those who do not belong to the category of combatants, and IHL makes no reference to gender as a signifier of identity for the purpose of assessing whether a given individual is a combatant. In this regard, being a military-aged male cannot be a shortcut to the identification of combatants; men make up the category of civilians as well. As Maya Brehm notes, there is scope for categorical targeting within a conduct-of-hostilities framework, but the principle of non-discrimination continues to apply in armed conflict: adverse distinction based on race, sex, religion, national origin or similar criteria is prohibited.

Likewise, in any attempt to translate the principle of proportionality into code, there must be recognition of, and correction for, the gendered impacts of current proportionality calculations. For example, across Syria between 2011 and 2016, 75 percent of the civilian women killed in conflict-related violence were killed by shelling or aerial bombardment. In contrast, 49 percent of civilian men killed in war-related violence were killed by shelling or aerial bombardment; men were killed more often by shooting. This suggests that particular tactics and weapons have disparate impacts on civilian populations that break down along gendered lines. The study's authors note that the evolving tactics used by Syrian, opposition, and international forces in the conflict contributed to a decrease in the proportion of casualties who were combatants, as the use of shelling and bombardment, two weapons shown to have high rates of civilian casualties (especially among women and children), increased over time. The study's authors also note, however, that changing patterns of civilian and combatant behavior may partially explain the increasing rates of women compared to men among civilian casualties: "A possible contributor to increasing proportions of women and children among civilian deaths could be that numbers of civilian men in the population decreased over time as some took up arms to become combatants."

As currently understood, IHL does not require an analysis of the gendered impacts of, for example, the choice of aerial bombardment versus shooting. Yet this research suggests that selecting aerial bombardment as a tactic will result in more civilian women than men being killed (nearly 37 percent of women killed in the conflict versus 23 percent of men). Selecting shooting as a tactic produces opposite results, with 23 percent of civilian men killed by shooting compared to 13 percent of women. There is no right proportion of civilian men and women killed by a given tactic, but these disparities have profound, real-world consequences for civilian populations during and after conflict that are simply not considered under current rules of proportionality and distinction.
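A quick arithmetic pass over the percentages cited above makes the disparity explicit. The figures are taken directly from the study as quoted; only the ratios are computed here, and labelling the second pair of figures "aerial bombardment" alone reflects the wording above rather than the underlying dataset.

```python
# Share of civilian deaths attributable to each tactic, by gender, as quoted above.
share_of_civilian_deaths = {
    "shelling_or_aerial_bombardment": {"women": 0.75, "men": 0.49},
    "aerial_bombardment": {"women": 0.37, "men": 0.23},
    "shooting": {"women": 0.13, "men": 0.23},
}

for tactic, share in share_of_civilian_deaths.items():
    ratio = share["women"] / share["men"]
    print(f"{tactic}: women/men ratio = {ratio:.2f}")
# shelling_or_aerial_bombardment: women/men ratio = 1.53
# aerial_bombardment: women/men ratio = 1.61
# shooting: women/men ratio = 0.57
```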

In this regard, although using force protection to limit one's own forces' casualties is not forbidden, such a strategy ought to consider the effect that this policy will have on the civilian population of the opposing side, including gendered impacts. Compiling data on how a certain means or method of warfare may impact the civilian population would enable commanders to make more informed decisions. Acknowledging that the effects of weapons in warfare are gendered is the first key step. In some cases, there has been progress in incorporating a gendered lens into positive IHL, as in the case of cluster munitions, where Article 5 of the convention banning these weapons provides that States shall furnish gender-sensitive assistance to victims. But most of this analysis remains rudimentary and is not clearly required. In the context of developing AI-assisted technologies, reflecting on the gendered impact of the algorithm is essential during AI development, acquisition, and application.

The process of encoding IHL principles of distinction and proportionality into AI systems provides a useful opportunity to revisit application of these principles with an eye toward interpretations that take into account modern gender perspectives both in terms of how such IHL principles are interpreted and how their application impacts men and women differently. As the recent update of the ICRC Commentary on the Third Geneva Convention illustrates, acknowledging and incorporating gender-specific needs in the interpretation and suggested application of the existing rules of warfare is not only possible, but also desirable.

Disclaimer: This post has been prepared as part of a research internship at the Erasmus University Rotterdam, funded by the European Union (EU) Non-Proliferation and Disarmament Consortium as part of a larger EU educational initiative aimed at building capacity in the next generation of scholars and practitioners in non-proliferation policy and programming. The views expressed in this post are those of the author and do not necessarily reflect those of the Erasmus University Rotterdam, the EU Non-Proliferation and Disarmament Consortium or other members of the network.

Originally posted here:

Embedding Gender in International Humanitarian Law: Is Artificial Intelligence Up to the Task? - Just Security

Posted in Artificial Intelligence | Comments Off on Embedding Gender in International Humanitarian Law: Is Artificial Intelligence Up to the Task? – Just Security

Valued to be $4.9 Billion by 2026, Artificial Intelligence (AI) in Oil & Gas Slated for Robust Growth Worldwide – thepress.net

Posted: at 12:12 pm


Read more here:

Valued to be $4.9 Billion by 2026, Artificial Intelligence (AI) in Oil & Gas Slated for Robust Growth Worldwide - thepress.net

Posted in Artificial Intelligence | Comments Off on Valued to be $4.9 Billion by 2026, Artificial Intelligence (AI) in Oil & Gas Slated for Robust Growth Worldwide – thepress.net

Artificial Intelligence as the core of logistics operation – Entrepreneur

Posted: at 12:12 pm

ADA is the artificial intelligence assistant that operates on the SimpliRoute platform. It helps solve about 25 tasks and is based on machine learning.


August 25, 2021 | 3 min read

"However much technology and data you integrate into software, in the end experience and learning are always the fundamental pillars. The important thing is to understand how to extract them intelligently." With that phrase, Álvaro Echeverría, co-founder and CEO of SimpliRoute, recalls the need that shaped the idea of creating an AI virtual assistant to optimize its logistics platform.

The startup is dedicated to optimizing routes for dispatch vehicles. The problem, according to Echeverría, was that although algorithms and data science do optimize logistics considerably, there are things no off-the-shelf software can evaluate, such as whether a street is in poor condition, too narrow for a truck, or unsafe at a certain time of day. That valuable information is held by the drivers.

"This premise led us to think of intelligence as the core of the operation, capable of learning from the behavior of the drivers who use the platform." Today, after more than a year of development, this has resulted in ADA, the first AI virtual assistant developed 100% in-house and integrated into a logistics platform, much like the familiar Siri on Apple devices.

Photo: SimpliRoute
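To make the idea concrete, here is a minimal, hypothetical sketch of how driver-reported conditions of the kind Echeverría describes (poor surfaces, narrow streets, unsafe hours) could be folded into routing costs before a shortest-path search. The flag names, penalty factors and graph are invented for illustration and are not SimpliRoute's actual implementation.

```python
# Hypothetical sketch: fold driver-reported street conditions into edge costs
# before running a shortest-path search. Names and weights are assumptions,
# not SimpliRoute's implementation.
import heapq

def adjusted_cost(base_minutes, reports):
    """Penalise edges that drivers flag as damaged, narrow, or unsafe."""
    penalty = {"poor_surface": 1.3, "too_narrow": 2.0, "unsafe_at_night": 1.5}
    factor = 1.0
    for flag in reports:
        factor *= penalty.get(flag, 1.0)
    return base_minutes * factor

def shortest_route(graph, start, goal):
    """graph: {node: [(neighbour, base_minutes, [driver_flags]), ...]}"""
    queue, seen = [(0.0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, minutes, flags in graph.get(node, []):
            heapq.heappush(queue, (cost + adjusted_cost(minutes, flags), nxt, path + [nxt]))
    return float("inf"), []

graph = {
    "depot": [("A", 10, []), ("B", 7, ["too_narrow"])],
    "A": [("customer", 5, [])],
    "B": [("customer", 4, ["poor_surface"])],
}
print(shortest_route(graph, "depot", "customer"))  # (15.0, ['depot', 'A', 'customer'])
```

The nominally shorter route through B loses once the driver-reported penalties are applied, which is the kind of street-level knowledge the article says default software cannot capture on its own.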

ADA has been fully integrated into SimpliRoute for a few months. Its mission is to send alerts and suggestions to the drivers of companies that use the platform, and to collect learnings that reshape future actions and further optimize routes. For example, based on what it has learned, the AI recommends which driver should use which vehicle according to each driver's performance on historic routes, suggests whether the company should change its fleet size based on historical utilization, and proposes optimized time windows for dispatching, among other tasks.
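One of those recommendations, matching drivers to vehicles on the basis of past performance, can be sketched as a simple assignment problem. The data, names and scoring below are invented, and this is only one plausible way to implement the idea, not a description of ADA's internals.

```python
# Hypothetical sketch of the driver-to-vehicle recommendation described above:
# treat it as an assignment problem over historical on-time rates.
import numpy as np
from scipy.optimize import linear_sum_assignment

drivers = ["ana", "luis", "sofia"]
vehicles = ["van_small", "van_large", "truck"]

# Invented historical on-time rate of each driver on each vehicle type (rows x cols).
on_time = np.array([
    [0.95, 0.80, 0.60],
    [0.85, 0.92, 0.75],
    [0.70, 0.78, 0.90],
])

# linear_sum_assignment minimises cost, so negate the scores to maximise them.
rows, cols = linear_sum_assignment(-on_time)
for r, c in zip(rows, cols):
    print(f"{drivers[r]} -> {vehicles[c]} (historical on-time rate {on_time[r, c]:.2f})")
```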

"For us it is a big step to implement our own AI that works as a core intelligence collecting real experience from the street. Our focus as a Chilean scaleup is to be at the technological forefront worldwide, and we will only achieve this by constantly improving our integration with artificial intelligence and machine learning," says the CEO of SimpliRoute.

Currently, the AI is already working together with the drivers on the new version of the app. And while for now it issues alerts and works in the background, it is expected that users will soon be able to interact directly with the AI to request information or advice.

Go here to read the rest:

Artificial Intelligence as the core of logistics operation - Entrepreneur

Posted in Artificial Intelligence | Comments Off on Artificial Intelligence as the core of logistics operation – Entrepreneur

Is This CEO Real or Fake? How Artificial Intelligence Is Taking Over the Event Industry – BizBash

Posted: at 12:12 pm

NVIDIA's CEO Jensen Huang had been addressing a virtual audience during his keynote at the GTC21 event like he had done before: from his kitchen.

But this time around, the kitchen keynote (as it had been dubbed) got a bit sci-fi. For 14 seconds of the 1-hour-and-48-minute presentation, Huang wasn't quite himself. Instead, a photorealistic digital clone of the CEO (and his kitchen) popped up on screen, and no one knew.

The GPU Technology Conference (GTC), which took place online this spring from April 12-16, showcases the latest in artificial intelligence, accelerated computing, intelligent networking, game development and more. So it makes sense that the company would show off its tech prowess there. Based in Santa Clara, Calif., NVIDIA designs graphics processing units for various industries, including gaming and automotive.

To create the virtual version of Huang, a full face and body scan was done to create a 3D model, then AI was trained to mimic his gestures and expressions, all via the company's Omniverse technology, a multidisciplinary collaboration tool for creating 3D virtual spaces. "Unlike the common approach of creating a 3D digital replica of a real person where you scan and capture as much data of that person as possible, we set a very difficult goal of replicating Jensen's behavior and performance without much data of him," explained David Wright, VP and executive creative director of NVIDIA.

Wright explained that the concept began around February, with the final version of Huang being built, using a voice recording of his keynote, roughly a week before the event.

The use of virtual stand-ins at digital events isn't necessarily new, and this past year has seen a sharp increase in the development and implementation of avatar-based platforms. But those characters don't really look like you or me.

But what if they could?

Founded in 2017, U.K.-based Synthesia set out to make it easier to create synthetic video content. It's now the world's largest platform for AI video generation, boasting the creation of six million videos to date.

"Just like Photoshop completely changed how we work with photos, keyboards and computers completely changed how we work with text from pen and paper, of course, and in music, synthesizers and software have also completely changed how we create songs today," explained Victor Riparbelli, CEO and co-founder of Synthesia, about the technology's impact on video production.

Interestingly, in order to create a video in Synthesia, you need to check the captcha box that states "I'm not a robot." Victor Riparbelli, the company's CEO and co-founder, said that the company continues to work on avatar realism, "making our avatars come to life more. You can add emotions to them, make them smile, make them sad, make them happy, make them nod their heads." Watch the video we made. (Screenshot: Courtesy of Synthesia)

To create an AI-generated video on Synthesia, users either select an existing avatar image or design a custom one by submitting three to four minutes of video footage and a script that's used to build talking head-style videos.

Primarily used by companies for training, learning, and marketing and sales purposes, Synthesia's API can be used to create personalized event invites, video chatbots or virtual facilitators, interactive videos, and interstitial videos during conferences.

"We're working on making experiences that today are text-driven and making them video-driven," Riparbelli said. "For example, a warehouse worker in a big tech company consuming their training as a two-minute video versus a five-page PDF is a much better experience."

One of Synthesia's clients, EY (formerly known as Ernst & Young), uses AI avatars "not as a replacement for taking real meetings, but after they've had a call, instead of sending an email, they can now send the video," he said. Volkswagen, meanwhile, uses the platform to train teams at its car dealerships around the world. The software is able to translate text into 55 languages, which is key since the company works with many global clients that need to communicate with remote team members across borders.

The company is also currently working with a conference producer to create AI-generated content for upcoming in-person events, using interactive videos at kiosks to help attendees navigate the space. Riparbelli also explained that the technology could be used to easily insert different data points, such as location or industry, into sponsored messages, similar to auto-generated email formats.
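The personalisation Riparbelli describes is, at its core, templating: attendee-specific data points are substituted into a script before a video is generated from it. The snippet below is a generic, language-level illustration of that step; it deliberately does not touch Synthesia's actual API, and the field names and messages are invented.

```python
# Generic illustration of per-attendee personalisation via templating.
# This is not Synthesia's API; the fields and copy are invented examples.
from string import Template

script = Template(
    "Hi $first_name, great to see $company represented at the $city summit. "
    "Here are three sessions picked for the $industry track."
)

attendees = [
    {"first_name": "Priya", "company": "Acme Logistics", "city": "Berlin", "industry": "supply chain"},
    {"first_name": "Tom", "company": "Northwind Health", "city": "Berlin", "industry": "healthcare"},
]

for attendee in attendees:
    print(script.substitute(attendee))   # each rendered script would seed one video
```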

"I think there's never been a bigger need among people to consume information by video," Riparbelli said. "I think businesses very much realized that if they communicate by text it's just not as effective. They want to communicate by video because they want to increase engagement. They want to increase conversion rates. They want to increase information retention. And video is just the natural way to do that." But he noted that the cost and lengthy production process of shooting videos in real life make that approach prohibitive and unfeasible for most companies.

According to company research, Riparbelli said that nine out of 10 people don't realize they're watching a synthetic video, probably because they're not looking for it.

This brings up the question of the ethical use of such content. Several years ago, AI-generated imagery, commonly known as deepfakes, of Hollywood actors presented in compromising positions made headlines, which raised concerns over the potential dangers of this type of content. Riparbelli explained that Synthesia has safeguards in place to prevent users from abusing the platform. That includes requesting consent when creating custom avatars.

Despite the possible pitfalls, both Wright and Riparbelli emphasized the desire to make the technology easier to use.

"Regarding the future, we do not pause. We are always pushing the boundaries of what is possible today and creating something new," Wright said. "We want to make it easier and faster for anyone to create digital characters. We will always be working on virtual humans, virtual avatars and the like, and we will continue to bridge the experience between the physical and virtual worlds closer together."

Read more from the original source:

Is This CEO Real or Fake? How Artificial Intelligence Is Taking Over the Event Industry - BizBash

Posted in Artificial Intelligence | Comments Off on Is This CEO Real or Fake? How Artificial Intelligence Is Taking Over the Event Industry – BizBash

Artificial Intelligence in Construction Market Estimated to Generate a Revenue of $2642.4 Million by 2026, Growing – GlobeNewswire

Posted: at 12:12 pm

New York, USA, Aug. 25, 2021 (GLOBE NEWSWIRE) -- According to a report published by Research Dive, the artificial intelligence in construction market is expected to generate revenue of $2,642.4 million by 2026, growing at a CAGR of 26.3% during the forecast period (2019-2026). The report provides an overview of the current market scenario, covering significant aspects such as growth factors, challenges, restraints, and opportunities during the forecast period, and supplies the market figures new entrants need to understand the market.
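As a quick sanity check on the headline figures, the arithmetic below works backwards from the $2,642.4 million 2026 estimate and the 26.3% CAGR to the base-year value they imply. It assumes seven compounding years for the 2019-2026 period; how the report itself counts the period is an assumption here.

```python
# Back-of-the-envelope check of the figures quoted above, assuming seven
# compounding years for the 2019-2026 forecast period.
cagr, years, value_2026 = 0.263, 7, 2642.4  # CAGR, years, $M

implied_2019 = value_2026 / (1 + cagr) ** years
print(f"implied 2019 market size: ${implied_2019:,.1f}M")  # roughly $515M
```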

Download FREE Sample Report of the Global Artificial Intelligence in Construction Market: https://www.researchdive.com/download-sample/46

Dynamics of the Market

Drivers: The application of artificial intelligence not only brings considerable efficiency and productivity to various construction processes, it also reduces the overall time required to complete any given task. Moreover, companies can save a significant amount of money by adopting AI in their construction processes. These factors are expected to drive the growth of the market during the forecast period.

Restraints: The lack of skilled and knowledgeable professionals is expected to impede the growth of the market during the forecast period.

Opportunities: Persistent technological advancements in AI and the internet of things (IoT) are expected to create vital opportunities for the growth of the market during the forecast period.

Check out How COVID-19 impacts the Global Artificial Intelligence in Construction Market: https://www.researchdive.com/connect-to-analyst/46

Segments of the Market

The report has divided the market into different segments based on application and region.

Application: Planning and Design Sub-segment to be Most Profitable

The planning and design sub-segment is expected to grow rapidly, at a CAGR of 28.9%, during the forecast period. Massive amounts of money are being invested in planning, design, research, and architecture for building construction, especially with the help of artificial intelligence, and this is expected to bolster the growth of the sub-segment during the forecast period.

Check out all Information and communication technology & media Industry Reports: https://www.researchdive.com/information-and-communication-technology-and-media

Region: Europe Anticipated to have the Highest Growth Rate

The European AI in construction market is expected to grow strongly in the coming years, at a CAGR of 26.7% during the forecast period. The adoption of Industry 4.0, eased governmental regulations, and advancements in the internet of things (IoT) are expected to fuel the growth of the market during the forecast period.

Access Varied Market Reports Bearing Extensive Analysis of the Market Situation, Updated With The Impact of COVID-19: https://www.researchdive.com/covid-19-insights

Key Players of the Market

Autodesk, Inc.; Building System Planning, Inc.; Smartvid.io, Inc.; Komatsu Ltd.; NVIDIA Corporation; Doxel Inc.; Volvo AB; and Dassault Systemes SE.

For instance, in May 2021, Procore Technologies Inc., a leading provider of construction management software, acquired INDUS.AI, an advanced AI construction platform, to add computer vision abilities to the Procore platform in order to maximize its efficiency and future profitability.

The report also summarizes many important aspects, including the financial performance of the key players, SWOT analysis, product portfolio, and latest strategic developments. Click Here to Get Absolute Top Companies Development Strategies Summary Report.

TRENDING REPORTS WITH COVID-19 IMPACT ANALYSIS

Point of Sale Software Market: https://www.researchdive.com/8423/point-of-sale-software-market

Quantum Computing Market: https://www.researchdive.com/8332/quantum-computing-market

Payment Processing Solutions Market: https://www.researchdive.com/416/payment-processing-solutions-market

Here is the original post:

Artificial Intelligence in Construction Market Estimated to Generate a Revenue of $2642.4 Million by 2026, Growing - GlobeNewswire

Posted in Artificial Intelligence | Comments Off on Artificial Intelligence in Construction Market Estimated to Generate a Revenue of $2642.4 Million by 2026, Growing – GlobeNewswire
