Daily Archives: May 15, 2022

How is AI improving aircraft inspections after hard landings? – FreightWaves

Posted: May 15, 2022 at 10:12 pm

Hard landings can be hard on passengers, cargo and the aircraft itself. Aviation safety practices require airlines to check the landing gear before the plane returns to service. ATR, the Airbus-Leonardo joint venture that manufactures turboprop aircraft, says it has developed AI that can significantly speed up the inspection process and save airlines money.

Bumpy landings typically happen when an aircraft descends at a faster rate than the optimal 2 or 3 feet per second. The idea is to gradually lower the plane without letting it float. Pilot error can cause hard landings, but pilots frequently make them on purpose because of wet weather, wind gusts, or short or busy runways. Pilots prefer to call these landings "firm."

Bouncing on the tarmac with high force causes structural stress on landing gear components, which is why manufacturers call for follow-up inspections. But defining what constitutes a firm landing, and when an aircraft inspection is required, isn't so simple.

Boeing says accelerations recorded on flight data recorders are an inaccurate indicator of hard landings and that using accelerometers to measure G forces is unreliable and impractical.

Boeing believes pilot judgment and reports describing the landing remain the best source of information for ascertaining if a hard landing has occurred. "Pilots ordinarily land the airplane well within the allowable limits and become accustomed to the sensation. Airplane flight and cabin staff typically report a hard landing when sink rates approach 4 feet per second. All Boeing model airplanes have been designed for a 10 feet per second sink rate at the maximum designed landing weight and 6 feet per second at the maximum designed takeoff weight. These values are considered when designing both the main landing gear and nose landing gear assemblies as well as the wing and fuselage support structure," states an explanation on a Boeing 737 technical website.
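To make those thresholds concrete, here is a minimal sketch of how a reported sink rate could be triaged against the figures quoted above. It is illustrative only, not an actual Boeing or airline procedure, and the function and constant names are our own.

```python
# Illustrative only: toy triage of a reported sink rate using the
# thresholds quoted above, not an actual Boeing or airline procedure.
FIRM_REPORT_FPS = 4.0        # crews typically report a hard landing near this
DESIGN_LIMIT_MLW_FPS = 10.0  # design sink rate at maximum designed landing weight
DESIGN_LIMIT_MTOW_FPS = 6.0  # design sink rate at maximum designed takeoff weight

def triage_landing(sink_rate_fps: float, at_max_takeoff_weight: bool) -> str:
    """Map a reported sink rate to a coarse follow-up category."""
    limit = DESIGN_LIMIT_MTOW_FPS if at_max_takeoff_weight else DESIGN_LIMIT_MLW_FPS
    if sink_rate_fps >= limit:
        return "exceeds design sink rate: full hard-landing inspection"
    if sink_rate_fps >= FIRM_REPORT_FPS:
        return "crew-reportable: hard-landing inspection per the manual"
    return "within normal limits"

print(triage_landing(4.5, at_max_takeoff_weight=False))
```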

The hard landing inspection involves a close visual inspection of various structural components to determine if further inspections are warranted. One possible sign of damage is leakage of hydraulic fluid from the shock strut. A second inspection could involve removing parts of the landing gear.

ATR, in conjunction with aviation equipment maker Safran, said last week it has developed Smart Lander, a landing gear diagnostics service that uses sophisticated data analysis to make safety determinations faster. The service relies on machine learning technology, based on hundreds of thousands of hard landing simulations, to issue recommendations to operators on the maintenance actions to be taken according to the hardness of the landing and the load level sustained by the landing gear.
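ATR and Safran have not published implementation details, so the sketch below is only a generic illustration of the kind of pipeline that description suggests: a model trained on simulated landings that maps landing hardness and gear load to a maintenance recommendation. Every feature name, threshold and label here is invented.

```python
# Generic sketch of a simulation-trained maintenance recommender; all
# features, thresholds and labels are invented, not ATR's or Safran's.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 100_000  # stand-in for "hundreds of thousands" of simulated landings

sink_rate = rng.uniform(2.0, 12.0, n)   # ft/s at touchdown
gear_load = rng.uniform(0.5, 2.0, n)    # load factor sustained by the gear
X = np.column_stack([sink_rate, gear_load])

# Invented rule standing in for simulation-derived ground truth.
y = np.select(
    [(sink_rate > 9) | (gear_load > 1.7), (sink_rate > 6) | (gear_load > 1.3)],
    ["send_to_maintenance_base", "detailed_inspection"],
    default="return_to_service",
)

model = RandomForestClassifier(n_estimators=50).fit(X, y)
print(model.predict([[7.2, 1.1]]))  # recommendation for one recorded landing
```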

Smart Lander helps operators determine whether aircraft can be permitted to continue commercial operations or need to be sent to a maintenance base. The process takes less than an hour, compared to more than a week previously, ATR said.

"Our former process could take 10 to 20 working days. It required analyses from both the ATR Design Office and Safran Landing Systems to decide whether the aircraft was fit to return to service," said David Brigante, ATR senior vice president of customer support and services, in a news release. "With Smart Lander, we will be able to massively reduce our response times, therefore boosting aircraft availability, reducing costs for customers and enhancing customer satisfaction, while maintaining the same level of analysis quality."

ATSG contract

Last month, Air Transport Services Group in Wilmington, Ohio, selected Safran Landing Systems for the retrofit of more than 30 Boeing 767 freighters operated by its subsidiary cargo airlines. The transition to Safran's wheels and carbon brakes gives ATSG a common wheel-and-brake configuration across its operating fleet of freighter aircraft, with carbon brakes that are lighter than other brake types.


How 10 Skin Tones Will Reshape Google's Approach to AI – WIRED

Posted: at 10:12 pm

For years, tech companies have relied on something called the Fitzpatrick scale to classify skin tones for their computer vision algorithms. Originally designed for dermatologists in the 1970s, the system comprises only six skin tones, a possible contributor to AI's well-documented failures in identifying people of color. Now Google is beginning to incorporate a 10-skin-tone standard, called the Monk Skin Tone (MST) scale, across its products, from Google Images search results to Google Photos and beyond. The development has the potential to reduce bias in data sets used to train AI in everything from health care to content moderation.

Google first signaled plans to go beyond the Fitzpatrick scale last year. Internally, the project dates back to a summer 2020 effort by four Black women at Google to make AI work better for people of color, according to a Twitter thread from Xango Eyeé, a responsible AI product manager at the company. At today's Google I/O conference, the company detailed how wide an impact the new system could have across its many products. Google will also open-source the MST, meaning it could replace Fitzpatrick as the industry standard for evaluating the fairness of cameras and computer vision systems.

"Think anywhere there are images of people's faces being used where we need to test the algorithm for fairness," says Eyeé.

The Monk Skin Tone scale is named after Ellis Monk, a Harvard University sociologist who has spent decades researching colorism's impact on the lives of Black people in the United States. Monk created the scale in 2019 and worked with Google engineers and researchers to incorporate it into the company's product development.

"The reality is that life chances, opportunities, all these things are very much tied to your phenotypical makeup," Monk said in prepared remarks in a video shown at I/O. "We can weed out these biases in our technology from a really early stage and make sure the technology we have works equally well across all skin tones. I think this is a huge step forward."

An initial analysis by Monk and Google research scientists last year found that participants felt better represented by the MST than by the Fitzpatrick scale. In an FAQ published Wednesday, Google says that going beyond 10 skin tones can add complexity without extra value, unlike in industries such as makeup, where companies like Rihanna's Fenty Beauty offer more than 40 shades. Google is continuing work to validate the Monk Skin Tone scale in places like Brazil, India, Mexico, and Nigeria, according to a source familiar with the matter. Further details are expected soon in an academic research article.

The company will now expand its use of the MST. Google Images will offer an option to sort makeup-related search results by skin tone based on the scale, and filters for people with more melanin are coming to Google Photos later this month. Should Google adopt the 10-skin-tone scale across its product lines, it could have implications for fairly evaluating algorithms used in Google search results, Pixel smartphones, YouTube classification algorithms, Waymo self-driving cars, and more.
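As a rough illustration of how a 10-point scale could drive such filters, the sketch below snaps a scalar skin-tone estimate to the nearest of 10 buckets and groups images accordingly. The bucket values are invented placeholders, not the published MST swatches, and how the per-image tone estimate is produced is out of scope here.

```python
# Hedged sketch: snap an estimated tone value to 1 of 10 buckets, then
# group images by bucket. Bucket values are placeholders, not MST swatches.
MST_BUCKETS = [i / 9 for i in range(10)]  # 10 evenly spaced tone values in [0, 1]

def nearest_mst_bucket(estimated_tone: float) -> int:
    """Return the 1-based bucket closest to an estimated tone value."""
    return min(range(10), key=lambda i: abs(MST_BUCKETS[i] - estimated_tone)) + 1

results = [("photo_a.jpg", 0.22), ("photo_b.jpg", 0.78)]  # (image, estimated tone)
by_bucket: dict[int, list[str]] = {}
for image, tone in results:
    by_bucket.setdefault(nearest_mst_bucket(tone), []).append(image)

print(by_bucket)  # {3: ['photo_a.jpg'], 8: ['photo_b.jpg']}
```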


How visual-based AI is evolving across industries – Free Press Journal

Posted: at 10:12 pm

Artificial intelligence is transforming the business world as a whole with all its applications and potential, with visual-based AI capable of interpreting digital images and videos.

Visual-based AI, which refers to computer vision, is an application of AI that is playing a significant role in digital transformation by enabling machines to detect and recognize not just images and videos but also the various elements within them, such as people, objects and animals, and even sentiment, emotion and other parameters.

Artificial intelligence is now evolving further across various industries and sectors. For a better sense of how it is aiding industries, here are some illustrations:

Transport: Computer vision makes for a better transport experience. Video analytics combined with automatic number plate recognition can help track and trace violators of traffic safety laws (speed limits, lane violations, etc.) and stolen or lost cars, and can assist in toll management and traffic monitoring and control.

Aviation: Visual AI can help provide prompt assistance to elderly passengers and others requiring assistance (physically challenged passengers, pregnant women, etc.). It can also enable a new face-as-a-ticket option for easy and fast boarding, help track down lost baggage around the airport, and support security surveillance of passengers and suspicious objects.

Manufacturing: Computer vision comes into play here by analyzing every component of the production line and proactively diagnosing everything from a minor defect to permanent damage, allowing maintenance to be carried out seamlessly.

[Chart: Global AI market | Grand View Research]

Artificial intelligence can be applied in many other industries, including but not limited to aviation, highways, agriculture, sports, mining, BFSI and more. The rapid evolution of AI is making it as efficient as the human eye at recognizing and referencing objects and patterns; combined with decision-science algorithms, it can deliver decisions with moderate or high confidence to authorities or industry-specific users.

The future of computer vision is very promising given rapidly advancing technologies; some of them, such as visual referencing and decision science-based automated cars and automated checkouts, are already here today.

New-age visual-based artificial intelligence and computer vision solutions are becoming more innovative and interactive. They specialize in algorithms for face, image, object, video, sentiment and emotion recognition, among others. Most of these solutions run on a patented platform; we use Gryphos. These platforms build deep learning systems using machine learning and neural networks.

A few of AI's sectoral deployments:

Automatic number plate recognition to catch reckless drivers and speedsters on highways

Computer vision-based predictive maintenance for production machines in manufacturing sectors

An application utilizing body behavioral analysis to help find and assist elderly or physically impaired individuals at airports

Video recognition-based technology that allowed stolen artwork to be tracked down

A variety of solutions for enhancing security and customer satisfaction for a Colombian BFSI holding company

Monitoring and alerting when staff are not wearing safety equipment or maintaining hygiene around open food packages at one of the largest in-flight catering service providers

Authenticating students for online examinations via face recognition and monitoring student body behavior during the examinations.

Benefits of AI solutions

AI has been a boon, providing state-of-the-art solutions to many sectors and industries such as transport, aviation, banking and manufacturing. It's an ever-evolving industry that is helping other industries achieve precision-based perfection.

(Dr. Shreeram Iyer, Chairman & Group CEO, Prisma AI. Views are personal)


Lang.ai looks to help orgs extract value from customer conversations, with AI – VentureBeat

Posted: at 10:12 pm


Turning conversations, from customer support requests to user feedback, into tangible business value is no easy task. It's also an ideal use case for AI-based automation.

Among the vendors helping organizations use AI to derive value from customer conversations is San Francisco-based Lang, which announced today that it has raised $10.5 million in a series A round of funding. Lang's platform integrates with help desk, customer relationship management and user-facing operations for feedback and requests. The system uses an unsupervised learning model to adapt to the constantly changing flow of information by categorizing data and then helping to determine what should be done with it to improve user experience and business outcomes.

"There has been a growth in the volume of conversations that business teams have to deal with, especially things like customer support, which has been accentuated during the pandemic," Jorge Peñalva, CEO of Lang, told VentureBeat. "Sure, there are a lot of AI technologies, but in general, they've been built by engineers for engineers, so they have a lot of complexity. We believe there should be a better way for business users to use AI."

Lang certainly isn't alone in its corner of the market. Zendesk, for example, has built out its AI capabilities in recent years to help with its customer service platform. A core element of those capabilities came from the company's 2021 acquisition of Cleverly.ai.

CRM giant Salesforce is also very active in the AI space with its Einstein platform. Contact center technology vendor Genesys actively continues to grow its AI capabilities with its Google partnership.

A recent report from Fortune Business Insights estimated the size of the global customer experience management market at $11.3 billion in 2022. The report forecasts the market to grow at a compound annual growth rate (CAGR) of 16.2% over the next seven years, reaching $35.5 billion by 2029.

Peñalva is keenly aware of the market potential and the competition. In his view, Lang provides a differentiated approach thanks to its use of an unsupervised AI model.

A common approach to enabling AI is the use of a supervised model that trains against a given set of data. The challenge with the supervised model is that the AI is often trained on static data. Peñalva noted that data changes quickly, and for organizations to be truly responsive to users, training on static data isn't good enough. That's why his company developed a purpose-built unsupervised learning model that continuously analyzes ever-changing data.

How it works: Lang connects to the customer data, and the unsupervised model analyzes it, transforming it into simple "concepts," which Peñalva explained are business terms for items or operations that a company needs to track. A concept could be a delivery date, a product or a credit rating, for example. The AI model extracts the key concepts in a conversation automatically, so they can be grouped into categories that make sense for a particular business.

The interface to the categories is provided to users in a no-code model, enabling an organization to group things as required. The no-code interface also helps to provide a form of explainable AI, so users can easily see how the unsupervised model extracted concepts and which categories the concepts are placed into.
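Lang's model is proprietary, so the following is only a generic sketch of the unsupervised pattern the article describes: embed incoming conversations, let clusters emerge rather than training on fixed labels, and surface each cluster's top terms as candidate "concepts" a business user could then name in a no-code interface.

```python
# Generic sketch of unsupervised concept discovery over support tickets;
# this is not Lang's implementation, just the pattern described above.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

tickets = [
    "My card was declined at checkout",
    "When will my delivery arrive?",
    "Package still not here after a week",
    "Card charged twice for one order",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(tickets)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Surface each cluster's highest-weight terms as candidate concepts.
terms = vectorizer.get_feature_names_out()
for c in range(km.n_clusters):
    top = km.cluster_centers_[c].argsort()[::-1][:3]
    print(f"cluster {c}:", [terms[i] for i in top])
```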

Using AI to derive business value from conversations can also help organizations to scale operations.

One example is Lang customer Ramp, which provides online spend-tracking services. According to Peñalva, Ramp's challenge was that it wanted to scale up operationally, quickly. With Lang, Ramp was able to more rapidly sort customer requests into categories and then apply automated workflows to accelerate resolution. For example, Ramp can make sure that an inquiry about a credit issue is routed to an agent who can respond quickly to that type of request.

Ramp also uses Lang to understand customer feedback. As Ramp builds out new products, feedback and requests are analyzed by Lang to better understand how each new product is being received and what changes, if any, need to be made to optimize user experience.

"We really operationalize their support data for automation and also for internal insights that other teams can use," he said.

With the new series A funding in hand, Peñalva wants to continue helping organizations more easily derive business value from data and automate repetitive tasks.

"We think a lot of companies are gonna be thinking these days about how they become more efficient," Peñalva said. "There are a lot of inefficiencies when you think about the repetitive tasks that people are doing in their day-to-day jobs, when they really should focus on more high-level tasks."

The new funding round was led by Nava Ventures and included the participation of Oceans Ventures, Forum and Flexport Fund.


Robot Artists And Us: Who Decides The Aesthetics Of AI? – Worldcrunch

Posted: at 10:12 pm

Ai-Da produces portraits of sitting subjects using a robotic hand attached to her lifelike feminine figure. She's also able to talk, giving detailed answers to questions about her artistic process and attitudes towards technology. She even gave a TEDx talk about "The Intersection of Art and AI" (artificial intelligence) in Oxford a few years ago. While the words she speaks are programmed, Ai-Da's creators have also been experimenting with having her write and perform her own poetry.

But how are we to interpret Ai-Da's output? Should we consider her paintings and poetry original or creative? Are these works actually art?

What discussions about AI and creativity often overlook is the fact that creativity is not an absolute quality that can be defined, measured and reproduced objectively. When we describe an object (for instance, a child's drawing) as being creative, we project our own assumptions about culture onto it.

Indeed, art never exists in isolation. It always needs someone to give it art status. And the criteria by which you judge something to be art are informed by both your individual expectations and broader cultural conceptions.

If we extend this line of thinking to AI, it follows that no AI application or robot can objectively be creative. It is always us humans who decide if what AI has created is art.

In our recent research, we propose the concept of the "Lovelace effect" to refer to when and how machines such as robots and AI are seen as original and creative. The Lovelace effect, named after Ada Lovelace, the 19th-century mathematician often called the first computer programmer, shifts the focus from the technological capabilities of machines to the reactions and perceptions of those machines by humans.

The programmer of an AI application or the designer of a robot does not just use technical means to make the public see their machine as creative. This also happens through presentation: how, where and why we interact with a technology; how we talk about that technology; and where we feel that technology fits in our personal and cultural contexts.

[Image: Ai-Da standing next to her self-portrait in the exhibition "Ai-Da: Portrait of the Robot." Credit: aidarobot/Instagram]

Our reception of Ai-Da is, in fact, informed by various cues that suggest her human and artist status. For example, Ai-Da's robotic figure looks much like a human's; she's even called a "she," with a feminine-sounding name that not-so-subtly suggests an Ada Lovelace influence.

This femininity is further asserted by the blunt bob that frames her face (although she has sported some other funky hairstyles in the past), perfectly preened eyebrows and painted lips. Indeed, Ai-Da looks much like the quirky title character of the 2001 film Amélie. This is a woman we have seen before, either in film or in our everyday lives.

Ai-Da also wears conventionally artsy clothing, including overalls, mixed fabric patterns and eccentric cuts. In these outfits, she produces paintings that look like a human could have made them, and which are sometimes framed and displayed among human work.

We also talk about her as we would a human artist. An article in the Guardian, for example, gives a shout-out to the world premiere of her solo exhibition at the 2022 Venice Biennale. If we didn't know that Ai-Da was a robot, we could easily be led to appreciate her work as we would that of any other artist.

Some may see robot-produced paintings as coming from creative computers, while others may be more sceptical, given the fact that robots act on clear human instructions. In any case, attributions of creativity never depend on technical configurations alone: no computer is objectively creative. Rather, attributions of computational creativity are largely inspired by contexts of reception. In other words, beauty really is in the eye of the beholder.

As the Lovelace effect shows, through particular social cues, audiences are prompted to think about output as art, systems as artists, and computers as creative. Just like the frames around Ai-Da's paintings, the frames we use to talk about AI output indicate whether or not what we are looking at can be called art. But, as with any piece of art, your appreciation of AI output ultimately depends on your own interpretation.

Leah Henrickson, Lecturer in Digital Media, University of Leeds, and Simone Natale, Associate Professor in Media Theory and History, Università di Torino

This article is republished from The Conversation under a Creative Commons license. Read the original article.


TapType: AI-Assisted Hand Motion Tracking Using Only Accelerometers – Hackaday

Posted: at 10:12 pm

The team from the Sensing, Interaction & Perception Lab at ETH Zürich, Switzerland have come up with TapType, an interesting text input method that relies purely on a pair of wrist-worn devices that sense acceleration values when the wearer types on any old surface. By feeding the acceleration values from a pair of sensors on each wrist into a Bayesian inference classification type neural network, which in turn feeds a traditional probabilistic language model (predictive text, to you and me), the resulting text can be input at up to 19 WPM with 0.6% average error. Expert TapTypers report speeds of up to 25 WPM, which could be quite usable.
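The team's classifier and language model aren't public, but the fusion idea is straightforward to sketch: per-tap probabilities from the motion classifier are reweighted by a predictive-text prior over the next character. The toy below uses made-up probability values purely for illustration.

```python
# Toy Bayesian fusion: combine the motion classifier's per-character
# likelihoods with a language-model prior; all values are made up.
def fuse(tap_likelihoods: dict[str, float], lm_prior: dict[str, float]) -> str:
    """Pick the character maximizing P(accel | char) * P(char | context)."""
    chars = set(tap_likelihoods) | set(lm_prior)
    scores = {c: tap_likelihoods.get(c, 0.0) * lm_prior.get(c, 0.0) for c in chars}
    return max(scores, key=scores.get)

# The accelerometers find "q" and "w" almost equally likely, but after
# "the quic" a predictive-text model strongly favors "k".
print(fuse({"q": 0.40, "w": 0.38, "k": 0.22},
           {"k": 0.90, "q": 0.05, "w": 0.05}))  # -> k
```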

Details are a little scarce (it is a research project, after all) but the actual hardware seems simple enough, based around the Dialog DA14695, which is a nice Cortex-M33 based Bluetooth Low Energy SoC. This is an interesting device in its own right, containing a sensor node controller block that is capable of handling sensor devices connected to its interfaces, independent from the main CPU. The sensor device used is the Bosch BMA456 3-axis accelerometer, which is notable for its low power consumption of a mere 150 µA.

The wristband units themselves appear to be a combination of a main PCB, hosting the BLE chip and supporting circuitry, connected to a flex PCB with a pair of the accelerometer devices at each end. The assembly was then slipped into a flexible wristband, likely constructed from 3D printed TPU, but we're just guessing really, as the progression from the first embedded platform to the wearable prototype is unclear.

What is clear is that the wristband itself is just a dumb data-streaming device, and all the clever processing is performed on the connected device. Training of the system (and subsequent selection of the most accurate classifier architecture) was performed by recording volunteers typing on an A3 sized keyboard image, with finger movements tracked with a motion tracking camera, whilst recording the acceleration data streams from both wrists. There are a few more details in the published paper for those interested in digging into this research a little deeper.

The eagle-eyed may remember something similar from last year, from the same team, which correlated bone-conduction sensing with VR type hand tracking to generate input events inside a VR environment.


AI-enhanced super workers could be a reality with this AR headband that can be fastened to industrial helmets – Yanko Design

Posted: at 10:12 pm

Frontline workers in hazardous industrial environments are often overworked due to labor shortages and are exposed to perilous situations, which can lead to errors and increase the proportion of man-made hazards. Since state-of-the-art technology is changing the face of other industries, it is only fitting to integrate augmented reality into the helmet, the most important accessory of frontline workers in oil & gas, power, aviation, railways and many other such industries, to solve these problems.

Enter X-Craft, the first augmented reality device to achieve an explosion-proof protection rating. Designed by Rokid, the X-Craft was created to bring a technological transformation to the industry and produce a generation of super workers. Basically, this is an industrial explosion-proof AR headband that can fit around safety helmets and hard hats to arm frontline workers with technology that can assist in inspections, remote collaboration, training and day-to-day operations.

Designer: Rokid

The headband, in addition to AI and AR integration, is also embedded with a 5G module to ensure brisk processing and real-time information storage and transmission. The headband is further equipped with a 40° field of view (FoV) display right in front of the eye and has a movable camera positioned just above it. A secondary flip-to-switch camera is placed further up, around the forehead when the headband is worn. The display employs waveguide optical technology to ensure it has a see-through aesthetic with high contrast and light transmission of up to 80 percent.

For those who work in higher-risk environments, the headband, featuring a user-friendly control knob at the temple, can be further fitted with other peripherals and accessories, such as an industrial endoscope or infrared sensors, to enhance its capabilities and be more assistive to workers. Even with all the tech embedded and the possibility of additional attachments, the headband remains comfortable to wear. Its weight is evenly distributed, and the headband's detachable buckle ensures it can be wrapped around a large variety of helmets and hard hats.

Born to assist super workers in the highest-risk environments, the X-Craft is made to beat the elements. The IP66 water- and dustproof-rated headband can easily process large amounts of information and data over the cloud and facilitate real-time remote collaboration between teams. To ensure what is seen and transmitted is without a glitch, the X-Craft features three AI-enabled noise reduction mics that pick up sound accurately in the noisiest industrial environments.


AI surveillance cameras now being used to detect potential threats – FOX 5 New York

Posted: at 10:12 pm

[Video: Using artificial intelligence to detect guns. A Manhattan company is using AI to help security cameras quickly alert people to possible threats.]

NEW YORK - It's just a simulation, but it feels real, as the employee carrying out the demonstration inside Actuate's offices in Midtown Manhattan walks in carrying a mock AR-15 rifle.

In a matter of seconds, after the employee is visible in the office space, a surveillance camera in the room goes into action.

A green flash pulsates across a monitor set up in another corner after the video management system recognizes a potential threat in the room.

"As you can see it's pulsing green within about a second or two. That means an alert has been registered," says Actuate CEO and co-Founder Sonny Tai. "Our AI model has made a detection and sent an alert to video management systems."

The company is utilizing emerging technology that uses artificial intelligence software for surveillance cameras to detect potential problems.

"We initially built our company as a gun detection company in response to a lot of active shooter threats that happened over the years," Tai said.

After the Las Vegas shooting in 2017, Tai, a former captain in the U.S. Marine Corps, surveyed law enforcement agencies across the country, asking what could be done.

"A common refrain heard from a lot of them was a wish security cameras could automatically identify threats," he said.

Most businesses, office buildings and public spaces already have cameras.

Actuate doesn't actually install any new devices. Instead, the company connects cameras online in an encrypted way and then runs its algorithms, which are programmed to search for problems.

"99% plus of cameras are used in a very forensic way. Which means that if something bad happens, they come back and review the footage hours or even days later and try to catch a bad guy," Tai said. "What we're looking to do is in a non-privacy intrusive way, so we don't track any facial recognition, or biometrics or PI, but be able to identify potential indicators of threats to safety and security, and send these alerts out in real-time to people who would need to respond."

It could be at a used car lot or construction site at night with cameras programmed to notify when a suspicious person walks near a fence.

Or think back to our first example.

Ellie walks into an office carrying a gun; after it's detected, a notification is sent to security teams or anyone watching a bank of monitors, alerting them to when and where the threat is unfolding.

"This is something that they'll get an alert within a couple of seconds to be able to pull this up and you'll be able to see the bounding box drawn around the weapon that's detected."

Of course, there are privacy concerns.

"That's something that we as a company we made a conscious decision about. It's actually in our company values on the website is that we don't do any form of facial recognition, we don't track any form of skin color or even the color of your clothing or anything like that," Sonny says. "We just want to identify objectively what is happening security camera frame, as for somebody who's pulling out a weapon, or if somebody's trespassing

Then there's what happened in NYC this April: the mass shooting on a Brooklyn subway car, with people desperately escaping onto a Sunset Park subway platform.

The MTA cameras infamously were not working at multiple locations.

If they had been working and connected to the internet, Tai says, Actuate's technology may not have been able to prevent what the gunman did, but it could have made a difference in the precious seconds afterward.

"We can immediately react and send security teams to go in and neutralize the threat. and perhaps equally importantly, if we send this information to the MTA, for example, they can better understand the situation or ground executor defensive evacuation measures in a much more timely way. so the people who need to evacuate notice the situation at hand instead of just operating that fog of war, chaos, and confusion."

The Actuate technology is currently being utilized by 20,000 cameras across the country, including at schools, college campuses, construction sites and used car lots.

Could it make a dent in the crime wave gripping New York City?

Sonny says it could be one tool for a larger set of problems, if the cameras are working.


BPM 4.0: India's back-office industry is AI-skilling its workforce – BusinessLine

Posted: at 10:12 pm

India's vibrant IT-enabled services (ITeS) sector, commonly referred to as Business Process Management (BPM), is reinventing itself yet again, to both adapt to and shape the nature of demand coming from global customers.

According to a panel of eminent speakers at a BusinessLine-Nasscom roundtable on how BPM 4.0 is heralding an AI opportunity, the sector is training its workforce to be more nimble and artificial intelligence (AI)-skilled, to enable win-win business outcomes for customers and all stakeholders.

The addressable global BPM market this year is $254 billion and is estimated to grow to $336 billion by 2025. Over the last couple of decades, India has emerged as the pre-eminent back office of the world.

This year, according to Nasscom estimates, the Indian BPM industry will be worth $44 billion, which includes delivering services revolving around enterprise front and back office and contact centers, apart from vertically oriented BPM. The sector employs more than 1.4 million people directly and several multiples of that indirectly.

The Indian BPM industry has undergone a paradigm shift over the few decades of its existence. While BPM 1.0 was all about cost-saving and labour arbitrage, in BPM 2.0 the emphasis was on operational excellence through efficiency and quality of processes. BPM 3.0 heralded the advent of deep technology and domain expertise.

However, the pandemic of the recent past has seen customers laying stress on enabling business outcomes, resilience and agility. This is leading to the latest iteration of the industry, BPM 4.0, with players investing in training and re-orienting their workforce to enable it.

Sukanya Selvarajan, Innovation Head and CFO Operations at Tata Consultancy Services, said that the workforce in BPM 4.0 needs to be design-thinking-led and data- and digital-driven, apart from the usual non-negotiables of being domain- and technology-centric.

Prashant Achanta, CTO of Firstsource, said that there was a continuum between BPM 3.0 and the latest avatar, for now at least, and highlighted how Indian players could take advantage of the emerging opportunities in the sector. Amneet Chowdhury, VP and Global Head for Infrastructure Technologies at Sutherland Global Services, stressed how building intellectual property and platforms using AI and ML is giving Indian players an edge.

Published on May 15, 2022


Leonard: Is It All In Your Head? Stress, Disease And The Mind-Body Connection – Los Alamos Daily Post

Posted: at 10:10 pm

By Laura Leonard, Doctor of Chiropractic, Los Alamos

Getting sick isn't exactly all in your head, but your thoughts about things do play a critical part in how well your body's physiology handles stress. Psychoneuroimmunology is a field of study that investigates the interactions between thoughts and physiology.

Since the 1960s, research into this field has shown that our thinking patterns directly affect the immune, nervous and hormonal systems. Thoughts about life stressors have such a large impact on health that we can't afford to ignore this interaction as we go through life.

Research on patients with various types of cancer tells us that patients with depression and/or a history of multiple stressful life events have lower survival rates. In fact, these risk factors have more impact on survival than a history of alcohol and tobacco use. Perceived social isolation and loneliness are also predictors of cancer survival.

Chronic daily stress also leads to disease especially when we feel like we have no control over our circumstances. Emotions like anger are well known to increase the likelihood of having a heart attack. Mind-body-health interactions are the reason why I teach my patients to take control of their thoughts and personal actions rather than worrying about other people and events they have no influence over.

At the end of the day, that is where health starts: with personal empowerment and choosing not to be a victim of the things going on around us. Where the tough work begins is in being willing to look at your subconscious thoughts, the negative thinking patterns that are acquired in childhood and filed away until something triggers us.

Our brains are wired to keep these traumas filed away until something in the environment brings them up. PTSD, anxiety and depression are manifestations of these triggers. When our brain gets triggered in the now, our physiology has no idea if this is an imminent threat or if it's overreacting. Dealing with our active thoughts in an empowering way is only part of the puzzle. Many of our beliefs about ourselves and the world are subconscious, stored away until something triggers those thoughts to come out. Sadly, most of what we store in the subconscious is negative, because that is what allowed our ancestors to survive. In the modern world, these stored thoughts are responsible for making us sick.

Awareness of mind-body-health is becoming mainstream thinking and there are many online resources to get you started. Apps like Headspace, UCLA Mindful, iBreathe and Mindset: Daily Motivation all provide simple ways to start shifting your mind and physiology on the daily. If you are ready to dig into your subconscious, my personal preference is paying attention to emotional triggers and journaling about them when they arise. Triggers always have a deeper origin and I believe those roots exist in childhood and past traumas.

If you like apps, ThinkUp is a useful tool to get positive affirmations for healing old patterns once you start peeling back the onion.

Dr. Leonard's practice focuses on posture and performance using a combination of soft tissue release, adjustments and exercise recommendations. She also coaches patients on nutrition, self-care and body awareness so they can manage themselves between visits. Los Alamos Chiropractic Center is located in the Mary Deal building on Trinity.
