How technologies such as AI, ML, and deep learning are paving the way for the digitalization of the construction industry – Times of India

The construction industry has long relied on tedious manual work, obsolete practices, legacy systems and exhausting paperwork. Moreover, the global construction industry has been relatively slow to adopt new technology. However, the advent of cutting-edge technologies such as AI, ML and deep learning in the AEC (Architecture, Engineering & Construction) sector is transforming the way things are done in the industry.

AI-led Digital Transformation

By harnessing the power of AI, construction industry stakeholders can boost efficiency, cost savings, agility and profitability by automating tedious manual processes and replacing legacy systems and paperwork with digital workflows. AI-powered autonomous and semi-autonomous functionality also helps streamline and accelerate project completion. Digitalizing construction projects with AI enables AEC stakeholders to mitigate risk and enforce safety protocols for better site operations. AI makes it easier to identify pre- and post-construction issues and helps resolve crises proactively and on time. It can support real-time decisions through automatic alerts and notifications, and it helps the industry overcome challenges such as coordinating offsite and onsite resources, inadequate safety measures, labor shortages, cost overruns and poor schedule management.

Proactive planning and management with ML

Machine learning has been steadily gaining traction in the AEC industry.

Machine learning is helping to improve safety and to boost productivity, quality and other vital measures. The technology improves construction design and planning processes with a high level of precision in estimation. ML-led digitalization also enables AEC firms to strengthen decision-making, make informed predictions, streamline business and workflow operations, proactively manage clients' expectations and be future-ready.

Streamlined processes with Deep Learning

Leveraging deep learning not only helps streamline construction processes proactively but also supports construction project bidding, site planning and management, resource and asset planning, risk management, cost management, client communication, and health and safety management.

AI, ML and deep learning hold even more exciting possibilities for the AEC industry. These technologies will shape the future of construction and bring about a positive transformation.

Views expressed above are the author's own.

Heard on the Street 9/12/2022 – insideBIGDATA

Welcome to insideBIGDATA's Heard on the Street round-up column! In this regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI and deep learning. Enjoy!

Eliminating off-label AI. Commentary by Triveni Gandhi, Responsible AI Lead at Dataiku

The healthcare industry is well known for off-label drug use: a drug approved for heart concerns may later be prescribed to improve mental health outcomes even though it was never formally reviewed for that purpose. Off-label use proliferates for many reasons: perhaps there are no suitable approved drugs for a condition, or other approved drugs haven't worked. Surprisingly to many, this happens all the time. In AI, many practitioners have taken a similar approach, and it's a grave mistake. Off-label AI is when practitioners take a model that succeeded in one situation and re-use it for others. For example, in the legal field, judges have used AI-informed sentencing guidelines that turned out to be heavily biased against people of color. The model used was actually taken from a different application intended to identify potential criminal re-offenders and offer support to minimize recidivism. This copy-paste approach embodies the perils of off-label AI even with the best intentions, and it must be eliminated to build trust in the field.

How MLOps can be something of a silver bullet in the era of digital transformation and complex data problems if used strategically. Commentary by Egido Terra, Senior Data Product Manager, Talend

As data volume and complexity continue to grow, ML is gaining importance as a way to ensure data health. The value of mature data management is already immeasurable. However, many professionals fail to understand the requirements of successful automation. In order to unleash the full potential of ML, MLOps must be leveraged to solve complex problems with highly specific, tailored solutions. MLOps, the discipline of deploying and monitoring machine learning models, can be something of a silver bullet in the era of digital transformation and complex data problems if used strategically. Automation is a must when it comes to properly managing data and ML; developing models won't be sufficient unless MLOps is used to quickly identify problems, optimize operations, find issues in the data, and allow smooth and successful execution of ML applications. The alternative is hard-to-manage ad hoc deployments and longer release cycles, where time-consuming human intervention and errors are all too common. The benefits of issue-specific ML applications for data health are endless. A dedicated investment in MLOps to ensure your automation priorities are well structured will pay off in the short and long term. As a result, harmful data will be kept out of applications, and solutions will arrive faster and with greater impact.
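To make the monitoring side of this concrete, here is a minimal sketch of the kind of scheduled check an MLOps pipeline might run: it compares recent prediction scores against a training-time baseline and raises an alert on drift. The threshold, feature, and synthetic data are illustrative assumptions, not anything described in the commentary above.

```python
# Minimal sketch of an automated model-monitoring check of the kind an MLOps
# job might run on a schedule. Thresholds and data are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

def drift_alert(training_scores: np.ndarray, live_scores: np.ndarray,
                p_threshold: float = 0.01) -> bool:
    """Flag the model for review if live prediction scores have drifted
    away from the distribution seen during training."""
    statistic, p_value = ks_2samp(training_scores, live_scores)
    return p_value < p_threshold

# Example: compare last week's production scores against the training baseline.
rng = np.random.default_rng(0)
baseline = rng.normal(0.40, 0.1, size=5_000)  # stand-in for training-time scores
recent = rng.normal(0.55, 0.1, size=5_000)    # stand-in for production scores
if drift_alert(baseline, recent):
    print("Score distribution drifted; trigger retraining or human review.")
```

In practice a check like this would feed the alerting and release tooling the commentary describes, rather than printing to the console.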

How To Level Up Data Storage In The Growing Datasphere. Commentary by Jeff Fochtman, senior vice president of marketing, Seagate Technology

The global datasphere, that is, all the data created, consumed, and stored in the world, doubles in size every three years. That is mind-blowing growth. How business leaders treat all this data matters. It matters because data is an immensely valuable, if overlooked, business currency. Organizations that find themselves deluged by more and more data should focus on converting this data into insights, and those insights into business value. Likely, if your organization is handling data sets of 100TB and more, you already store some of this data in the multicloud. Unfortunately, 73% of business leaders report that they can only save and use a fraction of their data because of growing costs associated with data storage and movement. What can you do about it today? Learn from companies that win at business by taking five practical steps: 1) They are far more likely to consistently use predictive third-party software tools that help anticipate and measure the costs of cloud resources for every deployment decision. Do that, every time. 2) Make sure to evaluate deployment criteria (performance, API, etc.) prior to deploying applications. 3) Monitor those characteristics once applications are up and running. 4) Invest in tools in addition to training. 5) Automate security and protection. What can you do about it in the near future? Some storage companies offer vendor-agnostic, frictionless data services with transparent, predictable pricing and no egress or API fees. To reclaim control over your data, look for those solutions.

New bill brings US closer to sustainability goals; IoT will help us cross the finish line. Commentary by Syam Madanapalli, Director, IoT Solutions at NTT DATA Services

As the US pushes forward toward its sustainability goals with recent legislation that provides the most climate funding the country has ever seen, cleantech is at the forefront of our economy. Internet of Things (IoT) technology has the potential to play a key role in this sector through the reduction of carbon emissions and the adoption of sustainable practices, with far-reaching positive impacts on both business operations and the environment. IoT and digital twin technologies allow for the connection of complex ecosystems, providing real-time data from the wide variety of meters, sensors, systems, devices, and more that an organization might use to measure carbon emissions, giving more insight into its carbon footprint than ever before. Once that IoT data is connected to digital twins in the cloud, advanced analytics can be used to identify and predict issues along the value chain and optimize operations. This will be an area of growth as leaders continue to look for ways to improve operations and reduce environmental impact.

Leverage existing AI/ML capabilities right now. Commentary by Don Kaye, CCO, Exasol

In today's data-driven world, there is a definitive need for organizations to use artificial intelligence (AI) and machine learning (ML) to move beyond simple reports and dashboards describing what has happened to predicting with confidence what will happen. Forward-thinking companies are embracing AI and ML in an effort to develop thorough data strategies that link to their overall business objectives. Business processes today are not instantaneous, but business leaders expect data-driven outcomes to be. This often leaves decision-makers and their teams in a pinch, especially as data consumerization continues to increase, and fast. This is where artificial intelligence and machine learning play an integral role. While these capabilities are often integrated within an organization's current technology stack, they are not being leveraged to their fullest potential. Companies must use their existing AI/ML capabilities to improve access flows to data and gain commonality across various points of view at scale, all within a fraction of the time it takes to sift through the typical data sets analysts are tasked with.

It's Time to Put Credit Scores in Context. Commentary by Steve Lappenbusch, Head of Privacy at People Data Labs

Last week, The Wall Street Journal reported that a coding error at consumer credit reporting agency Equifax led the credit giant to report millions of erroneous credit scores to lenders across a three-week period in April and May of this year, a major challenge for lenders and credit seekers. While a credit score can shed some essential light on the subject's credit history and past interactions with lenders and payees, enriching a record with alternative data like work history, past addresses, and social media profiles can substantially expand a lender's understanding of who the customer is and how legitimate their application may be. A history of social media profiles, email and phone contacts with a long history of use, and a valid work history all help to quickly weed out synthetic identities and other invalid applications, freeing up time to service legitimate applicants. Credit scores aren't going anywhere. They'll remain a critical tool for lenders looking to understand ability to repay, and the only permissible tool for determining creditworthiness. However, it's easy to imagine a world in which alternative data sources diminish the impact of inevitable errors like the one reported today. By providing a backstop of additional context and a new layer of identity on top of traditional credit bureau records, lenders no longer need to be tied to a single source of truth.

The value of embedded analytics driving product-led growth. Commentary by Sumeet Arora, Chief Development Officer, ThoughtSpot

Nowadays, our whole world is made up of data, and that data presents an opportunity to create personalized, actionable insights that drive the business forward. But far too often, we see products that fail to equip users with data analysis within their natural workflow, without the need to toggle to another application. Today, in-app data exploration, or embedded analytics, is table stakes for product developers, as it has become the new frontier for creating engaging experiences that keep users coming back for more. For example, an app like Fitbit doesn't just count steps and read heart rates. It gives users an overview of their health and recommends actions to keep moving, get better sleep, and improve overall well-being. That's what employees and customers want to see in business applications. Insights should not be limited to business intelligence dashboards; they should be seamlessly integrated everywhere. Whether users are creating an HR app for recruiting or a supply chain app for managing suppliers, embedded analytics can provide real-time intelligence in all these applications by putting the end user in the driver's seat and giving them insights that are flexible and personalized.

What's the deal with Docker (containers) and Kleenex (tissues)? Commentary by Don Boxley, CEO and Co-Founder, DH2i

I suppose you could say that Docker is to containers what Kleenex is to tissues. However, the truth is that Docker was just the first to really bring containers into the mainstream. And while Docker did it in a big way, there are other containerization alternatives out there. That's a good thing, because organizations are starting to adopt containers in production at breakneck speed in this era of big data and digital transformation. In doing so, organizations are enjoying major increases in portability, scalability and speed of deployment, all checkboxes for organizations looking to embrace a cloud-based future. I am always excited to learn about how it is going for customers leveraging containers in production. Many have even arrived at the point of deploying their most business-critical SQL Server workloads in containers. The sky's the limit for deployments of this sort, but only if you do it thoughtfully. Without a doubt, containerization adds another layer of complexity to the high availability (HA) equation, and you certainly can't just jump into it with container orchestration alone. What is necessary is approaching HA in a containerized SQL Server environment with a solution that enables fully automatic failover of SQL Server Availability Groups in Kubernetes, enabling true, bulletproof protection for any containerized SQL Server environment.

Why data collection must be leveraged to personalize customer experiences beyond retail. Commentary by Stanley Huang, co-founder and CTO, Moxo

Today, as more customers prefer to manage their business online, these interactions can feel impersonal. It's common for the retail industry to leverage data collection and spending algorithms in order to create customer profiles and predict the next best offer, as retail is a highly customer-centric business with buyers requiring on-demand service. Beyond retail, high-touch industries such as legal and financial services are beginning to use data collection to serve clients more effectively. By analyzing data collected from previous touchpoints, companies can create a holistic 360-degree view of each customer and gain a better understanding of how to interact with them based on their individual preferences. Data collected from a user's past is the most relevant source for contextualizing client interactions and enables businesses to personalize the entire customer journey going forward. This historical data from client interactions allows businesses to identify client pain points in the service process and make improvements in real time. In addition, automating processes can enable businesses to analyze collected data more quickly and reduce friction in the customer service process.

It's Time to Tap Into the Growing Role of AI, Data and Machine Learning in Our Supply Chain. Commentary by Don Burke, CIO at Transflo

Machine learning, AI, contextual search, natural language processing, neural networks and other evolving technologies allow for enhanced operational agility within the supply chain like never before. These technologies enable adaptable digital workflows that drive speed, efficiency and cost savings. Digitizing and automating workflows allows organizations to scale, grow revenue, adapt faster and deliver a superior customer experience. For example, transportation generates volumes of documents required to complete financial transactions among supply chain partners. The ability to apply deep learning models to classify and extract data from complex, unstructured documents (i.e., emails, PDFs, handwritten memos, etc.) not only drives efficient processing but unlocks actionable data, accelerating business processes and decision-making. This equates to real economic impact, whether through customer service excellence, speed of invoicing or significant cost savings. Above and beyond automating routine tasks and freeing up human resources for higher-value opportunities, data becomes a valuable, harvestable asset. Using these technologies to extract, process and merge data connects the front and back office, allowing hidden analytical insights and unseen patterns to be discovered and improving organizational decision-making through an understanding of customer behaviors, the profitability of products and facilities, market awareness and more. In transportation, the sheer number of documents such as BOLs, PODs, rate confirmations and accessorials holds untapped insight that can be applied to reducing friction and complexity within the supply chain.

Bad Actors Still Want Your Data, But Are Changing Tactics of How to Get it. Commentary by Aaron Cockerill, chief strategy officer, Lookout

Bad actors are zeroing in on endpoints, apps and data that sit outside the original corporate perimeter. In fact, there's a plethora of threat intelligence reports about how bad actors have moved from attacking infrastructure to attacking endpoints, apps and data outside that perimeter. For example, many companies have had to move apps and servers that were behind a firewall into the cloud (IaaS environments) and run them so they are internet accessible. Many of these apps and servers weren't designed to be internet accessible, and moving them outside the perimeter introduces vulnerabilities that weren't there when they were inside the corporate perimeter. Many server attacks these days leverage RDP, something that would not have been possible had the servers been behind a corporate perimeter. The same is true of endpoints, although those attacks tend to rely less on gaining access to RDP and more frequently involve phishing and social engineering to gain access and move laterally to critical infrastructure and sensitive data. So the attack surface has changed: instead of looking for vulnerabilities inside the organization's perimeter, attackers now look for vulnerabilities in servers in the cloud and on endpoints that are no longer protected by the perimeter. What has not changed is what the bad actors are seeking, and that is very much focused on data. We hear a lot about ransomware, but what is not yet well understood, in the broader sense, is that ransomware typically succeeds only when the bad actor has considerable leverage, and the leverage they obtain is always through the theft of data and then the threat of exposing that data, what we call double extortion.

What is Vertical Intelligence? Commentary by Daren Trousdell, Chairman and CEO of NowVertical Group

Data transformation begins from the inside out. Businesses' greatest challenge is staying hyper-competitive in an overly complicated world. Vertical Intelligence empowers enterprises to uplift existing tech stacks and staff with platform-agnostic solutions that can scale the modern enterprise. Vertical Intelligence is the key to unlocking potential from within and bringing transformation to the forefront. The idea of a purpose-built, top-to-bottom automation solution is antiquated. Yet the future is malleable: we see it as a flexible network of technologies that are platform-agnostic and prioritized for industry-specific use cases and needs. Most AI solutions currently available either require massive multi-year investment or require companies to mold their decision-making automation around a prefabricated solution that was either not built for their business or forces them to conform to specific constructs. We believe that technology should be made to serve a customer, not the other way around, and that's why we've brought together the best industry-specific technologies and thought leaders to shape the experience and prioritize the most critical use cases.

Digital acceleration is just a dream without open solution development acceleration platforms. Commentary by Petteri Vainikka, Cognite

We are in the era of the new, open platform architecture model. Businesses now stand a greater chance of truly transforming by thinking bigger and more broadly across their operations. Businesses that cling to the past and maintain fixed, impenetrable silos of data are doomed to stay in the past. By contrast, businesses that place their bets on open, truly accessible, and data product-centric digital platform architectures will be the ones experiencing the most rapid and rewarding digital acceleration. Because there is no single data product management platform that can meet all the various needs of a data-rich, complex industrial enterprise, open, domain-specialized data platforms are rising to the occasion. Such open platforms meet operational business needs by offering specialized industrial data operations technology packaged with proven, composable reference applications to boost the ROI of data in a faster, more predictable way. With greater openness, domain specialization, and pre-built interoperability at the core, businesses can boost their data platform capabilities and simultaneously realize new data-rich solutions in less than three months. To stay in the lead in the digital transformation race, businesses must think about operationalizing and scaling hundreds of use cases rather than one-offs or single-case proofs of concept. They need open, composable platform architectures that tear down data silos while simultaneously delivering high-value business solutions with instant business impact. This will only happen with the right mix of specialized open data platform services orchestrated to work together like a symphony. Digital acceleration is just a dream without open solution development acceleration platforms.

Machine learning at the edge: The AI chip company challenging Nvidia and Qualcomm – VentureBeat

Today's demand for real-time data analytics at the edge marks the dawn of a new era in machine learning (ML): edge intelligence. That need for time-sensitive data is, in turn, fueling a massive AI chip market, as companies look to provide ML models at the edge with lower latency and greater power efficiency.

Conventional edge ML platforms consume a lot of power, limiting the operational efficiency of smart devices, which live on the edge. Those devices are also hardware-centric, limiting their computational capability and making them incapable of handling varying AI workloads. They leverage power-inefficient GPU- or CPU-based architectures and are not optimized for embedded edge applications that have latency requirements.

Even though industry behemoths like Nvidia and Qualcomm offer a wide range of solutions, they mostly use a combination of GPU- or data center-based architectures and scale them to the embedded edge as opposed to creating a purpose-built solution from scratch. Also, most of these solutions are set up for larger customers, making them extremely expensive for smaller companies.

In essence, the $1 trillion global embedded-edge market is reliant on legacy technology that limits the pace of innovation.

ML company Sima AI seeks to address these shortcomings with its machine learning system-on-chip (MLSoC) platform, which enables ML deployment and scaling at the edge. The California-based company, founded in 2018, announced today that it has begun shipping the MLSoC platform to customers, with an initial focus on helping solve computer vision challenges in smart vision, robotics, Industry 4.0, drones, autonomous vehicles, healthcare and the government sector.

The platform uses a software-hardware codesign approach that emphasizes software capabilities to create edge-ML solutions that consume minimal power and can handle varying ML workloads.

Built on 16nm technology, the MLSoC's processing system consists of computer vision processors for image pre- and post-processing, coupled with dedicated ML acceleration and high-performance application processors. Surrounding the real-time intelligent video processing are memory interfaces, communication interfaces, and system management, all connected via a network-on-chip (NoC). The MLSoC features low operating power and high ML processing capacity, making it ideal as a standalone edge-based system controller or as an ML-offload accelerator for processors, ASICs and other devices.

The software-first approach includes carefully defined intermediate representations (including the TVM Relay IR), along with novel compiler-optimization techniques. This software architecture enables Sima AI to support a wide range of frameworks (e.g., TensorFlow, PyTorch, ONNX) and compile more than 120 networks.

Many ML startups are focused on building only pure ML accelerators rather than an SoC with a computer vision processor, application processors, codecs, and external memory interfaces, which allow the MLSoC to be used as a standalone solution that doesn't need to connect to a host processor. Other solutions usually lack network flexibility, performance per watt, and push-button efficiency, all of which are required to make ML effortless for the embedded edge.

Sima AI's MLSoC platform differs from other existing solutions in that it addresses all of these areas at once with its software-first approach.

"The MLSoC platform is flexible enough to address any computer vision application, using any framework, model, network, and sensor with any resolution. Our ML compiler leverages the open-source Tensor Virtual Machine (TVM) framework as the front-end, and thus supports the industry's widest range of ML models and ML frameworks for computer vision," Krishna Rangasayee, CEO and founder of Sima AI, told VentureBeat in an email interview.
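For a rough sense of what a TVM-based front end looks like, the sketch below imports an ONNX model through TVM's open-source Relay importer and compiles it for a generic CPU target. This is the public TVM API, not Sima AI's proprietary MLSoC backend, and the model file name and input shape are assumptions for illustration.

```python
# Generic open-source TVM flow: import an ONNX model through the Relay
# front-end and compile it. Illustrative only; not Sima AI's backend.
import onnx
import tvm
from tvm import relay

onnx_model = onnx.load("model.onnx")              # hypothetical model file
shape_dict = {"input": (1, 3, 224, 224)}          # assumed input name and shape
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

with tvm.transform.PassContext(opt_level=3):
    # CPU target chosen purely for illustration; a vendor backend would
    # substitute its own target and code generator here.
    lib = relay.build(mod, target="llvm", params=params)

lib.export_library("compiled_model.so")           # deployable artifact
```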

From a performance point of view, Sima AI claims its MLSoC platform delivers 10x better performance than alternatives in key figures of merit such as FPS/W and latency.

The company's hardware architecture optimizes data movement and maximizes hardware performance by precisely scheduling all computation and data movement ahead of time, including internal and external memory accesses, to minimize wait times.

Sima AI offers APIs to generate highly optimized MLSoC code blocks that are automatically scheduled on the heterogeneous compute subsystems. The company has created a suite of specialized and generalized optimization and scheduling algorithms for the back-end compiler that automatically convert the ML network into highly optimized assembly code that runs on the machine learning accelerator (MLA) block.

For Rangasayee, the next phase of Sima AI's growth is focused on revenue and scaling its engineering and business teams globally. As things stand, Sima AI has raised $150 million in funding from top-tier VCs such as Fidelity and Dell Technologies Capital. With the goal of transforming the embedded-edge market, the company has also announced partnerships with key industry players like TSMC, Synopsys, Arm, Allegro, GUC and Arteris.

4 Ways AI, Analytics and Machine Learning Are Improving Customer Service and Support – CMSWire

Many of today's marketing processes are powered by AI and machine learning. Discover how these technologies are shaping the future of customer experience.

By using artificial intelligence (AI) and machine learning (ML) along with analytics, brands are in a much better position to elevate customer service experiences at every touchpoint and create positive emotional connections.

This article will look at the ways that AI and ML are used by brands to improve customer service and support.

AI improves the customer service journey in several ways, including tracking conversations in real-time, providing feedback to service agents and using intelligence to monitor language, speech patterns and psychographic profiles to predict future customer needs.

This functionality can also drastically enhance the effectiveness of customer relationship management (CRM) and customer data platforms (CDP).

CRM platforms, including C2CRM, Salesforce Einstein and Zoho, have integrated AI into their software to provide real-time decisioning, predictive analysis and conversational assistants, all of which help brands more fully understand and engage their customers.

CDPs, such as Amperity, BlueConic, Adobe's Real-Time CDP and ActionIQ, have also integrated AI into more traditional capabilities to unify customer data and provide real-time functionality and decisioning. This technology enables brands to gain a deeper understanding of what their customers want, how they feel and what they are most likely to do next.

Artificial intelligence and machine learning are now used for gathering and analyzing social, historical and behavioral data, which allows brands to gain a much more complete understanding of their customers.

Because AI continuously learns and improves from the data it analyzes, it can anticipate customer behavior. As such, AI- and ML-driven chatbots can provide customers with a more personalized, informed conversation that can easily answer their questions and if not, immediately route them to a live customer service agent.

Bill Schwaab, VP of sales, North America for boost.ai, told CMSWire that ML is used in combination with AI and a number of other deep learning models to support today's virtual customer service agents.

"ML on its own may not be sufficient to gain a total understanding of customer requests, but it's useful in classifying basic user intent," said Schwaab, who believes that the brightest applications of these technologies in customer service find the balance between AI and human intervention.

"Virtual agents are becoming the first line in customer experience in addition to human agents," he explained. Because these virtual agents can resolve service queries quickly and are available outside of normal service hours, human agents can focus on more complex or valuable customer interactions. Round-the-clock availability also provides brands with additional time to capture customer input and inform better decision-making.

Swapnil Jain, CEO and co-founder of Observe.AI, said that today's customer service agents no longer have to spend as much time on simpler, transactional interactions, as digital and self-serve options have reduced the volume of those tasks.

"Instead, agents must excel at higher-value, complex behaviors that meaningfully impact CX and revenue," said Jain, adding that brands are harnessing AI and ML to up-level agent skills, which include empathy and active listening. This, in turn, "drives the behavioral changes needed to improve CX performance at speed and scale."

Because customer conversations contain a goldmine of insights for improving agent performance, AI-powered conversation intelligence can help brands with everything from service and support to sales and retention, said Jain. Using advanced interaction analytics, brands can benefit from pinpointing positive and negative CX drivers, advanced tonality-based sentiment and intent analysis and evidence-based agent coaching.

Predictive analytics is the process of using statistics, data mining and modeling to make predictions.

AI can analyze large amounts of data in a very short time, and along with predictive analytics, it can produce real-time, actionable insights that can guide interactions between a customer and a brand. This practice is also referred to as predictive engagement and uses AI to inform a brand when and how to interact with each customer.
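As a simple illustration of the idea, the sketch below trains a toy propensity model whose score could drive when and how a brand reaches out to a customer. The feature names and data are invented for illustration and are not drawn from any vendor mentioned in this article.

```python
# Minimal sketch of the kind of model behind "predictive engagement":
# score how likely a customer is to respond to outreach.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# Invented features: recent visits, days since last purchase, tickets opened.
X = rng.normal(size=(1_000, 3))
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1_000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Probability a given customer engages; a real system would act on this score,
# for example by choosing the timing and channel of the next interaction.
print(model.predict_proba(X_test[:1])[0, 1])
```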

Don Kaye, CCO of Exasol, spoke with CMSWire about the ways brands are using predictive analytics as part of their data strategies that link to their overall business objectives.

"We've seen first-hand how businesses use predictive analytics to better inform their organizations' decision-making processes to drive powerful customer experiences that result in brand loyalty and earn consumer trust," said Kaye.

As an example, he told CMSWire that banks use supervised learning, such as regression and classification, to calculate the risk of loan defaults, and that IT departments use it to detect spam.

"With retailers, we've seen them seeking the benefits of deep learning or reinforcement learning, which enables a new level of end-to-end automation, where models become more adaptable and use larger data volumes for increased accuracy," he said.

According to Kaye, businesses with advanced analytics also tend to have agile, open data architectures that promote open access to data, also known as data democratization.

Kaye is a big advocate for AI and ML and believes that the technologies will continue to grow and become routine across all verticals, with the democratization of analytics enabling data professionals to focus on more complex scenarios and making customer experience personalization the norm.

AI-driven sentiment analysis enables brands to obtain actionable insights that facilitate a better understanding of the emotions customers feel when they encounter pain points or friction along the customer journey, as well as how they feel when they have positive, emotionally satisfying experiences.

Julien Salinas, founder and CTO at NLP Cloud, told CMSWire that AI is often used to perform sentiment analysis to automatically detect whether an incoming customer support request is urgent or not. "If the detected sentiment is negative, the ticket is more likely to be addressed quickly by the support team."

Sentiment analysis can automatically detect emotions and opinions by classifying customer text as positive, negative or neutral through the use of AI, natural language processing (NLP) and ML.
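Here is a minimal sketch of that kind of classification using the open-source Hugging Face transformers pipeline with its default English sentiment model. The example tickets are invented, and a production system would use a model tuned to its own domain and its own routing rules.

```python
# Quick sketch of sentiment classification on support messages using the
# Hugging Face `transformers` pipeline. Example texts are invented.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

tickets = [
    "My order arrived two weeks late and nobody responded to my emails.",
    "Thanks for the quick fix, the new release works perfectly.",
]

for ticket, result in zip(tickets, classifier(tickets)):
    # e.g. route NEGATIVE tickets to the front of the support queue
    print(result["label"], round(result["score"], 3), "-", ticket)
```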

Pieter Buteneers, director of engineering in ML and AI at Sinch, said that NLP enables applications to understand, write and speak languages in a manner that is similar to humans.

"It also facilitates a deeper understanding of customer sentiment, he explained. When NLP is incorporated into chatbots and voice bots it permits them to have seemingly human-like language proficiency and adjust their tones during conversations.

"When used in conjunction with chatbots, NLP can facilitate human-like conversations based on sentiment. So if a customer is upset, for example, the bot can adjust its tone to defuse the situation while moving along the conversation," said Buteneers. "This would be an intuitive shift for a human, but bots that aren't equipped with NLP sentiment analysis could miss the subtle cues of human sentiment in the conversation, and risk damaging the customer relationship."

Buteneers added that breakthroughs in NLP are making an enormous difference in how AI understands input from humans. "For example, NLP can be used to perform textual sentiment analysis, which can decipher the polarity of sentiments in text."

Similar to sentiment analysis, AI is also useful for detecting intent. Salinas said that it's sometimes difficult to get a quick grasp of a user request, especially when the user's message is very long. In that case, AI can automatically extract the main idea from the message so the support agent can act more quickly.

While AI and ML have continued to evolve, and brands have found many ways to use these technologies to improve the customer service experience, the challenges of AI and ML can still be daunting.

Kaye explained that AI models need good data to deliver accurate results, so brands must also focus on quality and governance.

"In-memory analytics databases will become the driver of creation, storage and loading of features in ML training tools, given their analysis capabilities and ability to scale and deliver optimal time to insight," said Kaye. He added that these tools will benefit from closer integration with a company's data stores, which will enable them to run more effectively on larger data volumes and guarantee greater system scalability.

Iliya Rybchin, partner at Elixirr Consulting, told CMSWire that thanks to ML and the vast amount of data bots are collecting, they are getting better and will continue to improve. The challenge is that they will improve in proportion to the data they receive.

"Therefore, if an under-represented minority with a unique dialect is not utilizing a particular service as much as other consumers, the ML will start to discount the aspects of that dialect as outliers vs. common language," said Rybchin.

He explained that the issue is not caused by the technology or programming but rather by a consumer-facing product that does not provide equal access to the bot. "The solution is more about bringing more consumers to the product vs. changing how the product is built or designed."

AI and ML have been incorporated into the latest generations of CDP and CRM platforms, and conversational AI-driven bots are assisting service agents and enhancing and improving the customer service experience. Predictive analytics and sentiment analysis, meanwhile, are enabling brands to obtain actionable insights that guide the subsequent interactions between a customer and a brand.

What’s the Difference Between Vertical Farming and Machine Learning? – Electronic Design

Sometimes inspiration comes in the oddest ways. I like to watch CBS News Sunday Morning because of the variety of stories they air. Recently, they did one on Vertical Farming - A New Form of Agriculture (see video below).

CBS News Sunday Morning recently did a piece on vertical farming that spawned this article.

For those who didn't watch the video, vertical farming is essentially a method of indoor farming using hydroponics. Hydroponics isn't new; it's a subset of hydroculture where crops are grown without soil. Instead, the plants grow in mineral-enriched water. This can be done in conjunction with sunlight, but typically an artificial light source is used.

The approach is useful in areas that dont provide enough light, or at times or in locations where the temperature or conditions outside would not be conducive for growing plants.

Vertical farming is hydroponics taken to the extreme, with stacks upon stacks of trays with plants under an array of lights. These days, the lights typically are LEDs because of their efficiency and the ability to generate the type of light most useful for plant growth. Automation can be used to streamline planting, support, and harvesting.

A building can house a vertical farm anywhere in the world, including in the middle of a city. Though lots of water is required, it's recycled, making the approach more water-efficient than other forms of agriculture.

Like many technologies, the opportunities are great if you ignore the details. That's where my usual contrary nature came into play, though, since I followed up my initial interest by looking for limitations or problems related to vertical farming. Of course, I found quite a few, and then noticed that many of the general issues applied to another topic I cover a lot: machine learning/artificial intelligence (ML/AI).

If you made it this far, you know how I'm looking at the difference between machine learning and vertical farming. They obviously have no relationship in terms of their technology and implementation, but they do have much in common when one looks at the potential problems and solutions related to those technologies.

As electronic system designers and developers, we constantly deal with potential solutions and their tradeoffs. Machine learning is one of those generic categories that has proven useful in many instances. However, one must be wary of the issues underlying those flashy approaches.

Vertical farming, like machine learning, is something one can dabble in. To be successful, though, it helps to have an expert, or at least someone who can quickly gain that experience. This tends to be the case with new and renewed technologies in general. I suspect significantly more ML experts are available these days for a number of reasons, such as the lower cost of hardware, but the demand remains high.

Vertical farming uses a good bit of computer automation. The choice of plants, fertilizers, and other aspects of hydroponic farming are critical to the success of the farm. Then there's the maintenance aspect. ML-based solutions are one way of reducing the expertise or time required by the staff to support the system.

ML programmers and developers also have access to easier-to-use tools, reducing the amount of expertise and training required to take advantage of ML solutions. These tools often incorporate their own ML models, which are different from those being generated.

Hydroponics works well for many plants, but unfortunately, for many others that's not the case. Crops like microgreens work well, for example, but a cherry or apple tree often struggles with this treatment.

ML suffers from the same problem in that it's not applicable to all computational chores. But, unlike vertical farms, ML applications and solutions are more diverse. The challenge for developers comes down to understanding where ML is and isn't applicable. Trying to force-fit a machine-learning model to handle a particular problem can result in a solution that provides poor results at high cost.

Vertical farms require power for lighting and to move liquid. ML applications tend to do lots of computation and thus require a good deal of power compared to other computational requirements. One big difference between the two is that ML solutions are scalable and hardware tradeoffs can be significant.

For example, ML hardware can deliver performance that's orders of magnitude better than software solutions while reducing power requirements. Likewise, even software-only solutions may be efficient enough to do useful work while using little power, simply because developers have made the ML models work within the limitations of their design. Vertical farms do not have this flexibility.

Large vertical farms do require a major investment, and they're not cheap to run due to their scale. The same is true for cloud-based ML solutions utilizing the latest in disaggregated cloud-computing centers. Such data centers are leveraging technologies like SmartNICs and smart storage to run ML models closer to communication and storage than was possible in the past.

The big difference with vertical farming versus ML is scalability. It's now practical for multiple ML models to be running in a smartwatch with a dozen sensors. But that doesn't compare to dealing with agriculture, which must scale with the rest of the physical-world requirements, such as the plants themselves.

Still, these days, ML does require a significant investment with respect to development and to building the experience needed to apply ML adequately. Software and hardware vendors have been working to lower both the startup and long-term development costs, which has been further helped by the plethora of free software tools and low-cost hardware that's now generally available.

Cut the power on a vertical farm and things come to a grinding halt rather quickly, although it's not like having an airplane lose power at 10,000 feet. Still, plants do need sustenance and light, though they're accustomed to changes over time. Nonetheless, responding to failures within the system is important to the system's long-term usefulness.

ML applications tend to require electricity to run, but that tends to be true of the entire system. A more subtle problem with ML applications is the source of input, which is typically sensors such as cameras, temperature sensors, etc. Determining whether the input data is accurate can be challenging; in many cases, designers simply assume that this information is accurate. Applications such as self-driving cars often use redundant and alternative inputs to provide a more robust set of inputs.

Vertical-farming technology continues to change and become more refined, but it's still maturing. The same is true for machine learning, though the comparison is like something between a penny bank and Fort Knox. There are simply more ML solutions, many of which are very mature, with millions of practical applications.

That said, ML technologies and applications are so varied, and the rate of change so large, that keeping up with what's available, let alone how things work in detail, can be overwhelming.

Vertical farming is benefiting from advances in technology, from robotics to sensors to ML. Tracking plant growth and germination and detecting pests are just a few tasks that apply across all of agriculture, including vertical farming.

As with many What's the Difference articles, the comparisons are not necessarily one-to-one, but hopefully you picked up something about ML or vertical farms that was of interest. Many issues don't map well, like the problem of pollination for vertical farms. Though the output of vertical farms will likely feed some ML developers, ML is likely to play a more important part in vertical farming, given the level of automation possible with the sensors, robots, and ML monitoring now available.

Kauricone: Machine learning tackles the mundane, making our lives easier – IT Brief New Zealand

A New Zealand startup producing its own servers is expanding into the realm of artificial intelligence, creating machine learning solutions that carry out common tasks while relieving people of repetitive, unsatisfying work. Having spotted an opportunity for the development of low-cost, high-efficiency and environmentally sustainable hardware, Kauricone has more recently pivoted in a fascinating direction: creating software that thinks about mundane problems, so we don't have to. These tasks include identifying trash for improved recycling, 'looking' at items on roads for automated safety, pest identification and, in the ultimate alleviation of a notoriously sleep-inducing task, counting sheep.

Managing director, founder and tech industry veteran Mike Milne says Kauricone products include application servers, cluster servers and internet of things servers. It was in this latter category that the notion emerged of applying machine learning at the network's edge.

"Having already developed low-cost, low-power edge hardware, we realised there was a big opportunity for the application of smart computing in some decidedly not-so-enjoyable everyday tasks," relates Milne. "After all, we had all the basic building blocks already: the hardware, the programming capability, and with good mobile network coverage, the connectivity."

Situation

Work is just another name for tasks people would rather not do themselves, or that we cannot do for ourselves. And despite living in a fabulously advanced age, there is a persistent reality of all manner of tasks which must be done every day, but which don't require a particularly high level of engagement or even intelligence.

It is these tasks for which machine learning (ML) is quite often a highly promising solution. "ML collects and analyses data by applying statistical analysis and pattern matching to learn from past experiences. Using the trained data, it provides reliable results, and people can stop doing the boring work," says Milne.

There is in fact more to it than meets the eye (so to speak) when it comes to computer image recognition. That's why 'Captcha' challenges are often little more than 'Identify all the images containing traffic lights': distinguishing objects is hard for bots. ML overcomes the challenge through the 'training' mentioned by Milne: the computer is shown thousands of images and learns which are hits and which are misses.

"Potentially, there are as many use cases as you have dull but necessary tasks in the world," Milne notes. "So far, we've tackled a few. Rocks on roads are dangerous, but monitoring thousands of kilometers of tarmac comes at a cost. Construction waste is extensive, bad for the environment and should be managed better. Sheep are plentiful and not always in the right paddock. And pests put New Zealand's biodiversity at risk."

Solution

Tackling each of these problems, Kauricone started with its own RISC IoT server hardware as the base. Running Ubuntu and programmed with Python or other open-source languages, the servers typically feature 4GB of memory and 128GB of solid-state storage; the solar-powered edge devices consume as little as 3 watts and run indefinitely on a single solar panel. "This makes for a reliable, low-cost, field-ready device," says Milne.

The Rocks on Roads project made clear the challenges of 'simple' image identification, with Kauricone eventually running a training model around the clock for eight days, gathering 35,000 iterations of rock images, which expanded to 3,000,000 identifiable traits (bear in mind, a human identifies a rock almost instantly, perhaps faster if hurled). With this training, the machine became very good at detecting rocks on roads.
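For a sense of what training an image classifier of this kind involves, here is a hedged sketch using off-the-shelf transfer learning in PyTorch. The dataset layout, model choice and hyperparameters are illustrative assumptions, not Kauricone's actual pipeline.

```python
# Hedged sketch of a "rock / no rock" image classifier via transfer learning.
# Dataset paths, model and hyperparameters are invented for illustration.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# Expects e.g. rocks_dataset/rock/*.jpg and rocks_dataset/no_rock/*.jpg
train_set = datasets.ImageFolder("rocks_dataset", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.mobilenet_v2(weights="DEFAULT")          # small enough for edge use
model.classifier[1] = nn.Linear(model.last_channel, 2)  # two classes: rock / no rock

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```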

For a new project involving construction waste, the Kauricone IoT server will maintain a vigilant watch on the types and amounts of waste going into building-site skips. Once the system is trained to identify types of waste, the resulting data will be the basis for improving waste management and recycling, or redirecting certain items for more responsible disposal.

Counting sheep isn't only a method for accelerating sleep time; it's also an essential task for farmers across New Zealand. That's not all: as an ML exercise, it anticipates the potential for smarter stock management, as does the related pest identification test case pursued by Kauricone. The ever-watchful camera and supporting hardware manage several tasks: identifying individual animals, numbering them, and also monitoring grass levels, essential for ovine nourishment. Tested so far on a small flock, this application is ready for scale.

Results

Milne says the small test cases pursued by Kauricone to date are just the beginning, and he anticipates considerable potential for ML applications across all walks of life. "There is literally no end to the number of daily tasks where computer vision and ML can alleviate our workload and contribute to improved efficiency and, ultimately, a better and more sustainable planet," he notes.

The Rocks on Roads project promises improved safety with a lower 'human' overhead, reducing or eliminating the possibility of human error. Waste management is a multifaceted problem, where the employment of personnel is rendered difficult owing to simple economics (and potentially stultifying work); New Zealand's primary sector is ripe for technologically powered performance improvements, which could boost already impressive productivity through automation and improved control; and pest management can help the Department of Conservation and allied parties achieve better results using fewer resources.

"It's early days yet," says Milne, "but the results from these exploratory projects are promising. With the connectivity of ever-expanding cellular and low-power networks like Sigfox and LoRaWAN, the enabling infrastructure is increasingly available even in remote places. And purpose-built low-power hardware brings computing right to the edge. Now, it's just a matter of identifying opportunities and creating the applications."

For more information visit Kauricone's website.

5 Ways Data Scientists Can Advance Their Careers – Spiceworks News and Insights

Data and machine learning people join companies with the promise of cutting-edge ML models and technology. But often, they spend 80% of their time cleaning data or dealing with data riddled with missing values and outliers, a frequently changing schema, and massive load times. The gap between expectation and reality can be massive.

Although data scientists might initially be excited to tackle insights and advanced models, that enthusiasm quickly deflates amidst daily schema changes, tables that stop updating, and other surprises that silently break models and dashboards.

While data science applies to a range of roles, from product analytics to putting statistical models in production, one thing is usually true: data scientists and ML engineers often sit at the tail end of the data pipeline. They're data consumers, pulling data from data warehouses, S3 or other centralized sources. They analyze data to help make business decisions or use it as training input for machine learning models.

In other words, they're impacted by data quality issues but aren't often empowered to travel up the pipeline to fix them at the source. So they write a ton of defensive data preprocessing into their work or move on to a new project.

If this scenario sounds familiar, you don't have to give up or complain that the data engineering upstream is forever broken. Make like a scientist and get experimental. You're the last step in the pipe and the one putting models into production, which means you're responsible for the outcome. While this might sound terrifying or unfair, it's also a brilliant opportunity to shine and make a big difference in your team's business impact.

Here are five ways data scientists and ML analysts can get out of defense mode and ensure that, even if they didn't create the data quality issues, they prevent those issues from impacting the teams that rely on data.

Business executives hesitate to make decisions based on data alone. A KPMG report showed that 60% of companies don't feel very confident in their data, and 49% of leadership teams didn't fully support the internal data and analytics strategy.

Good data scientists and ML engineers can help by increasing data accuracy, then getting it into dashboards that help key decision-makers. In doing so, they'll have a direct positive impact. But manually checking data for quality issues is error-prone and a huge drag on your velocity. It slows you down and makes you less productive.

Using data quality testing (e.g. with dbt tests) and data observability helps to ensure you find out about quality issues before your stakeholders do, winning their trust in you (and the data) over time.
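As an illustration of what such tests encode, here is a minimal pandas sketch of the not-null, uniqueness and accepted-values checks that dbt expresses declaratively in YAML and SQL. The table and column names are invented.

```python
# Lightweight sketch of common data-quality checks; names are illustrative.
import pandas as pd

def check_orders(df: pd.DataFrame) -> list[str]:
    failures = []
    if df["order_id"].isna().any():
        failures.append("order_id contains nulls")
    if df["order_id"].duplicated().any():
        failures.append("order_id is not unique")
    if not df["status"].isin({"placed", "shipped", "cancelled"}).all():
        failures.append("status has unexpected values")
    return failures

orders = pd.DataFrame({
    "order_id": [1, 2, 2, None],
    "status": ["placed", "shipped", "refunded", "placed"],
})
for failure in check_orders(orders):
    print("FAILED:", failure)  # surface these before stakeholders hit them
```

In practice these checks would run in CI or on a schedule and feed whatever alerting your team already uses.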

Data quality problems can easily lead to an annoying blame game between data science, data engineering, and software engineering. Who broke the data? And who knew? And who is going to fix it?

But when bad data goes out into the world, it's everyone's fault. Your stakeholders want the data to work so that the business can move forward with an accurate picture.

Good data scientists and ML engineers build accountability for all data pipeline steps with service level agreements (SLAs). SLAs define data quality in quantifiable terms and assign responders who should spring into action to fix problems. SLAs help avoid the blame game entirely.

Trust is fragile, and it erodes quickly when your stakeholders catch mistakes and start assigning blame. But what about when they don't catch quality issues? Then the model is poor, or bad decisions are made. In either case, the business suffers.

For example, what if you have a single entity logged as both Dallas-Fort Worth and DFW in a database? When you test a new feature, everyone in Dallas-Fort Worth is shown variation A and everyone in DFW is shown variation B. No one catches the discrepancy. You can't draw conclusions about users in the Dallas-Fort Worth area: your test has been thrown off, and the groups haven't been properly randomized.
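A rough sketch of the kind of normalization that prevents this: collapse duplicate spellings of an entity to one canonical value before randomizing. The mapping and column names below are hypothetical.

```python
import pandas as pd

# Hypothetical mapping that collapses duplicate spellings of one metro area
CANONICAL_LOCATIONS = {
    "Dallas-Fort Worth": "DFW",
    "Dallas Fort-Worth": "DFW",
    "DFW": "DFW",
}

users = pd.DataFrame({
    "user_id": [1, 2, 3],
    "location": ["Dallas-Fort Worth", "DFW", "Dallas Fort-Worth"],
})

# Normalize before assigning A/B variations so one metro area is not split across groups
users["location"] = users["location"].map(CANONICAL_LOCATIONS).fillna(users["location"])
print(users["location"].unique())  # -> ['DFW']
```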

Clear the path for better experimentation and analysis through a foundation of higher quality data. By using your expertise to boost quality, your data will become more reliable, and your business teams can run meaningful tests. The team can focus on what to test next instead of doubting the results of the tests.

Confidence in the data starts with you; if you don't have a handle on high-quality and reliable data, you'll carry that burden into your interactions with the product and your colleagues.

So stake your claim as the point-person for data quality and data ownership. You can have input into defining quality and delegating responsibility for fixing different issues. Remove friction between data science and engineering.

If you can lead the charge to define and boost data quality, you'll impact almost every other team within your organization. Your teammates will appreciate the work you do to reduce org-wide headaches.

Incomplete or unreliable data can lead to terabytes of wasted storage. That data lives in your warehouse, getting included in queries that incur compute costs. Low-quality data can be a major drag on your infrastructure bill as it gets filtered out time and again.

Identifying low-quality data is one way to immediately create value for your organization, especially for pipelines that see heavy traffic for product analytics and machine learning. Re-collect, reprocess, or impute and clean existing values to reduce storage and compute costs.

Keep track of the tables and data you clean up, and the number of queries run on those tables. It's worth showing your team how many queries are no longer running on junk data and how many gigabytes of storage are freed up for better things.
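For illustration, here is a small pandas sketch of the clean-up-and-measure loop described above, using a made-up events table; the point is simply to quantify the before-and-after footprint of the data you clean.

```python
import pandas as pd

# Made-up events table with junk rows (missing keys) and missing values
events = pd.DataFrame({
    "user_id": [1, 2, None, 4, None, 6],
    "value": [10.0, None, 3.5, 8.2, 1.1, None],
})

bytes_before = events.memory_usage(deep=True).sum()

# Drop rows without a key, impute the rest, and measure the footprint again
cleaned = events.dropna(subset=["user_id"]).copy()
cleaned["value"] = cleaned["value"].fillna(cleaned["value"].median())
bytes_after = cleaned.memory_usage(deep=True).sum()

print(f"rows kept: {len(cleaned)}/{len(events)}, bytes: {bytes_before} -> {bytes_after}")
```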

All data professionals, seasoned veterans and newcomers alike, should be indispensable parts of the organization. You add value by taking ownership of more reliable data. Although tools, algorithms, and analytics techniques are growing more sophisticated, the input data often is not: it's always unique and business-specific. Even the most sophisticated tools and models can't run well on erroneous data. Through the five steps above, the impact of data science can be a boon to your entire organization. Everyone wins when you improve the data your teams depend upon.

Which techniques can help data scientists and ML engineers streamline the data management process? Tell us on Facebook, Twitter, and LinkedIn. We'd love to know!

Read more:
5 Ways Data Scientists Can Advance Their Careers - Spiceworks News and Insights

Man wins competition with AI-generated artwork and some people aren’t happy – The Register

In brief: A man won an art competition with an AI-generated image, and some people aren't best pleased about it.

The image, titled Théâtre D'opéra Spatial, looks like an impressive painting of an opera scene with performers on stage and an abstract audience in the background beneath a huge moon-like window of some sort. It was created by Jason Allen, who went through hundreds of iterations of written descriptions fed into the text-to-image generator Midjourney before the software emitted the picture he wanted.

He won first prize, and $300, after he submitted a printed version of the image to the Colorado State Fair's fine art competition. His achievement, however, has raised eyebrows and divided opinions.

"I knew this would be controversial," Allen said in the Midjourney Discord server on Tuesday, according to Vice. "How interesting is it to see how all these people on Twitter who are against AI generated art are the first ones to throw the human under the bus by discrediting the human element! Does this seem hypocritical to you guys?"

Washington Post tech reporter Drew Harwell, who covered the brouhaha, raised an interesting point: "People once saw photography as cheating, too, just pushing a button, and now we realize the best creations rely on skilled composition, judgment, and tone," he tweeted.

"Will we one day regard AI art in the same way?"

DeepMind has trained virtual agents to play football (the soccer kind) using reinforcement learning to control their motor and teamwork skills.

Football is a fine game for testing software's planning skills in a physical domain, as it requires bots to learn how to move and coordinate their computer body parts alongside others to achieve a goal. These capabilities could prove useful in the future for real robots and may be a necessary part of artificial general intelligence.

"Football is a great domain to explore this very general problem," DeepMind researchers and co-authors of a paper published in Science Robotics this week told The Register. "It requires planning at the level of skills such as tackling, dribbling or passing, but also longer-term concerns such as clearing the ball or positioning.

"Humans can do this without actively thinking at the level of high frequency motor control or individual muscle movements. We don't know how planning is best organized at such different scales, and achieving this with AI is an active open problem for research."

At first, the humanoids move their limbs in a virtual environment randomly and gradually learn to run, tackle, and score using imitation and reinforcement learning over time.

They were pitted against each other in teams of two. You can see a demonstration in the video below.

[Embedded YouTube video]

It was only a matter of time before someone went and built a viral text-to-image tool to generate pornographic images.

Stable Diffusion is taking the AI world by storm. The software, including the source code, model, and weights, has been released publicly, allowing anyone with some level of coding skill to tailor their own system to a specific use case. One developer has built and released Porn Pen to the world, with which users can choose a series of tags, like "babe" or "chubby," to generate a NSFW image.

"I think it's somewhat inevitable that this would come to exist when [OpenAI's] DALL-E did," Os Keyes, a PhD candidate at Seattle University, told TechCrunch. "But it's still depressing how both the options and defaults replicate a very heteronormative and male gaze."

It's unclear how this will affect the sex industry, and many are concerned text-to-image tools could be driven to create deepfakes of someone or pushed to produce illegal content. These systems have sometimes struggled to visualize human anatomy correctly.

People have noticed these ML models adding nipples to random parts of the body, or an extra arm or something poking out somewhere. All of this is rather creepy.

There's a mobile app that claims it can translate the meaning of a cat's meows into plain English using machine-learning algorithms.

Aptly named MeowTalk, the app analyses recordings of cat noises to predict their mood and interprets what they might be trying to say. It tells owners if their pet felines are happy, resting, or hunting, and may translate this into phrases such as "let me rest" or "hey, I'm so happy to see you," for example.

"We're trying to understand what cats are saying and give them a voice" Javier Sanchez, a founder of MeowTalk, told the New York Times. "We want to use this to help people build better and stronger relationships with their cats," he added. Code using machine learning algorithms to decode and study animal communication, however, isn't always reliable.

MeowTalk doesn't interpret the intent of purring very well, and sometimes the text translations of cat noises are very odd. When a reporter picked up her cat and it meowed, the app apparently thought the cat was telling its owner: "Hey baby, let's go somewhere private!"

Stavros Ntalampiras, a computer scientist at the University of Milan, who was called to help the MeowTalk founders, admitted that "a lot of translations are kind of creatively presented to the user," and said "it's not pure science at this stage."

Visit link:
Man wins competition with AI-generated artwork and some people aren't happy - The Register

Artificial intelligence and machine learning now integral to smart power solutions – Times of India

They help to improve efficiency and profitability for utilities.

The utilities space is rapidly transforming today, shifting from a conventional, highly regulated environment to a tech-driven market at a fast clip. Collating data and optimizing manpower is a constant struggle. The need for smarter optimization of infrastructure, and the dependency on technology, have increased monumentally since the outbreak of the pandemic. There is an urgent need to balance supply and demand, and this is where Artificial Intelligence (AI) and Machine Learning (ML) come into play. Data Science, aided by AI and ML, has been driving several positive developments in the utilities space. Digitalization can significantly increase the profitability of utilities by using smart meters for grids and digital productivity tools and by automating back-office processes; according to one study, firms can increase their profitability by 20 to 30 percent.

Digital measures rewire organizations to do better through a fundamental reboot of how work gets done.

Customer Service and AI

According to a Gartner report, AI investments by utilities most often go into customer service solutions: some 86% of the utilities studied used AI in digital marketing, call center support, and customer applications. This is testimony to investments in AI and ML that can deliver a high ROI by improving speed and efficiency, thus enhancing customer experience. Customer-facing AI is a low-risk investment, as customer enquiries are often repetitive: billing enquiries, payments, new connections, and so on. AI can deliver tangible results for the business on the customer service front.

Automatic Meters for Energy Conservation

Manual entry and billing systems are not only time-consuming but also error-prone and expensive, which is why the Automatic Meter Reading (AMR) system has made a breakthrough. AMR enables large infrastructure setups to easily collect data and analyze cost centers and opportunities for improving efficiency across the natural gas, electricity, and water sectors, among others. It offers real-time billing information for budgeting and has the advantage of being precise compared to manual entry. Additionally, it can store data at distribution points within the utility's networks, which can be easily accessed over a network using devices such as mobiles and handhelds. Energy consumption can be tracked to aid conservation and end energy theft.

Predictive Analytics Enable Smart Grid Options

By leveraging new-age technologies, utilities can benefit immensely, as these technologies help in building smart power grids. The energy sector relies heavily on a complex infrastructure that can face multiple issues as a result of poor maintenance, weather conditions, system or equipment failure, demand surges, and misallocation of resources. Overloading and congestion lead to a lot of energy being wasted. The grids produce humongous amounts of data, which help with risk mitigation when properly utilized. With the large volume of data that continuously passes over the grid, it can be challenging to collect and aggregate it, and operators could miss insights, which could lead to malfunctions or outages. With the help of ML algorithms, these insights can be extracted to keep the grids functioning smoothly, and automated data management can help maintain the data accurately. With the help of predictive analytics, operators can predict grid failures before customers are affected, creating greater customer satisfaction and mitigating financial loss.
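As a rough illustration of the predictive-analytics idea (not any utility's actual pipeline), the sketch below trains a classifier on synthetic sensor readings with a toy failure label; a real deployment would use historical grid telemetry and far richer features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for grid telemetry: load, transformer temperature, voltage deviation
rng = np.random.default_rng(42)
n = 1000
X = np.column_stack([
    rng.normal(60, 15, n),  # load (MW)
    rng.normal(70, 10, n),  # transformer temperature (deg C)
    rng.normal(0, 2, n),    # voltage deviation (%)
])
# Toy label: failures are more likely under high load and high temperature (not real physics)
y = ((X[:, 0] > 75) & (X[:, 1] > 75)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```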

Efficient and Sustainable Energy Consumption

AI and ML also allow for better allocation of energy, since consumption can be matched to demand, which saves resources and helps with load management and forecasting. AI can also deal with issues pertaining to vegetation by analyzing operational data and statistics, which helps to proactively deal with wildfires. Thus, the system can become sustainable and efficient. To overcome issues pertaining to weather-related maintenance, automation helps receive signals and prioritize the areas that need attention, saving money and cutting downtime. To achieve this, the sector is adopting ML capabilities that make automation fast and easy to access.

The construction sector is also a major beneficiary of these solutions. Building codes and architectural requirements are often humongous challenges that take a long time to meet, but some solutions help builders and developers test these applications seamlessly without any system interruptions. By integrating AI and ML into data management platforms, developers enable data-science teams to spend more time innovating and much less time on maintenance. With the rise in computational power and access to the cloud, deep learning algorithms are able to train faster while their cost is optimized. AI and ML are able to impact different aspects of business: AI can enhance the quality of human jobs by facilitating remote working, help in data collection and analysis, and provide actionable inputs. Data analytics platforms can throw light on areas of inefficiency and help providers keep costs down.

Though digital transformation might appear intimidating, its opportunities far outweigh the associated cost and risk. Gradually, all utilities will undergo digital transformation as it takes root in the industrial sectors. This AI-led transformation will improve productivity and revenue, make networks more reliable and safer, accelerate customer acquisition, and facilitate entry into new areas of business. Globally, the digital utility market is growing at a CAGR of 11.7% for the period 2019 to 2027. In 2018, the revenue generated globally by the digital utility market was US$ 141.41 Bn, and it is expected to reach US$ 381.38 Bn by 2027, according to a study by ResearchAndMarkets.com. As the sector evolves, the advantages of AI and ML will come into play and lead to smarter grids, more efficient operations, and higher customer satisfaction. The companies that are in a position to take advantage of this opportunity will be ready for the challenges that could emerge in the market.

Views expressed above are the author's own.

END OF ARTICLE

Read more from the original source:
Artificial intelligence and machine learning now integral to smart power solutions - Times of India

All You Need to Know About Support Vector Machines – Spiceworks News and Insights

A support vector machine (SVM) is defined as a machine learning algorithm that uses supervised learning models to solve complex classification, regression, and outlier detection problems by performing optimal data transformations that determine boundaries between data points based on predefined classes, labels, or outputs. This article explains the fundamentals of SVMs, their working, types, and a few real-world examples.

A support vector machine (SVM) is a machine learning algorithm that uses supervised learning models to solve complex classification, regression, and outlier detection problems by performing optimal data transformations that determine boundaries between data points based on predefined classes, labels, or outputs. SVMs are widely adopted across disciplines such as healthcare, natural language processing, signal processing applications, and speech & image recognition fields.

Technically, the primary objective of the SVM algorithm is to identify a hyperplane that distinguishably segregates the data points of different classes. The hyperplane is localized in such a manner that the largest margin separates the classes under consideration.

The support vector representation is shown in the figure below:

As seen in the above figure, the margin refers to the maximum width of the slab that runs parallel to the hyperplane without containing any data points in its interior. Such hyperplanes are easier to define for linearly separable problems; for real-life problems, however, the SVM algorithm maximizes the margin between the support vectors while tolerating incorrect classifications for small sections of data points (a soft margin).
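For readers who want the underlying math, this margin-maximization idea is usually written as the following standard textbook optimization problem; it is shown here as a sketch, not as anything specific to a particular implementation.

```latex
% Hard-margin SVM: choose the hyperplane w^T x + b = 0 with the widest margin
\min_{w,\,b} \; \tfrac{1}{2}\lVert w \rVert^{2}
\quad \text{subject to} \quad
y_i\,(w^\top x_i + b) \ge 1, \qquad i = 1, \dots, n
% The margin width equals 2 / \lVert w \rVert, so minimizing \lVert w \rVert maximizes it.
```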

SVMs are inherently designed for binary classification problems. However, with the rise in computationally intensive multiclass problems, several binary classifiers can be constructed and combined to formulate SVMs that implement such multiclass classifications through binary means.

In the mathematical context, an SVM refers to a set of ML algorithms that use kernel methods to transform data features by employing kernel functions. Kernel functions rely on the process of mapping complex datasets to higher dimensions in a manner that makes data point separation easier. The function simplifies the data boundaries for non-linear problems by adding higher dimensions to map complex data points.
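Written out, the standard (textbook) definition of a kernel is an inner product in the mapped feature space, computed without ever materializing the mapping φ itself:

```latex
% A kernel evaluates inner products in the mapped feature space without forming \varphi(x)
K(x, x') = \langle \varphi(x), \varphi(x') \rangle,
\qquad \text{e.g. the RBF kernel } K(x, x') = \exp\!\bigl(-\gamma \lVert x - x' \rVert^{2}\bigr)
```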

While additional dimensions are introduced, the data is not explicitly transformed, as doing so would be computationally taxing. This technique is usually referred to as the kernel trick, wherein the effect of a transformation into higher dimensions is achieved efficiently and inexpensively.
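A minimal sketch of the kernel trick in practice, assuming scikit-learn is available: the RBF-kernel SVM below separates two concentric circles without any explicit feature mapping.

```python
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Concentric circles: not separable by any straight line in the original 2D space
X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

# The RBF kernel implicitly works in a higher-dimensional space (the kernel trick),
# so no explicit feature mapping is ever computed
clf = SVC(kernel="rbf", gamma="scale", C=1.0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```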

The idea behind the SVM algorithm was first captured in 1963 by Vladimir N. Vapnik and Alexey Ya. Chervonenkis. Since then, SVMs have gained considerable popularity, with wide-scale applications across several areas, including protein sorting, text categorization, facial recognition, autonomous cars, robotic systems, and so on.

See More: What Is a Neural Network? Definition, Working, Types, and Applications in 2022

The working of a support vector machine can be better understood through an example. Let's assume we have red and black labels with the features denoted by x and y. We intend to have a classifier for these tags that classifies data into either the red or the black category.

Let's plot the labeled data on an x-y plane, as below:

A typical SVM separates these data points into red and black tags using the hyperplane, which is a two-dimensional line in this case. The hyperplane denotes the decision boundary line, wherein data points fall under the red or black category.

The hyperplane is the line that maximizes the margin between the two closest tags or labels (red and black). Its distance to the most immediate label is the largest, making the data classification easier.
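To make the linear case concrete, here is a small scikit-learn sketch, with made-up blob data standing in for the red and black tags, that fits a linear SVM and reads off the hyperplane's w and b:

```python
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Two linearly separable clusters standing in for the red and black tags
X, y = make_blobs(n_samples=100, centers=[[-2, -2], [2, 2]], cluster_std=0.8, random_state=0)

clf = SVC(kernel="linear", C=1.0).fit(X, y)

# The fitted decision boundary is the hyperplane w . x + b = 0
w, b = clf.coef_[0], clf.intercept_[0]
print("w =", w, "b =", b)
print("support vectors per class:", clf.n_support_)
```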

The above scenario is applicable for linearly separable data. However, for non-linear data, a simple straight line cannot separate the distinct data points.

Here's an example of a complex, non-linear dataset:

The above dataset reveals that a single hyperplane is not sufficient to separate the involved labels or tags. However, here, the vectors are visibly distinct, making segregating them easier.

For data classification, you need to add another dimension to the feature space. For the linear data discussed up to this point, the two dimensions x and y were sufficient. In this case, we add a z-dimension to better classify the data points. For convenience, let's use the equation of a circle, z = x² + y².

With the third dimension, the slice of feature space along the z-direction looks like this:

Now, with three dimensions, the hyperplane in this case runs parallel to the x-y plane at a particular value of z; let's consider it as z = 1.

The remaining data points are further mapped back to two dimensions.

The above figure reveals the boundary for data points along features x, y, and z: a circle of radius 1 that segregates the two labels via the SVM.
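A short sketch of this lifting step, assuming NumPy and scikit-learn: we generate a toy disc-and-ring dataset, add the z = x² + y² feature described above, and fit a plain linear SVM in the lifted space.

```python
import numpy as np
from sklearn.svm import SVC

# Toy non-linear data: an inner disc (class 0) surrounded by an outer ring (class 1)
rng = np.random.default_rng(0)
angles = rng.uniform(0, 2 * np.pi, 200)
radii = np.concatenate([rng.uniform(0.0, 1.0, 100), rng.uniform(2.0, 3.0, 100)])
X = np.column_stack([radii * np.cos(angles), radii * np.sin(angles)])
y = np.concatenate([np.zeros(100), np.ones(100)])

# Lift the data with the extra feature z = x^2 + y^2; the classes now separate along z
z = (X ** 2).sum(axis=1, keepdims=True)
X_lifted = np.hstack([X, z])

# A plain linear SVM finds a flat hyperplane in the lifted 3D space
clf = SVC(kernel="linear").fit(X_lifted, y)
print("training accuracy:", clf.score(X_lifted, y))
```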

Let's consider another method of visualizing data points in three dimensions for separating two tags (two different colored tennis balls in this case). Consider the balls lying on a 2D plane surface. Now, if we lift the surface upward, all the tennis balls are distributed in the air. The two differently colored balls may separate in the air at some point in this process. While this occurs, you can place the surface between the two segregated sets of balls.

In this entire process, the act of lifting the 2D surface refers to the event of mapping data into higher dimensions, which is technically referred to as kernelling, as mentioned earlier. In this way, complex data points can be separated with the help of more dimensions. The concept highlighted here is that the data points continue to get mapped into higher dimensions until a hyperplane is identified that shows a clear separation between the data points.

The figure below gives the 3D visualization of the above use case:

See More: Narrow AI vs. General AI vs. Super AI: Key Comparisons

Support vector machines are broadly classified into two types: simple or linear SVM and kernel or non-linear SVM.

A linear SVM refers to the SVM type used for classifying linearly separable data. This implies that when a dataset can be segregated into categories or classes with the help of a single straight line, it is termed a linear SVM, and the data is referred to as linearly distinct or separable. Moreover, the classifier that classifies such data is termed a linear SVM classifier.

A simple SVM is typically used to address classification and regression analysis problems.

Non-linear data that cannot be segregated into distinct categories with the help of a straight line is classified using a kernel or non-linear SVM. Here, the classifier is referred to as a non-linear classifier. The classification can be performed with a non-linear data type by adding features into higher dimensions rather than relying on 2D space. Here, the newly added features fit a hyperplane that helps easily separate classes or categories.

Kernel SVMs are typically used to handle optimization problems that have multiple variables.

See More: What is Sentiment Analysis? Definition, Tools, and Applications

SVMs rely on supervised learning methods to classify unknown data into known categories. These find applications in diverse fields.

Here, we'll look at some of the top real-world examples of SVMs:

The geo-sounding problem is one of the widespread use cases for SVMs, wherein the process is employed to track the planet's layered structure. This entails solving inversion problems, where the observations or results are used to infer the variables or parameters that produced them.

In the process, linear function and support vector algorithmic models separate the electromagnetic data. Moreover, linear programming practices are employed while developing the supervised models in this case. As the problem size is considerably small, the dimension size is inevitably tiny, which accounts for mapping the planet's structure.

Soil liquefaction is a significant concern when events such as earthquakes occur, and assessing its potential is crucial while designing any civil infrastructure. SVMs play a key role in determining the occurrence and non-occurrence of such liquefaction. Technically, SVMs handle data from two tests, SPT (Standard Penetration Test) and CPT (Cone Penetration Test), which use field measurements to assess the seismic status.

Moreover, SVMs are used to develop models that involve multiple variables, such as soil factors and liquefaction parameters, to determine the ground surface strength. It is believed that SVMs achieve an accuracy of close to 96-97% for such applications.

Protein remote homology is a field of computational biology where proteins are categorized into structural and functional parameters depending on the sequence of amino acids when sequence identification is seemingly difficult. SVMs play a key role in remote homology, with kernel functions determining the commonalities between protein sequences.

Thus, SVMs play a defining role in computational biology.

SVMs are known to solve complex mathematical problems. However, smooth SVMs are preferred for data classification purposes, wherein smoothing techniques that reduce the data outliers and make the pattern identifiable are used.

Thus, for optimization problems, smooth SVMs use algorithms such as the Newton-Armijo algorithm to handle larger datasets that conventional SVMs cannot. Smooth SVM types typically explore math properties such as strong convexity for more straightforward data classification, even with non-linear data.

SVMs classify facial structures vs. non-facial ones. The training data uses two classes of face entity (denoted by +1) and non-face entity (denoted as -1) and n*n pixels to distinguish between face and non-face structures. Further, each pixel is analyzed, and the features from each one are extracted that denote face and non-face characters. Finally, the process creates a square decision boundary around facial structures based on pixel intensity and classifies the resultant images.

Moreover, SVMs are also used for facial expression classification, which includes expressions denoted as happy, sad, angry, surprised, and so on.

SVMs are also used for the classification of images of surfaces: photos of surfaces can be fed into an SVM to determine their texture and classify them as smooth or gritty.

Text categorization refers to classifying data into predefined categories. For example, news articles can be categorized into politics, business, the stock market, or sports. Similarly, one can segregate emails into spam, non-spam, junk, and other categories.

Technically, each article or document is assigned a score, which is then compared to a predefined threshold value. The article is classified into its respective category depending on the evaluated score.
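As a hedged illustration of this scoring-and-thresholding pipeline, the sketch below uses TF-IDF features and a linear SVM from scikit-learn on a tiny, made-up corpus; the documents and category labels are purely illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Tiny made-up corpus with illustrative category labels
docs = [
    "The government announced a new election policy",
    "The stock market rallied after strong earnings",
    "The home team won the championship game last night",
    "Parliament will debate the proposed bill next week",
]
labels = ["politics", "business", "sports", "politics"]

# TF-IDF features feed a linear SVM; its decision scores play the role of the
# per-category scores that get compared against a threshold
model = make_pipeline(TfidfVectorizer(), LinearSVC()).fit(docs, labels)
print(model.predict(["The election results will be announced by the government"]))
```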

For handwriting recognition examples, the dataset containing passages that different individuals write is supplied to SVMs. Typically, SVM classifiers are trained with sample data initially and are later used to classify handwriting based on score values. Subsequently, SVMs are also used to segregate writings by humans and computers.
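For a concrete, if simplified, version of this workflow, the sketch below trains an SVM on scikit-learn's built-in handwritten digits dataset and scores it on held-out samples; it stands in for the passage-level handwriting data described above.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Built-in handwritten digits dataset: 8x8 pixel images, classes 0-9
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Train on sample data first, then score unseen handwriting, as described above
clf = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```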

In speech recognition examples, words from speeches are individually picked and separated. Further, for each word, certain features and characteristics are extracted. Feature extraction techniques include Mel Frequency Cepstral Coefficients (MFCC), Linear Prediction Coefficients (LPC), Linear Prediction Cepstral Coefficients (LPCC), and others.

These methods collect audio data, feed it to SVMs and then train the models for speech recognition.

With SVMs, you can determine whether any digital image is tampered with, contaminated, or pure. Such examples are helpful when handling security-related matters for organizations or government agencies, as it is easier to encrypt and embed data as a watermark in high-resolution images.

Such images contain more pixels; hence, it can be challenging to spot hidden or watermarked messages. However, one solution is to separate each pixel and store data in different datasets that SVMs can later analyze.

Medical professionals, researchers, and scientists worldwide have been toiling hard to find a solution that can effectively detect cancer in its early stages. Today, several AI and ML tools are being deployed for the same. For example, in January 2020, Google developed an AI tool that helps in early breast cancer detection and reduces false positives and negatives.

In such examples, SVMs can be employed, wherein cancerous images can be supplied as input. SVM algorithms can analyze them, train the models, and eventually categorize the images that reveal malign or benign cancer features.
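As an illustrative sketch only, using tabular features from scikit-learn's built-in breast cancer dataset rather than raw images, this is the kind of benign-vs-malignant SVM classifier the paragraph describes:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Tabular tumor features (benign vs. malignant) standing in for image-derived inputs
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Feature scaling matters for SVMs; the model then labels tumors benign or malignant
model = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X_train, y_train)
print("held-out accuracy:", round(model.score(X_test, y_test), 3))
```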

See More: What Is a Decision Tree? Algorithms, Template, Examples, and Best Practices

SVMs are crucial while developing applications that involve the implementation of predictive models. SVMs are easy to comprehend and deploy. They offer a sophisticated machine learning algorithm to process linear and non-linear data through kernels.

SVMs find applications in many domains and real-life scenarios where data can be handled by mapping it into higher-dimensional spaces. Using them effectively entails considering factors such as tuning hyperparameters, selecting the kernel, and investing time and resources in the training phase, all of which help develop the supervised learning models.

Did this article help you understand the concept of support vector machines? Comment below or let us know on Facebook, Twitter, or LinkedIn. We'd love to hear from you!

Follow this link:
All You Need to Know About Support Vector Machines - Spiceworks News and Insights