Three things you might have missed from the ‘Horizon3.ai Drives Global Partner-First Approach’ event – SiliconANGLE News

Posted: October 4, 2022 at 1:21 pm

For enterprise cybersecurity initiatives to be effective today, they must be continuous and proactive. Organizations simply can't risk a real breach to test their security mettle. But what does it take for cybersecurity strategies to be deemed proactive? Usually, it implies a balanced mix of observability and continuous verification.

Penetration testing has emerged as one way to continuously test the fidelity of networking and data infrastructures by mirroring an actual malicious attack. Horizon3 AI Inc. offers pentesting as a service through its NodeZero platform. NodeZero's growing popularity and appeal across a global user base, in addition to Horizon3's channel-based go-to-market strategy, were the focus of a recent livestream event.

Industry analyst John Furrier, host of theCUBE, SiliconANGLE Media's livestreaming studio, hosted the "Horizon3.ai Drives Global Partner-First Approach With Expansion of Partner Program" event. In three separate interviews, Furrier spoke with Horizon3's Rainer M. Richter, vice president of EMEA and APAC; Chris Hill, sector head for strategic accounts/federal; and Jennifer Lee, head of channel sales, Americas lead. They discussed enterprise use cases and how organizations can maintain agile cybersecurity structures. (* Disclosure below.)

Here are three insights you might have missed:

Data is the enterprise's currency, and often it's the target or conduit of a malicious attack. With companies constantly ingesting and processing unprecedented swathes of data, such an entry point must be a security priority. This call for better care extends especially to solutions providers, as they are often the direct custodians of multiple customers' data. The Horizon3/Splunk partnership perfectly exemplifies this concept, according to Hill.

"What we've been able to do with Splunk is build a purpose-built solution that allows Splunk to eat more data," Hill said. "So, Splunk itself is an ingest engine, and the great reason people buy it is to build these really fast dashboards and grab intelligence out of it. With NodeZero, sure, we do pentesting, but because we're an autonomous pentesting tool, we do it continuously."

In platform partnerships, results are the preeminent measure of value. And, yet again, the Splunk example is handy for determining NodeZero's true enterprise value. Alongside enabling multi-tier users to glean their exposed areas, it has also created visibility into high-impact data logs and enabled asset discovery, according to Hill.

"One of the cool things that we can do is actually create this low-code, no-code environment. So Splunk customers, for instance, can use Splunk SOAR to actually triage events and prioritize that event," he said.

Here's Chris Hill's complete video session:

Horizon3 has carved a niche that caters to managed service providers, managed security service providers and consultancy partner ecosystems. That spectrum is much wider, however, as the company is also entrenching itself with resale partners, systems integrators, and technology and cloud partners.

"Then we've got our cloud partners. We are in the Amazon Web Services Marketplace and we're part of the ISV Accelerate Program," Lee said. "So we're doing a lot there with our cloud partners. And, of course, we go to market with distribution partners as well."

Horizon3's NodeZero continuous autonomous penetration testing platform offers a certification program, including separate seller and operator portions, both of which are offered virtually and at no extra cost to partners, according to Lee.

"It's live virtually but not self-paced. And we also have in-person sessions as well. We also can customize these for any partners that have a large group of people. And we can do one in-person or virtual session just specifically for that partner," Lee added.

Here's Jennifer Lee's complete video session:

Horizon3 serves a diverse range of partner sizes, but it appears the smaller-sized early adopters account for a considerable share of the buzz around NodeZero, according to Richter.

"They immediately understand where the value is and that they can change their offering," he explained. "They're changing their offering in terms of penetration testing because they can do more pentests and they can then add other ones."

From previously having to source pentesting experts to get a pentest done at a particular customer, partners can now do that independently with NodeZero, according to Richter. More importantly, NodeZero isn't thought of as a replacement for the traditional pentester's job, but rather as a tool with which to do pentesting's foundational work.

"We are providing with NodeZero something like the foundational work of having an ongoing penetration testing of the infrastructure and operating system. And the pentesters by themselves can concentrate in the future on things like application pentesting, for example. So we are not killing the pentest," Richter stated.

Here's Rainer M. Richter's complete video session:

You can also watch the entire "Horizon3.ai Drives Global Partner-First Approach" event on demand below, or visit theCUBE's exclusive event website:

(* Disclosure: TheCUBE is a paid media partner for the "Horizon3.ai Drives Global Partner-First Approach" livestream event. Neither Horizon3 AI, the sponsor of theCUBE's event coverage, nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)


Uniphore Recognized as Major Contender in Conversational AI by Latest Everest Group PEAK Matrix – Business Wire

Posted: at 1:20 pm

PALO ALTO, Calif.--(BUSINESS WIRE)--Uniphore, the leader in Conversational Automation, today announced that Everest Group, a leading strategic research and analyst firm, has recognized its conversational AI and automation platform as a Major Contender in its 2022 Conversational AI Technology Vendor Landscape with Products PEAK Matrix Assessment.

Uniphore's flagship conversational AI and automation platform is the industry's only platform that delivers a complete analysis of intent, sentiment, emotion and tone for every contact center conversation. With these robust capabilities, enterprises can transform the complete customer and agent experience. Uniphore's platform combines AI, Machine Learning, RPA, Natural Language Processing (NLP), Knowledge AI and more to drive new experiences and maximize efficiency and cost savings for its customers.

"We're honored to be recognized by Everest Group as a Major Contender in the Conversational AI industry. At Uniphore, we've always been focused on the importance and the value of conversations in their entirety, and that shines through in this report," said Umesh Sachdev, CEO and co-founder at Uniphore. "Our platform approach answers the natural evolution of the Conversational AI market, which has for a long time focused solely on self-service offerings as opposed to the full spectrum that benefits both customer and agent. We look forward to continuing to lead the way in the market with our unique focus on the critical conversations of today's modern enterprises."

Everest Group's PEAK Matrix Assessment evaluated 26 global Conversational AI vendors on their market impact, as well as their vision and capability to deliver services.

In the report, many of Uniphore's strengths are highlighted, including:

"Uniphore is positioned as a Major Contender in Everest Group's Conversational AI PEAK Matrix. Uniphore offers a detailed conversational AI solution that utilizes advanced capabilities such as sentiment analysis, knowledge AI, RPA integrations, and a low-code/no-code platform," said Sharang Sharma, practice director, Everest Group. "Uniphore's built-in reports and dashboards analyze conversational data to help with strategic planning. It drives real-time contextualized engagement with customers to improve closure rates and operational efficiency. The technology vendor's recent acquisitions to strengthen capabilities in its existing native-voice channel and other areas for improved CX will be critical in securing its position in existing markets while expanding in fast-growing regions such as APAC and LATAM."

A complimentary copy of the report is available for download here.

About Uniphore

Uniphore is the global leader in Conversational Automation. Every day, billions of conversations take place across industries: customer service, sales, HR, education and more. Whether they are human to human, human to machine or machine to machine, conversations are at the heart of everything we do, and they are the new currency of the enterprise.

At Uniphore, we believe companies that best understand and take action on those conversations will win. We have built the most comprehensive and powerful platform that combines conversational AI, computer vision, emotion and tonal analysis, workflow automation, and RPA (Robotic Process Automation) with a business-user-friendly UX in a single integrated platform to transform and democratize customer experiences across industries.

Follow our blog and connect with us on LinkedIn, Twitter, Facebook, and Instagram.


Verint Named to Constellation ShortList for Conversational AI – Business Wire

Posted: at 1:20 pm

MELVILLE, N.Y.--(BUSINESS WIRE)--Verint (Nasdaq: VRNT), The Customer Engagement Company, today announced it was named to the inaugural Constellation ShortList for Conversational AI. The technology vendors and service providers included in this research deliver critical transformation initiative requirements for early adopters and fast-follower organizations.

Supported by a natural language understanding library of over 90,000 intents, Verint Conversational AI goes beyond simple question-and-answer interactions to provide actionable responses across channels including voice, social media channels, and smart speakers. These capabilities are the foundation for Verint Intelligent Virtual Assistant (IVA). Verint IVA can answer questions 24/7 in more than 40 languages, proactively assist customers, provide guided resolution, capture insights, and transfer interactions to live agents.

"Today, brands need to provide a swift and effortless customer experience on their customers' channel of choice. To remain competitive, organizations must put digital-first engagement at the top of their priority lists," says Verint's Heather Richards, vice president, GTM strategy, digital-first engagement. "Through its conversational AI capabilities, the Verint IVA solution delivers personalized, human-like interactions with customers across digital and voice channels."

Constellation considers a number of criteria when choosing solutions for its shortlist. Conversational AI solutions must integrate natural-language-understanding (NLU) capabilities, understand users and personalize conversations for each user, enable a live-agent escalation option if and when needed, and provide customizable workflow management, to name a few.

"Conversational AI (CAI) has moved away from traditional chatbots to intelligent virtual agents, often matching, or surpassing, human agents. In many instances, humans can now have intelligent conversations with machines without realizing they are talking to a machine," said Andy Thurai, vice president and principal analyst at Constellation Research. "Today's CAI systems are purpose-built for a specific domain and can solve customer problems without the need for human intervention. The combination of sentiment, tone, and emotional intelligence allows them to determine if a customer is upset and prioritize solving their issue, which helps reduce agitation."

Constellation Research advises leaders on leveraging disruptive technologies to achieve business model transformation and streamline business processes. Products and services named to the Constellation ShortList meet the threshold criteria for this category as determined through client inquiries, partner conversations, customer references, vendor selection projects, market share, and internal research. The portfolio is updated at least once per year as the analyst team deems necessary based on market conditions.

Visit Verint Conversational AI to learn more.

Disclaimer: Constellation Research does not endorse any solution or service named in its research.

About Verint

Verint (Nasdaq: VRNT) helps the world's most iconic brands, including over 85 of the Fortune 100 companies, build enduring customer relationships by connecting work, data and experiences across the enterprise. The Verint Customer Engagement portfolio draws on the latest advancements in AI and analytics, an open cloud architecture, and The Science of Customer Engagement to help customers close The Engagement Capacity Gap.

Verint. The Customer Engagement Company. Learn more at Verint.com.

This press release contains forward-looking statements, including statements regarding expectations, predictions, views, opportunities, plans, strategies, beliefs, and statements of similar effect relating to Verint Systems Inc. These forward-looking statements are not guarantees of future performance and they are based on management's expectations that involve a number of risks, uncertainties and assumptions, any of which could cause actual results to differ materially from those expressed in or implied by the forward-looking statements. For a detailed discussion of these risk factors, see our Annual Report on Form 10-K for the fiscal year ended January 31, 2022, and other filings we make with the SEC. The forward-looking statements contained in this press release are made as of the date of this press release and, except as required by law, Verint assumes no obligation to update or revise them or to provide reasons why actual results may differ.

VERINT, VERINT DA VINCI, THE CUSTOMER ENGAGEMENT COMPANY, BOUNDLESS CUSTOMER ENGAGEMENT, THE ENGAGEMENT CAPACITY GAP and THE SCIENCE OF CUSTOMER ENGAGEMENT are trademarks of Verint Systems Inc. or its subsidiaries. Verint and other parties may also have trademark rights in other terms used herein.


Learning on the edge | MIT News | Massachusetts Institute of Technology – MIT News

Posted: at 1:20 pm

Microcontrollers, miniature computers that can run simple commands, are the basis for billions of connected devices, from internet-of-things (IoT) devices to sensors in automobiles. But cheap, low-power microcontrollers have extremely limited memory and no operating system, making it challenging to train artificial intelligence models on edge devices that work independently from central computing resources.

Training a machine-learning model on an intelligent edge device allows it to adapt to new data and make better predictions. For instance, training a model on a smart keyboard could enable the keyboard to continually learn from the user's writing. However, the training process requires so much memory that it is typically done using powerful computers at a data center, before the model is deployed on a device. This is more costly and raises privacy issues, since user data must be sent to a central server.

To address this problem, researchers at MIT and the MIT-IBM Watson AI Lab developed a new technique that enables on-device training using less than a quarter of a megabyte of memory. Other training solutions designed for connected devices can use more than 500 megabytes of memory, greatly exceeding the 256-kilobyte capacity of most microcontrollers (there are 1,024 kilobytes in one megabyte).

The intelligent algorithms and framework the researchers developed reduce the amount of computation required to train a model, which makes the process faster and more memory efficient. Their technique can be used to train a machine-learning model on a microcontroller in a matter of minutes.

This technique also preserves privacy by keeping data on the device, which could be especially beneficial when data are sensitive, such as in medical applications. It also could enable customization of a model based on the needs of users. Moreover, the framework preserves or improves the accuracy of the model when compared to other training approaches.

"Our study enables IoT devices to not only perform inference but also continuously update the AI models to newly collected data, paving the way for lifelong on-device learning. The low resource utilization makes deep learning more accessible and can have a broader reach, especially for low-power edge devices," says Song Han, an associate professor in the Department of Electrical Engineering and Computer Science (EECS), a member of the MIT-IBM Watson AI Lab, and senior author of the paper describing this innovation.

Joining Han on the paper are co-lead authors and EECS PhD students Ji Lin and Ligeng Zhu, as well as MIT postdocs Wei-Ming Chen and Wei-Chen Wang, and Chuang Gan, a principal research staff member at the MIT-IBM Watson AI Lab. The research will be presented at the Conference on Neural Information Processing Systems.

Han and his team previously addressed the memory and computational bottlenecks that exist when trying to run machine-learning models on tiny edge devices, as part of their TinyML initiative.

Lightweight training

A common type of machine-learning model is known as a neural network. Loosely based on the human brain, these models contain layers of interconnected nodes, or neurons, that process data to complete a task, such as recognizing people in photos. The model must be trained first, which involves showing it millions of examples so it can learn the task. As it learns, the model increases or decreases the strength of the connections between neurons, which are known as weights.

The model may undergo hundreds of updates as it learns, and the intermediate activations must be stored during each round. "In a neural network, the activation is the middle layer's intermediate results. Because there may be millions of weights and activations, training a model requires much more memory than running a pre-trained model," Han explains.

Han and his collaborators employed two algorithmic solutions to make the training process more efficient and less memory-intensive. The first, known as sparse update, uses an algorithm that identifies the most important weights to update at each round of training. The algorithm starts freezing the weights one at a time until it sees the accuracy dip to a set threshold, then it stops. The remaining weights are updated, while the activations corresponding to the frozen weights don't need to be stored in memory.

"Updating the whole model is very expensive because there are a lot of activations, so people tend to update only the last layer, but as you can imagine, this hurts the accuracy. For our method, we selectively update those important weights and make sure the accuracy is fully preserved," Han says.

Their second solution involves quantized training, which simplifies the weights, typically stored as 32-bit numbers. An algorithm rounds the weights so they are only eight bits, through a process known as quantization, which cuts the amount of memory for both training and inference. Inference is the process of applying a model to a dataset and generating a prediction. Then the algorithm applies a technique called quantization-aware scaling (QAS), which acts like a multiplier to adjust the ratio between weight and gradient, to avoid any drop in accuracy that may come from quantized training.
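The rounding step can be illustrated with a standard symmetric per-tensor quantization scheme. This is a generic sketch of 8-bit quantization, not the paper's implementation, and it omits QAS; the function names are invented for illustration.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor quantization of float32 weights to int8.

    The largest-magnitude weight maps to 127, so one float32 scale
    factor is stored alongside the 8-bit integer tensor.
    """
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 weights from the int8 tensor."""
    return q.astype(np.float32) * scale

# Round trip: each 32-bit weight is replaced by an 8-bit integer,
# a 4x reduction in weight memory, at the cost of small rounding error.
w = np.array([0.5, -1.27, 0.01, 1.0], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
```

Quantizing gradients as well as weights is what makes *training* (not just inference) fit in so little memory, and it is the resulting mismatch between weight and gradient magnitudes that QAS is described as correcting.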

The researchers developed a system, called a tiny training engine, that can run these algorithmic innovations on a simple microcontroller that lacks an operating system. This system changes the order of steps in the training process so more work is completed in the compilation stage, before the model is deployed on the edge device.

"We push a lot of the computation, such as auto-differentiation and graph optimization, to compile time. We also aggressively prune the redundant operators to support sparse updates. Once at runtime, we have much less workload to do on the device," Han explains.

A successful speedup

Their optimization only required 157 kilobytes of memory to train a machine-learning model on a microcontroller, whereas other techniques designed for lightweight training would still need between 300 and 600 megabytes.

They tested their framework by training a computer vision model to detect people in images. After only 10 minutes of training, it learned to complete the task successfully. Their method was able to train a model more than 20 times faster than other approaches.

Now that they have demonstrated the success of these techniques for computer vision models, the researchers want to apply them to language models and different types of data, such as time-series data. At the same time, they want to use what they've learned to shrink the size of larger models without sacrificing accuracy, which could help reduce the carbon footprint of training large-scale machine-learning models.

"AI model adaptation/training on a device, especially on embedded controllers, is an open challenge. This research from MIT has not only successfully demonstrated the capabilities, but also opened up new possibilities for privacy-preserving device personalization in real time," says Nilesh Jain, a principal engineer at Intel who was not involved with this work. "Innovations in the publication have broader applicability and will ignite new systems-algorithm co-design research."

"On-device learning is the next major advance we are working toward for the connected intelligent edge. Professor Song Han's group has shown great progress in demonstrating the effectiveness of edge devices for training," adds Jilei Hou, vice president and head of AI research at Qualcomm. "Qualcomm has awarded his team an Innovation Fellowship for further innovation and advancement in this area."

This work is funded by the National Science Foundation, the MIT-IBM Watson AI Lab, the MIT AI Hardware Program, Amazon, Intel, Qualcomm, Ford Motor Company, and Google.
