Daily Archives: March 8, 2022

What business executives need to know about AI – VentureBeat

Posted: March 8, 2022 at 11:06 pm


Virtually every enterprise decision-maker across the economic spectrum knows by now that artificial intelligence (AI) is the wave of the future. Yes, AI has its challenges and its ultimate contribution to the business model is still largely unknown, but at this point it's not a matter of whether to deploy AI but how.

For most of the C-suite, even those running the IT side of the house, AI is still a mystery. The basic idea is simple enough: software that can ingest data and make changes in response to that data. But the details surrounding its components, implementation, integration and ultimate purpose are a bit more complicated. AI isn't merely a new generation of technology that can be provisioned and deployed to serve a specific function; it represents a fundamental change in the way we interact with the digital universe.

So even as the front office is saying yes to AI projects left and right, it wouldn't hurt to gain a more thorough understanding of the technology to ensure it is being employed productively.

One of the first things busy executives should do is gain a clear understanding of AI terms and the various development paths currently underway, says Mateusz Lach, AI and digital business consultant at Nexocode. After all, it's difficult to push AI into the workplace if you don't understand the difference between AI, ML, DL and traditional software. At the same time, you should have a basic working knowledge of the various learning models being employed (reinforcement, supervised, model-based), as well as the ways AI is used (natural language processing, neural networking, predictive analysis, etc.).
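To make one of those terms concrete: in supervised learning, a model is fit to labeled historical examples and then used to predict labels for new cases. The short scikit-learn sketch below illustrates the idea; the churn-style features and numbers are invented purely for illustration.

```python
# Minimal supervised-learning sketch: fit a model to labeled examples, then predict.
# The "churn" features and values are invented for illustration only.
from sklearn.linear_model import LogisticRegression

# Each row: [monthly_spend, support_tickets]; label: 1 = customer churned, 0 = stayed
X_train = [[120, 0], [40, 5], [95, 1], [30, 7]]
y_train = [0, 1, 0, 1]

model = LogisticRegression()
model.fit(X_train, y_train)        # "training" means fitting to the labeled data

print(model.predict([[50, 4]]))    # estimate churn for a new, unlabeled customer
```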

With this foundation in hand, it becomes easier to see how the technology can be applied to specific operational challenges. And perhaps most importantly, understanding the role of data in the AI model, and how quality data is of prime importance, will go a long way toward making the right decisions as to where, when and how to employ AI.

It should also help to understand where the significant challenges lie in AI deployment, and what those challenges are. Tech consultant Neil Raden argues that the toughest going lies in the last mile of any given project, where AI must finally prove that it can solve problems and enhance value. This requires the development of effective means of measurement and calibration, preferably with the capability to place results in multiple contexts given that success can be defined in different ways by different groups. Fortunately, the more experience you gain with AI the more you will be able to automate these steps, and this should lessen many of the problems associated with the last mile.

Creating the actual AI models is best left to the line-of-business workers and data scientists who know what needs to be done and how to do it, but it's still important for the higher-ups to understand some of the key design principles and capabilities that differentiate successful models from failures. Andrew Clark, CTO at AI governance firm Monitaur, says models should be designed around three key principles:

Models should also exhibit a number of other important qualities, such as reperformance (that is, consistency), interpretability (the ability to be understood by non-experts), and a high degree of deployment maturity, preferably using standard processes and governance rules.

Like any enterprise initiative, the executive view of AI should center on maximizing reward and minimizing risk. A recent article from PwC in the Harvard Business Review highlights some ways this can be done, starting with the creation of a set of ethical principles to act as a north star for AI development and utilization. Equally important is establishing clear lines of ownership over each project, as well as building a detailed review and approval process at multiple stages of the AI lifecycle. But executives should guard against letting these safeguards become stagnant, since both the economic conditions and regulatory requirements governing the use of AI will likely be highly dynamic for some time.

Above all, enterprise executives should strive for flexibility in their AI strategies. Like any business resource, AI must prove itself worthy of trust, which means it should not be released into the data environment until its performance can be assured, and even then never in a way that cannot be undone without painful consequences to the business model.

Yes, the pressure to push AI into production environments is strong and growing stronger, but wiser heads should know that the price of failure can be quite high, not just for the organization but individual careers as well.



Universities meet to discuss future of AI and data science in agriculture – University of Florida

Posted: at 11:06 pm

Signaling its ongoing commitment to collaboration in the areas of artificial intelligence and data science, the University of Florida is participating in an academic conference to address the potential of artificial intelligence, robotics and automation in agriculture.

The conference, titled "Envisioning 2050 in the Southeast: AI-driven Innovations in Agriculture," is hosted March 9-11 by the Auburn University College of Agriculture and funded by the U.S. Department of Agriculture National Institute of Food and Agriculture.

Conference speakers include Hendrik Hamann, a distinguished research staff member and chief scientist for the future of climate at IBM Research; Mark Chaney, engineering manager of the automation delivery teams at John Deere's Intelligent Solutions Group; Steven Thomson, a national program leader with the USDA National Institute of Food and Agriculture; and dozens more.

Speakers from academia, the federal government and industry will share their work in areas such as crop production, plant and animal breeding, climate, agricultural extension, pedagogy, food processing and supply chain, livestock management and more.

"The Envisioning 2050 in the Southeast: AI-Driven Innovations in Agriculture conference will bring together academics, industry and stakeholders to share their expertise and develop a vision for the future," said Arthur Appel, interim associate dean of research for the Auburn College of Agriculture. "Attendees will be able to learn about the depth and breadth of AI in agriculture from the experts who are making the promise of AI a reality."

Kati Migliaccio, co-organizer of the conference and chair of the Department of Agricultural and Biological Engineering at the University of Florida, said the timing of the conference is perfect.

"This is an opportune time to host this conference focusing on AI in agriculture in the Southeast because of the resources invested in AI, the state of innovation of AI in agriculture and the critical need to adapt agriculture for current world challenges, including labor, nutrition, energy and climate," she said.

In November, the chief academic officers of the 14 member universities in the Southeastern Conference (SEC) announced formation of an artificial intelligence and data science consortium for workforce development, designed to grow opportunities in the fast-changing fields of AI and data science.

Believed to be the first athletics conference collaboration to have such a focus, the SEC Artificial Intelligence Consortium enables SEC universities to share educational resources, such as curricular materials, certificate and degree program structures, and online presentations of seminars and courses; promote faculty, staff, and student workshops and academic conferences such as today's event at Auburn; and seek joint partnerships with industry.

Joe Glover, provost and senior vice president for academic affairs at the University of Florida, which is leading the SEC-wide effort, said, "AI is changing nearly every sector of society, and the SEC is uniquely positioned to engage students, faculty, and staff in one of the most transformational opportunities of our time. The combined strength of our institutions gives us the opportunity to advance in how we process the future of teaching and learning, research and economic development and how we can provide leadership at this critical moment when AI and data science are changing the way we think about small tasks and big questions."

The Auburn University office of communications and marketing and the SEC communications office contributed to this story.


Everyone’s Seeking AI Engineers: Here’s What They Want – thenewstack.io

Posted: at 11:06 pm

There's no doubt: Machine learning and artificial intelligence are the hot specialties in IT right now, but filling those jobs is proving to be tough.

In a September Gartner survey of over 400 global IT organizations, 64% of IT executives said that a lack of skilled talent was the biggest barrier to adoption of emerging technologies, compared with 4% the previous year.

Companies are looking for employees with specific training, skills and personal traits to fill positions: STEM degrees, credentials specific to AI and machine learning, practical hands-on experience, and certain soft skills are all considered when deciding whether to hire a candidate.

The current push to find AI developers and engineers makes the shortage of candidates undeniable, and makes hiring particularly grueling.

In a November survey of over 2,500 human resources and engineering personnel by HackerEarth, a software company that helps organizations with their technical hiring needs, 30% of respondents said they're expecting to hire more than 100 developers in the coming year.

With goals that ambitious, a significant portion of those hiring managers are so pressed for talent that they're willing to compromise their standards. Nearly 35% of engineering managers said they would compromise on candidate quality to fill an opening more quickly, and nearly 24% of HR managers said the same.

According to the survey, AI and ML experts are in high demand this year, with demand exceeding supply.


"It's a candidate's market out there," said Vishwastam Shukla, chief technology officer for HackerEarth.

With companies from all industries looking to hire, larger organizations have the advantage of being able to offer bigger salaries and plusher benefits, he acknowledged. But he's seeing smaller employers and fast-growing startups put up a good fight for candidates.

One of the most popular tactics, Shukla said, is "to actually inculcate a culture of learning and development within the organization."

The AI positions companies are looking to fill have become narrower and more specialized.

Job requirements vary wildly depending on a company's size, how mature they are, their data infrastructure and what kind of projects they're working on, said Bradley Shimmin, chief analyst for AI platforms, analytics and data management at global analyst firm Omdia.

"Five years ago, data scientist was considered the hottest job on the planet, and we were talking about data scientists as unicorns in that they possessed a number of very specific skills: mathematical, statistical, business and communication," he said.

Companies realized early on that they couldn't operationalize with just a few jack-of-all-trades data scientists.

"Trying to scale with them was impossible financially and that, coupled with the creation of MLOps platforms, really spawned a diversification for the job role and a slicing off of aspects of that job," said Shimmin. "What we have today is a rich tapestry of interrelated jobs or personas that all go into creating a data science or AI outcome in the enterprise."

The job titles of AI and ML engineers and developers cover a wide variety of tasks and responsibilities, but there's a lot of overlap.

A necessary background for a potential employee starts with programming experience and a college degree.

Companies specifically look for:

And companies are hiring for everything from basic entry-level positions to more advanced roles.

"We're just hiring at all levels," said Valerie Junger, chief people officer at Quantcast, a technology company that focuses on AI-driven real-time advertising.

Machine learning engineers have to be fluent in Java, C++, Python, or similar development languages, and need anything from a master's degree to a Ph.D., depending on the role, she said.

Just having a general computer science degree isn't enough; recruiters look for an applicant who's taken specific courses in AI and ML.

"In the past, I would check that applicants had a math or STEM background only," said Rosaria Silipo, head of data science evangelism at KNIME, a data-analytics platform company. "Now, with the proliferation of college programs and online courses, I check if they have any credentials specific to machine learning or data science."

The requirements for a machine learning engineer have changed, said Omdia's Shimmin: "All the platform players (Microsoft, Google, Amazon, and others) are setting up certification programs."

"You don't need to have a Ph.D.; you can take whatever time it takes to prove certification as a machine learning engineer, or a data learning engineer, or as a machine learning specialist, and you can put that to work," he said. "You can have a bachelor's or a master's and still get into this area."

Pursuing specific credentials can lead to better jobs, or to a pay bump in a current position.

According to an October survey of over 3,000 data and AI professionals by learning company O'Reilly, 64% said they took part in training or obtained skills to build their professional skills, and 61% participated in training or earned certifications to get a salary increase or promotion.

And over a third of those polled dedicated more than 100 hours to training. Those survey participants reported an average salary increase of $11,000.

Entering competitions or hackathons can make a person stand out in a pool of prospective AI/ML candidates who have similar degrees and credentials.

For candidates, entering hackathons is a way to connect with companies and learn a lot about how an organization works.


For an organization looking to hire a lot of people quickly, hackathons can provide a bounty of leads.

"Hackathons let you create this warm pool of talent, because a lot of times when you actually go out to hire in the market, you may not be able to source the right kind of candidates with the right skill sets at a short notice," said HackerEarth's Shukla.

For entry-level candidates, one of the most direct ways to learn how a company operates is through interning, and a company can see if they're a good fit.

"We try to bring on interns who we can get to know before they graduate and they get to experience our culture beforehand," said David Karandish, founder and CEO at enterprise AI software-as-a-service company Capacity.

"We really lead with 'Hey, here's the type of work you're going to do here.' And we like people who are excited about the work that they do."

In the DevOps era, teams need to be increasingly cross-functional as businesses and data-driven product development come together. Good communication and collaboration skills are considered as important as a degree or a certification.

AI professionals need to explain complex topics often across multiple time zones, in a remote work setting, and be understood by a wide variety of people with various levels of technical knowledge.

"No one person is ever going to know how every single thing works," noted Karandish. So organizations need people who can collaborate and coordinate together, and know when to ask for help or to bring up an important issue.

"It's knowing when to ask, 'Are we going down the right path or not, or is there a different approach?'"

And attitude goes hand-in-hand with collaboration.

"Nobody wants to be working with a jerk," he said. "They tend to not be collaborative and tend to take credit when credit isn't due. So we'd like people with a high-talent-to-low-ego ratio in general."

A wide variety of companies are hiring, and an AI professional needs to understand the specific issues theyre trying to solve for their employer.

"They need to have the proper domain knowledge to be able to provide precise recommendations and critically evaluate different work models," said Kamyar Shah, CEO at World Consulting Group.

To design self-running software for businesses and customers, they need to understand both the company and the issues their designs solve for that company, he said.

Problem-solving is another highly valued skill: not just understanding what a problem is, but being able to come up with new solutions.

"A big aspect of ML and AI is creating playbooks that have not been built before," said Wilson Pang, CTO of data company Appen. "A developer needs to have the ability to try new techniques, test and learn, and continually grow through keeping up with industry trends."


How Ivanti hopes to redefine cybersecurity with AI – VentureBeat

Posted: at 11:06 pm


Widening gaps in cybersecurity tech stacks are leaving enterprises vulnerable to debilitating attacks. Making matters worse, there are often conflicting endpoint, patch management and patch intelligence systems that each support only a small subset of all devices. CISOs tell VentureBeat that gaps in their cybersecurity tech stacks are getting wider because their legacy systems can't integrate unified endpoint management (UEM), asset management, IT service management (ITSM) and cost management data in real time to optimize cybersecurity deterrence strategies and spending.

Ivanti's quickness in using AI and machine learning to take on these challenges is noteworthy. In less than eighteen months, the company has delivered its AI-based Ivanti Neurons platform to enterprise customers and continued to innovate on it. Ivanti first introduced the Neurons platform in July 2020, empowering organizations to autonomously self-heal and self-secure devices and self-service end users.

Since then, Ivanti has released updates and added innovations to the platform on a quarterly basis to further help customers quickly and securely embrace the future of work. For example, Ivanti recently released Ivanti Neurons for Zero Trust Access, the first AI-based solution to support organizations fine-tuning their zero trust frameworks. The company also introduced Ivanti Neurons for Patch Management, a cloud-native solution that enables IT teams to efficiently prioritize and remediate the vulnerabilities that pose the most danger to their organizations.

In the same period, Ivanti acquired MobileIron, Pulse Secure, Cherwell, RiskSense, and the Industrial Internet of Things (IIoT) platform owned by the WIIO Group. Its total addressable market has doubled due to these acquisitions, reaching $30 billion this year and growing to $60 billion by 2025. Ivanti has 45,000 customers, providing cybersecurity systems and platforms for 96 of the Fortune 100.

Ivanti is successfully scaling its AI-based Neurons platform across multiple gaps in enterprises' cybersecurity tech stacks. VentureBeat recently spoke with Ivanti's CEO, Jeff Abbott, and president and chief product officer Nayaki Nayyar to gain further insight into Ivanti's growth and success. The company's executives detailed how Ivanti's approach to integrating AI and machine learning into its Neurons platform will help its customers anticipate, deter and learn from a wide variety of cyberattacks.

VentureBeat: Why do new customers choose an AI-based solution like Ivanti Neurons over the competing, substitute solutions in the market?

Jeff Abbott: We're looking to AI, machine learning, and related technologies to create a richer experience for our customers while continually delivering innovative and valuable new capabilities. We're leveraging AI and machine learning bot technology to solve common challenges that our customers are facing. The example I like is discovery: the process of understanding what's on a network. I talk to customers all the time, and one that comes to mind is a superintendent of a school district who said, "Every six months we send out teams to go to all the various locations of various schools and see what's on the network physically, or we run protocols on site. Now with your bot technology, we can do that on a nightly basis and discover what's there." That's an example of how our unified platform increases visibility for our customers, while continually staying on top of security standards.

It's fascinating to consider all the opportunities the metadata from UEM, IT service management (ITSM) / IT asset management (ITAM), and cost management systems provide. Having the metadata from all three systems on a single pane of glass opens up a great deal that we can tell customers about their operations, down to the device level. Creating a data lake based on the metadata becomes a powerful tool. Having a broad base of contextual data to analyze with the Ivanti Neurons platform enables us to gain a new understanding of what's happening. We're relying on AI and machine learning in the context of the Ivanti Neurons platform to scale from providing basic information up to contextually intelligent insights our customers can use to grow their businesses.

Nayaki Nayyar: I was in the oil and gas industry for 15 years, working with Shell and Valero Energy for many years. So, I've lived in the customer's shoes and can empathize with three big problems they're facing today, regardless of the industry they are in.

The first is the explosive growth of edge devices, including mobile devices, laptops, desktops, wearables and, to some extent, IoT devices. That's a big challenge that everyone has to address. Then the second problem is ransomware. Not a single day goes by without a ransomware attack. And the third is how to provide a great customer experience that equals the quality of everyday consumer experiences. Solving how to bring a consumer-grade experience into an enterprise context is an area we're prioritizing today.

Our goal is to automate tasks beneath the user experience layer of our applications, so our customers don't have to worry about them; let AI, machine learning, and deep learning capabilities heal endpoints, using intelligent bots for endpoint discovery, self-healing, asset management and more. Our goal is to provide customers with an experience where the routine tasks are managed autonomously, so they don't have to. The Ivanti Neurons platform is designed to take on these challenges and more.

VentureBeat: How are you fine-tuning your algorithms to fight ransomware so that your customers don't have to become data scientists or consider recruiting a data scientist?

Nayaki Nayyar: I will highlight two distinct AI capabilities that we have to address your exact question on preventing ransomware. We have what we call Ivanti Neurons for Edge Intelligence, which provides a 360-degree view of all the devices across a network, and using NLP, we've designed the platform so it's flexible enough to respond to questions and queries. An example would be, "How many devices on my network are not patched correctly or have not been patched for these specific vulnerabilities?" The Ivanti Neurons platform will automatically respond to simple text-based and keyword searches. So, our customers can ask a question using natural language, and the system will respond to it.

We've also developed deep expertise in text ranking. We mine data from various social channels, including Twitter, Reddit, and publicly available sources. We then do sentiment analysis on various Common Vulnerabilities and Exposures (CVEs) that are trending and sentiment analysis on the patches. Then we provide those insights in Ivanti Neurons for Patch Intelligence. Using NLP, sentiment analysis, and AI, Ivanti Neurons for Patch Intelligence provides our customers' administrators with the insights they need to prioritize which CVEs pose the highest risks for their organization and then remediate those issues immediately. That doesn't require our customers to employ data scientists. All of that is embedded into our stack, and we make it simple for customers to consume.

Jeff Abbott: We're also constantly doing research on ransomware and vulnerabilities. In fact, we just released our Ransomware Spotlight Year-End Report. The analysis shows that the bad actors target organizations that are not keeping up with CVEs.

Not keeping up with zero-day vulnerabilities and failing to define a plan for addressing them can make any organization a gazelle in the middle of the field. So, as Nayaki said, we're providing patch intelligence to help our customers prioritize which vulnerabilities are most important to address first. One of the factors that led to us acquiring RiskSense is their extensive data set on detection. We're using the data to provide forward intelligence on open vulnerabilities and help our customers anticipate and fix them quickly. We're seeing that our mid-tier and SMB accounts need patch intelligence as much as our enterprise customers.
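Ivanti has not published the internals of this scoring, but the general pattern both executives describe (combining a CVE's static severity with trending signals mined from public chatter) can be sketched roughly as follows. The weights, mention counts, and sentiment values below are invented assumptions, not Ivanti's actual model.

```python
# Rough sketch of CVE prioritization blending severity with trending chatter.
# Weights, mention counts, and sentiment inputs are invented; this is not Ivanti's model.
from dataclasses import dataclass

@dataclass
class CveSignal:
    cve_id: str
    cvss: float        # 0-10 severity from the CVE record
    mentions_7d: int   # assumed input: how often the CVE was discussed this week
    sentiment: float   # assumed input: -1 (alarmed chatter) .. +1 (benign)

def priority(sig: CveSignal) -> float:
    trend = min(sig.mentions_7d / 100, 1.0)   # normalize chatter volume
    alarm = max(-sig.sentiment, 0.0)          # only negative sentiment raises the score
    return 0.6 * (sig.cvss / 10) + 0.25 * trend + 0.15 * alarm   # arbitrary weights

signals = [
    CveSignal("CVE-2021-44228", 10.0, 500, -0.9),   # severe and heavily discussed
    CveSignal("CVE-2020-0601", 8.1, 12, -0.2),      # severe but quiet this week
]
for s in sorted(signals, key=priority, reverse=True):
    print(s.cve_id, round(priority(s), 3))
```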

VentureBeat: How does AI deliver measurable value for customers? How do you quantify that and know you are meeting expectations with customers, that youre delivering value?

Nayaki Nayyar: For many years, solving security, IT or asset issues was a reactive process. A customer called or filed a ticket right after the issue happened, reporting the issue. The ticket was created, then routed to the right service desk agent to solve it. But that took too much time; it could be ten days or even a month before the ticket was resolved.

The Ivanti Neurons platform is designed to detect security, IT, asset, endpoint, or discovery issues before the end user knows the issue will happen. Our bots are also designed to be self-healing, and they can detect whether it's configuration drift that has happened on a device, a security anomaly or a performance issue. Bots automatically heal those issues, so end users don't even have to create a ticket and route the ticket to get a resolution.

If we can help customers reduce the number of issues by 30% or more before end users even create tickets, then that represents a massive cost saving. Not to mention the speed and accuracy at which those services are provided.
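Ivanti has not detailed how its bots are built, but the detect-then-heal loop Nayyar describes can be sketched in a few lines. The baseline settings, device record, and remediate() helper below are hypothetical illustrations, not Ivanti's API.

```python
# Hypothetical detect-and-heal loop for configuration drift; not Ivanti's actual API.
EXPECTED = {"firewall": "on", "disk_encryption": "enabled", "agent_version": "5.2"}

def detect_drift(device_config: dict) -> dict:
    """Return the settings that have drifted from the approved baseline."""
    return {k: v for k, v in EXPECTED.items() if device_config.get(k) != v}

def remediate(device_id: str, fixes: dict) -> None:
    # Placeholder: a real bot would push corrected settings back to the endpoint.
    print(f"healing {device_id}: resetting {sorted(fixes)}")

device = {"id": "laptop-042", "firewall": "off", "disk_encryption": "enabled", "agent_version": "5.2"}
drift = detect_drift(device)
if drift:
    remediate(device["id"], drift)   # fixed silently, before a ticket is ever filed
```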

VentureBeat: Which customer needs are the most urgent and best met by expanding the AI capabilities of your Ivanti Neurons platform?

Nayaki Nayyar: Today, discovering unknown assets or endpoints is an urgent, high-priority requirement. The greatest challenge is blind-spot detection within an organization. We've architected Ivanti Neurons to detect blind spots across enterprise networks. Our customers are using Neurons to identify assets regardless of their locations, whether they are in data centers, cloud assets, endpoints, or IoT assets.

Discovery is most often step one for our customers on the Ivanti Neurons platform because it helps them turn their unknown assets into known assets immediately. They don't need to remediate and self-heal devices right away; that can come later in the asset cycle. Ivanti Neurons for Discovery is a critically important solution that customers get immediate benefit from and then can expand upon.

Most customers have what we call a Frankenstein's mess of tools and technologies to manage their devices. By combining our Neurons platform with the technologies from our recently acquired companies, we're now providing a single pane of glass, so an analyst can log in, see what device types are on the network, and manage any endpoint security or asset management problems right from there.

Jeff Abbott: Patching is overly complex and time-consuming, and that's a huge problem our customers also face. Ivanti Neurons for Patch Management and Patch Intelligence help solve those challenges for our customers. We're focused on improving user experiences to make AI and NLP-based patch management and intelligence less intimidating. Our focus is specifically on helping our customers keep up with the latest zero-day vulnerabilities and CVEs that could impact them. We focus on solving the biggest risk areas first using Ivanti Neurons, alleviating the time-consuming work our customers would otherwise have to go through.

VentureBeat: What are the Ivanti Neurons platform's top three design goals, and how do you benchmark success for those?

Jeff Abbott: Our primary goals are for the Ivanti Neurons platform to discover devices, and then self-heal and self-secure themselves using AI-based workflows and technologies. Our internal research shows that customers using Neurons are experiencing over 50% reductions in support call times. They're also eliminating duplicate work between IT operations and security teams and reducing the number of vulnerable devices by 50%. These stats are all from customer surveys and anonymized actual results. Ivanti Neurons is also contributing to reducing unplanned outages by 63%.

Nayaki Nayyar: Adding to what Jeff said, the entire architecture is container-based. We leverage containers that are cloud-agnostic, meaning we can deploy them anywhere. So, one goal is not just to deploy to the cloud, but also to drop these containers on the edge in the future so that we can process those workloads at the edge, closer to where the data is getting generated.

The platform is also all API-based, so the integration we do within the stack is all based on APIs. This means that our customers don't need to have the entire stack. They can start anywhere and evolve at their own pace. They can start in the security space with patch management and move from there. Or they can start in service management or discovery. They can start anywhere and evolve everywhere. And we also recognize that they don't need to have just Ivanti's entire stack. They can be using two or three pillars from us and other systems and platforms from other vendors.

VentureBeat: Do you see customers moving to an AI-based platform to scale zero trust initiatives further out?

Nayaki Nayyar: Yes, we have a large manufacturing customer who was evolving from VPN-based access to zero trust. This is a big paradigm shift. With VPN-based access, you're pretty much giving users access to everything, whereas with a zero-trust approach, you're continuously validating and authenticating every application access. As the customer was switching to zero trust, their employees were running into many access-denied issues. The volume of tickets coming into the service desk spiked by 500%.

The manufacturing customer started using Ivanti Neurons with AI and ML-based bots to detect what kind of access issues users were having and self-heal those issues based on the right amount of access. The ticket volume immediately went down. So, it was a great example of customers evolving beyond VPN to zero trust access; our technology can help customers advance zero-trust and solve access challenges.

VentureBeat: What additional verticals are you looking at beyond healthcare? For example, will there be an Ivanti Neurons for Supply Chain Management, given how constrained supply chains have become over the last year to eighteen months?

Nayaki Nayyar: I'm extremely passionate about IoT and what's happening with edge devices today. The transformation that we see at the edge is phenomenal. We're designing support for edge devices into the Ivanti Neurons platform today, giving our customers the flexibility of managing IoT assets.

Healthcare is one of the verticals where we have gone deep into discovering and managing our customers many healthcare devices, especially those you see in a hospital setting like Kaiser.

Manufacturing facilities, or the shop floor, are another area we are exploring. Our customers have different types of ruggedized IoT devices on the shop floor, and we can apply the same principles of discovering, managing, and securing those IoT assets. In the future, we also plan on extending into the telco space. We have large telcos as customers, and they've been asking us to go more and more into the telco IoT world.

Our telco customers also tell us they would like to see greater support for ruggedized devices their field technicians use out in the field. Retailers are also expressing an interest in supporting ruggedized devices, which is an area were exploring today.

Jeff Abbott: The public sector, comprising federal, state, and local government, has unique requirements, which Nayaki and I have discussed several times. Many capabilities for vertical markets are still very horizontal. We're seeing that as organizations discover the nuances of their use of edge computing and edge technology, more specialized vertical market requirements will become more dominant. I think we're covering 90% or more of the security requirements now. That's especially the case in discovery, patch management, and patch intelligence.

VentureBeat: How do you integrate an AI-based platform into a legacy system tech stack or infrastructure? What are the most valuable technologies for accomplishing that, for example, APIs?

Nayaki Nayyar: We have a pretty strong connector base with existing systems. I won't call them legacy; we need to coexist with existing systems, as many have been installed for 10 to 15 years at a minimum in many organizations. To accomplish this, we have 300 or more connectors out of the box that can be leveraged by our customers, resellers, and partners. We're committed to continually strengthening our ecosystem of partners to provide customers with the options they need for their unique integration requirements.

VentureBeat: Could you share the top three lessons Ivanti has learned, designing intuitive user experiences to guide users using AI-based applications?

Jeff Abbott: I think the most important lesson learned is to provide every customer, from SMBs to enterprises, data-driven insights that validate AI is performing appropriately. That means ensuring that self-healing, self-servicing, and all supporting aspects of Ivanti Neurons protect customers' assets while also contributing to more efficient ITSM performance.

When it comes to preventing ransomware attacks, the key is to always provide users with the option of performing an intuitive double-check. One day your organization could be very healthy. But, on the other hand, you may not be paying attention to the intuitive signals from AI, which could lead to the organization falling victim to an attack. Taking an active position on security, which includes knowing your organization's tools and understanding what they can achieve, is important.

Nayaki Nayyar: User experiences require a three-pronged approach. Start by concentrating first on humans in the loop, recognizing the unique need for contextual intelligence. Next, add the need for augmented AI, and then the last level of maturity is humans out of the loop.

For customers, this translates into taking the three layers of maturity and identifying how and where user experience designs deliver more contextual intelligence. The goal with Ivanti Neurons is to remove as many extraneous interactions with users as possible, saving their time only for the most unique, complex decision trade-offs that need to be made. Our goal is to streamline routine processes, anticipate potential endpoint security, patch management, and ITSM-related tasks, and handle them before a user sees their impact on productivity and getting work done.

VentureBeat: With machine learning models so dependent on repetitive learning, how did you design the Ivanti Neurons platform and related AI applications to continually learn from data without requiring customers to have data scientists on staff?

Nayaki Nayyar: We're focused on making Ivanti Neurons as accessible as possible to every user. To achieve that, we've created an Employee Experience Score, a methodology to identify how effective our customers' experiences are on our platform. Using that data, we can tell which application workflows need the most work to further improve usability and user experiences and which ones are doing so well that we can use them as models for future development.

We're finding this approach to be very effective in quantifying [whether] we're meeting expectations or not by individual, employee, division, department, and persona. This approach immediately gets organizations out of using ticket counts as a proxy for user experience. Closing tickets is not the only SLA that needs to be measured. It's more important to quantify the entire experience and seek new ways to improve it.

VentureBeat: How do you evaluate potential acquisitions, given how your product and services strategy moves in an AI-centric direction? What matters most in potential acquisitions?

Jeff Abbott: We're prioritizing smaller acquisitions that deliver high levels of differentiation via their unique technologies first, followed by their potential contributions to our total addressable markets. We're considering potential acquisitions that could strengthen our vertical tech stack in key markets. We're also getting good feedback directly from customers and our partners on where we should look for new acquisitions. But I'd like to be clear that it's not just acquisitions.

We also have very interesting partnerships forming across industries, focusing on telco carriers globally. Some of the large hardware providers have also proposed interesting joint go-to-market strategies, which we think will be groundbreaking with the platform. We're also looking at partnerships that create network effects across our partner and customer base. That's what we're after in our partnership strategy, especially regarding the interest we're seeing on the part of large telco providers today. So, we're going to be selective. We will go after those that put us in a differentiation category. The good news is that many nice innovative companies are getting to that level of maturity.

Where we can partner with or acquire them, we're focused on not disrupting the trajectory they're on. It creates a much bigger investment portfolio to continue to advance those solutions.

Nayaki Nayyar: We're very deliberate in what acquisitions we do, for two primary reasons. One is to strengthen the markets that we play in. We compete in three markets today, and our recent acquisitions strengthen our position in each. Our goal is to be among the top two or top three in each market we're competing in. An integral part of our acquisition strategy is looking at how a potential acquisition can increase our entire addressable market and gain access to adjacent markets that we can start to grow in.

We are in three markets: UEM, security, and service management. As we're converging these three pillars into our Ivanti Neurons platform, we are evolving into adjacent markets like DEX (digital experience management). So far, our approach of relying on acquisitions to strengthen our core three markets is working well for us. To Jeff's point, strengthening what we have in order to be a top vendor in these markets is working, delivering strong, differentiated value to our customers.



Disrupting Product Designing with the Much-Needed AI Makeover – Analytics Insight

Posted: at 11:06 pm

The product designing sector is creating new opportunities as it receives an AI transformation

Over the last couple of years, AI has been making great strides across distinct global industries. AI technologies are integrated with businesses to promote efficiency and automate rigorous tasks and other activities relating to stocks, finance, marketing and healthcare, often through smart devices. Tech experts believe that between 2040 and 2050, AI will be capable of performing intellectual tasks that were traditionally only carried out by humans. Currently, artificial intelligence is creating a myriad of opportunities for businesses and global industries to achieve the best standards of quality while delivering consumer products or services. Similarly, AI has also transformed product designing and development to quite an extent. AI can help solve issues in the field of creativity and solve complex problems. When it comes to product designing and development, the role of AI cannot be ignored. Integrating AI in product designing and development has entirely transformed the dynamics of the relationship between businesses and consumers.

From startups to large enterprises, everyone is racing to get their new products launched, and AI and machine learning are making robust contributions to this process. Rapid advances in AI-based applications, products, services, and platforms have also driven the consolidation of the IoT market. The IoT platforms provide concentrated solutions for business challenges in vertical markets that stand the best chance of surviving the upcoming IoT advancements. Since AI and ML are getting ingrained in product designing, more related platforms and products now need to adapt to the upcoming circumstances. An advanced and efficient product design company will integrate robust IoT products and AI services to ensure that the customers enjoy the best quality products and the enterprise teams seamlessly retain more customers.

Without sounding too futuristic or advanced, it is quite safe to say that AI will most likely outrun human resources in terms of intellectual activity processing in the near future. One of the many critical aspects of the phenomenon of AI interfering in human lives and experiences is that it has enhanced customer expectations for high-quality products. As a result, the global market is drenched in competitiveness as enterprises strive towards better products with superior design and enhanced performance standards. And slowly, it has turned into a mandate instead of being just a necessity.

Major tech companies are using AI to reduce energy and financial costs while designing and developing new products. AI can also help human creative leads in product designing by taking the mundane and rigorous tasks off their hands. The technology is meant to help professionals have an easier, more fulfilling life, not take it away. Furthermore, one of the toughest phases of new product development is designing its user experience. To ensure the success of a product, it is crucial for enterprise leaders to ensure that the product is relatable enough for the users. The design process requires massive amounts of creative muscle, especially when the team must intricately think about how the product will be used, validate those ideas, and create something out of the box. An AI-driven brainstorming tool can be useful for such purposes.

Besides this, AI can let the team know during the design phase whether a specific design will succeed or fail. The system can explore the proposed user flow and determine whether a user can complete the desired action. This saves the company from having to build multiple iterations of a product for testing.

AI is here to stay. Whether an industry is tech-based or not, AI-driven systems and algorithms will create groundbreaking innovations and lead global enterprises toward greater profitability, helping businesses reach new highs.


1 Artificial Intelligence Growth Stock to Buy Now and Hold for the Long Term – The Motley Fool

Posted: at 11:06 pm

Artificial intelligence (AI) promises to be one of the most transformative technologies of our time. It has already proven it can reliably complete complex tasks almost instantaneously, eliminating the need for days or even weeks of human input in many cases.

The challenge for companies developing this advanced technology is building a business model that can deliver it efficiently since AI is a brand-new industry with little existing precedent. That's what makes C3.ai (NYSE: AI) a trailblazer, as it's the first platform AI provider helping companies in almost any industry access the technology's benefits.

C3.ai just reported its fiscal 2022 third-quarter earnings result, and it revealed continued growth across key metrics, further cementing the case for owning its stock for the long run.


As more of the economy transitions into the digital realm, a growing number of companies will find themselves with access to game-changing tech like artificial intelligence. In the second quarter of fiscal 2022, C3.ai said it was serving 14 different industries, double the number from the corresponding quarter in the previous year. It indicates that more sectors are already proactively seeking the benefits of AI.

One of those sectors is oil and gas, which represents the largest portion of C3.ai's total revenue. The company has a long-standing partnership with oil giant Baker Hughes. Together, the two companies have developed a suite of AI applications to predict critical equipment failures and reduce carbon emissions in drilling and production operations.

Shell is a core customer of these applications, and it's using them to monitor 10,000 devices and 23 large-scale oil assets, with the technology processing 1.3 trillion predictions per month.

In the recent Q3 of fiscal 2022, C3.ai revealed a new partnership with the U.S. Department of Defense worth $500 million over the next five years. It's designed to accelerate the adoption of AI applications across the defense segment of the federal government.

But some of C3.ai's most impressive partnerships are those with tech behemoths like Microsoft and Alphabet's Google. They're collaborating with C3.ai to deploy AI applications in the cloud to better serve their customers in manufacturing, healthcare, and financial services, among other industries.

From the moment a potential customer engages C3.ai, it can take up to six months to deploy their AI application. Therefore, it's important to watch the company's customer count as it can be a leading indicator for revenue growth in the future.

In fiscal Q3 2022, C3.ai reported having 218 customers, which was an 81% jump over Q3 2021. Over the same period, remaining performance obligations (which are expected to convert to revenue in the future) climbed by 90% to $469 million.

Since quarterly revenue grew a more modest 42% in the same time span, both of the above metrics hint at a potential revenue-growth acceleration over the next few years. The company has also raised its sales guidance twice so far in the first nine months of fiscal 2022, albeit by just 2% in total, now estimating $252 million in full-year revenue.

C3.ai has been a publicly traded company for a little over a year, listing in December 2020. It quickly rallied to its all-time high stock price of $161 before enduring a painful 87% decline to the $20 it trades at today. The company hasn't grown as quickly as investors anticipated, and it also hasn't achieved profitability yet.

But right now, C3.ai trades at a market valuation of $2.1 billion, and it has over $1 billion in cash and short-term investments on its balance sheet. Put simply, investors are only attributing a value of around $1 billion to its AI business despite over $250 million in revenue expected by the close of fiscal 2022 and a portfolio of A-list customers.

Moreover, C3.ai has a gross profit margin of 80%, affording it plenty of optionality when it comes to managing expenses. This places it in a great position to eventually deliver positive earnings per share to investors once it achieves a sufficient level of scale.

While C3.ai stock carries some risk, especially in the middle of the current tech sell-off, by many accounts it's beginning to look like an attractive long-term bet. Advanced technologies like AI will only grow in demand over time, and this company is a great way to play that trend.



AlphaFold, GPT-3 and How to Augment Intelligence with AI (Pt. 2) – A16Z Future

Posted: at 11:06 pm

As we saw in Part 1, it's possible to get started on a number of important problems by building augmentation infrastructure around the strengths of an artificial intelligence model. Does the model generate text? Build around text. Can it accurately predict 3D structures? Build around 3D structures. But taking an artificial intelligence system completely at face value comes with its own limitations.

Douglas Engelbart used the term co-evolution to describe the way in which humanity's tools and its processes for using those tools adapt and evolve together. Models like GPT-3 and DALL-E represent a large step in the evolution of tools, but it's only one half of the equation. When you build around the model, without also building new tools and processes for the model, you're stuck with what you get. The model's weaknesses become your weaknesses. If you don't like the result, it's up to you to fix it. And since training any of the large, complex AI systems we've discussed so far requires massive data and computation, you likely don't have the resources to change the model all that much.

This is a bit of a conundrum: On the one hand, we don't have the resources to change the model significantly. On the other hand, we need to change the model, or at least come up with better ways of working with it, to solve for our specific use case. For prompt-based models like GPT-3 and DALL-E, the two easiest ways to tackle this fixed-model conundrum are prompt-hacking and fine-tuning, neither of which is particularly efficient:

The goal of augmented intelligence is to make manual processes like these more efficient so humans can spend more time on the things they are good at, like reasoning and strategizing. The inefficiency of prompt-hacking and fine-tuning shows that the time is ripe for a reciprocal step in process evolution. So, in this section, we'll explore some examples of a new theme, building for the model, and the role it plays in creating more effective augmentation tools.

As a working example, let's say you're an up-and-coming game developer working on the next online gaming franchise. You've seen how games like Call of Duty and Fortnite have created massively successful (and lucrative) marketplaces for custom skins and in-game assets, but you're a resource-constrained startup. So, instead of developing these assets yourself, you offload content generation to DALL-E, which can generate any number of skins and asset styles for a fraction of the cost. This is a great start, but prompt-hacking your way to a fully stocked asset store is inefficient.

To make things less manual, you can turn prompting over to a text generation model like GPT-3. The key to the virality of a game like Fortnite is the combination of a number of key game assets (weapons, vehicles, armor) with a variety of unique styles and references, such as eye-catching patterns/colors, superheroes, and the latest pop culture trends. When you seed GPT-3 with your asset types, it can generate any number of these combinations into a prompt. Pass that prompt over to DALL-E, and out comes your skin design.
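A minimal sketch of that handoff is below. The asset list, theme seed, and the two generate_* functions are invented stand-ins for calls to the respective models, not real API signatures.

```python
# Sketch of the GPT-3 -> DALL-E handoff described above. The generate_* functions
# are stubs standing in for calls to the two models; names and data are invented.
ASSET_TYPES = ["rifle skin", "hoverboard", "chest armor"]          # your game's assets
THEMES = "eye-catching patterns, superheroes, current pop-culture trends"

def generate_text(prompt: str) -> str:
    # Stub for a GPT-3 completion call; a real system would query the model here.
    return "a neon-graffiti rifle skin with comic-book shading"

def generate_image(prompt: str) -> bytes:
    # Stub for a DALL-E call; a real system would return rendered image bytes.
    return b"<image bytes>"

def make_skin(asset: str) -> bytes:
    # Step 1: have the text model invent a themed description for this asset type.
    dalle_prompt = generate_text(f"Describe a '{asset}' for an online shooter, mixing in {THEMES}.")
    # Step 2: hand that description to the image model to render the skin.
    return generate_image(dalle_prompt)

skins = [make_skin(a) for a in ASSET_TYPES]
```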

This GPT-3 to DALL-E handoff sounds great, but it only really works if it produces stimulating, high-quality skin designs for your users. Combing through each of the design candidates manually is not an option, especially at scale. The key here is to build tools that let the marketplace do the work for you. Users flock to good content and have no patience for bad content; apps like TikTok are based entirely on this concept. User engagement will therefore be a strong signal for which DALL-E prompts are working (i.e., leading to interesting skin designs) and which are not.

To let your users do the work for you, you'll want to build a recursive loop that cross-references user activity with each prompt and translates user engagement metrics into a ranking of your active content prompts. Once you have that, normal A/B testing will automatically surface prompt insights and you can prioritize good prompts, remove bad prompts, and even compare the similarity of newly generated prompts to those you have tested before.
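A rough sketch of that engagement-to-ranking translation is below; the event log, weights, and retention threshold are arbitrary assumptions for illustration.

```python
# Sketch of engagement-driven prompt ranking; metrics, weights, and thresholds are invented.
from collections import defaultdict

# Raw events logged whenever a user interacts with a skin: (prompt_id, interaction type).
events = [
    ("p1", "view"), ("p1", "equip"), ("p1", "purchase"),
    ("p2", "view"), ("p2", "view"),
    ("p3", "view"), ("p3", "equip"),
]
WEIGHTS = {"view": 1, "equip": 3, "purchase": 10}   # arbitrary engagement weights

scores = defaultdict(int)
for prompt_id, interaction in events:
    scores[prompt_id] += WEIGHTS[interaction]

ranking = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
keep   = [p for p, s in ranking if s >= 5]   # prompts to keep generating content from
retire = [p for p, s in ranking if s < 5]    # prompts to drop from rotation
print(ranking, keep, retire)
```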

But that's not all: the same user engagement signal can also be used for fine-tuning.

Let's move one more step backward and focus on GPT-3's performance. As long as you keep track of the inputs you are giving to GPT-3 (asset types + candidate themes), you can join that data with the quality rankings you have just gotten from further down in your content pipeline to create a dataset of successful and unsuccessful input-output pairs. This dataset can be used to fine-tune GPT-3 on game-design-focused prompt generation, making it even better at generating prompts for your application.
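One way to assemble that dataset is sketched below, assuming a prompt/completion JSONL layout (a common fine-tuning format; adapt it to whatever your provider actually expects). The example records and threshold are invented.

```python
# Sketch of turning ranked results into a fine-tuning set for the prompt generator.
# The prompt/completion JSONL layout is an assumption; records and threshold are invented.
import json

# (input given to GPT-3, prompt it produced, engagement score from the ranking step)
history = [
    ("asset: rifle skin; themes: superheroes", "a cel-shaded rifle skin with comic panels", 14),
    ("asset: hoverboard; themes: pop culture", "a plain grey hoverboard", 1),
]
GOOD_THRESHOLD = 5   # arbitrary cutoff separating successful from unsuccessful prompts

with open("prompt_finetune.jsonl", "w") as f:
    for model_input, generated_prompt, score in history:
        if score >= GOOD_THRESHOLD:   # keep only the prompts users actually engaged with
            f.write(json.dumps({"prompt": model_input, "completion": " " + generated_prompt}) + "\n")
```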

This user-driven cyclical pipeline helps DALL-E generate better content for your users by surfacing the best prompts, and helps GPT-3 generate better prompts by fine-tuning on examples generated from your own user activity. Without having to worry about prompt-hacking and fine-tuning, you are free to work on bigger-ticket items, like which assets are next in the pipeline, and which new content themes might lead to even more interesting skins down the road.

There also exists a huge opportunity to build middleware connecting creative industries and creative, personalized content-generating models. AI models and the services they enable (e.g., Copilot) could help for use cases that require novel content creation. This, again, requires using our understanding of the AI system and how it works to think of ways in which we can modify its behavior ever so slightly to create new and better experiences.

Imagine you are building a service for learning to code that uses Copilot under the hood to generate programming exercises. Out of the box, Copilot will generate anywhere from a single line of code to a whole function, depending on the docstring it's given as input. This is great: you can construct a bunch of exercises really quickly!

To make this educational experience more engaging, though, you'll probably want to tailor the exercises generated by Copilot to the needs and interests of your users, personalizing across dimensions like the learner's interests and experience level.

Generating docstrings yourself is tedious and manual, so personalizing Copilot's outputs should be as automated as possible. Well, we know of another AI system, GPT-3, that is great at generating virtually any type of text, so maybe we can offload the docstring creation to GPT-3.

This can be done in one of two ways. One approach is to ask GPT-3 to generate generic docstrings that correspond to a particular skill or concept (e.g., looping, recursion, etc.). With one prompt, you can generate any number of boilerplate docstrings. Then, using a curated list of target themes and keywords (a slight manual effort), you can swap placeholder terms in the boilerplate to match your target audience. Alternatively, you can feed both the target skills/concepts and the themes to GPT-3 at the same time and let it tailor the docstrings to your themes automatically.
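Here is a rough sketch of the substitution step in the first approach. The boilerplate docstrings, themes, and keyword mappings are invented placeholders; in practice the boilerplate would come from GPT-3 rather than being hard-coded.

```python
from string import Template

# One boilerplate docstring per concept (in practice, generated by a text
# model); the $placeholders are the pieces we personalize per learner.
BOILERPLATE = {
    "looping": Template(
        "Write a function that iterates over a list of $item and returns "
        "how many of them are $condition."
    ),
    "recursion": Template(
        "Write a recursive function that counts how many $item appear in a "
        "nested list of $item."
    ),
}

# Curated themes mapped to concrete substitutions: the 'slight manual effort'.
THEMES = {
    "space":  {"item": "planets", "condition": "habitable"},
    "gaming": {"item": "loot drops", "condition": "legendary"},
}

def make_exercise(concept: str, theme: str) -> str:
    """Turn a (concept, theme) pair into a personalized docstring for Copilot."""
    return BOILERPLATE[concept].substitute(THEMES[theme])

print(make_exercise("looping", "space"))
```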

The success of this idea, of course, comes down to the quality of GPT-3's content. For one, you'll want to make sure the exercises generated by this GPT/Copilot combination are age-appropriate. Perhaps an aligned model like InstructGPT would be better here.

We are now over a decade into the latest AI summer. The flurry of activity in the AI community has led to incredible breakthroughs that will have significant impact across a number of industries and, possibly, on the trajectory of humanity as a whole. Augmented intelligence represents an opportunity to kickstart this progress, and all it takes is a slight reframing of our design principles for building AI systems. In addition to building models to solve problems, we can think of new ways to build infrastructure around models and for models, and even of ways in which foundation models might work together (like GPT-3 and DALL-E, or GPT-3 and Copilot).

Maybe one day we will be able to offload all of the dirty work of life to some artificial general intelligence and live hakuna-matata style, but until that day comes we should think of Engelbart, focusing less on machines that replace human intelligence and more on those that are savvy enough to enhance it.

Posted March 8, 2022


Go here to read the rest:

AlphaFold, GPT-3 and How to Augment Intelligence with AI (Pt. 2) - A16Z Future

Posted in Ai | Comments Off on AlphaFold, GPT-3 and How to Augment Intelligence with AI (Pt. 2) – A16Z Future

Juniper Networks Announces University Research Funding Initiative to Advance Artificial Intelligence and Network Innovation – Yahoo Finance

Posted: at 11:06 pm

Research to focus on AI, ML, Routing and Quantum Networking for advanced communication

SUNNYVALE, Calif., March 08, 2022--(BUSINESS WIRE)--Juniper Networks (NYSE: JNPR), a leader in secure, AI-driven networks, today announced a university funding initiative to fuel strategic research to advance network technologies for the next decade. Juniper's goal is to enable universities, including Dartmouth, Purdue, Stanford and the University of Arizona, to explore next-generation network solutions in the fields of artificial intelligence (AI) and machine learning (ML), intelligent multipath routing and quantum communications.

As organizations encounter new levels of complexity across enterprise, cloud and 5G networks, investing in these technologies now is critical to replacing tedious, manual operations as networks become mission-critical for nearly every business. This can be done through automated, closed-loop workflows that use AI- and ML-driven operations to scale and cope with the exponential growth of new cloud-based services and applications.

The universities Juniper selected in support of this initiative are now beginning the research that, once completed, will be shared with the networking community. In addition, Juniper joined the Center for Quantum Networks Industrial Partners Program to fund industry research being spearheaded by the University of Arizona.

Supporting Quotes:

"Cloud services will continue to proliferate in the coming years, increasing network traffic and requiring the industry to push forward on innovation to manage the required scale out architectures. Junipers commitment to deliver better, simpler networks requires us to engage and get ahead of these shifts and work with experts in all areas in order to trailblaze. I look forward to collaborating with these leading universities to reach new milestones for the network of the future."

- Raj Yavatkar, CTO, Juniper Networks

"With internet traffic continuing to grow and evolve, we must find new ways to ensure the scalability and reliability of networks. We look forward to exploring next-generation traffic engineering approaches with Juniper to meet these challenges."

- Sonia Fahmy, Professor of Computer Science, Purdue University

"It is an exciting opportunity to work with a world-class partner like Juniper on cutting edge approaches to next-generation, intelligent multipath routing. Dartmouth's close collaboration with Juniper will combine world-class skills and technologies to advance multipath routing performance."

- George Cybenko, Professor of Engineering, Dartmouth College

"As network technology continues to evolve, so do operational complexities. The ability to utilize AI and machine learning will be critical in keeping up with future demands. We look forward to partnering with Juniper on this research initiative and finding new ways to drive AI forward to make the network experience better for end users and network operators."

- Jure Leskovec, Associate Professor of Computer Science, Stanford University

"The internet of today will be transformed through quantum technology which will enable new industries to sprout and create new innovative ecosystems of quantum devices, service providers and applications. With Juniper's strong reputation and its commitment to open networking, this makes them a terrific addition to building this future as part of the Center for Quantum Networks family."

- Saikat Guha, Director, NSF Center for Quantum Networks, University of Arizona

About Juniper Networks

Juniper Networks is dedicated to dramatically simplifying network operations and driving superior experiences for end users. Our solutions deliver industry-leading insight, automation, security and AI to drive real business results. We believe that powering connections will bring us closer together while empowering us all to solve the world's greatest challenges of well-being, sustainability and equality. Additional information can be found at Juniper Networks (www.juniper.net) or connect with Juniper on Twitter, LinkedIn and Facebook.

Juniper Networks, the Juniper Networks logo, Juniper, Junos, and other trademarks listed here are registered trademarks of Juniper Networks, Inc. and/or its affiliates in the United States and other countries. Other names may be trademarks of their respective owners.

View source version on businesswire.com: https://www.businesswire.com/news/home/20220308005449/en/

Contacts

Dan Muñoz, Juniper Networks, +1 (408) 936-2145, dmunoz@juniper.net

Continue reading here:

Juniper Networks Announces University Research Funding Initiative to Advance Artificial Intelligence and Network Innovation - Yahoo Finance

Posted in Ai | Comments Off on Juniper Networks Announces University Research Funding Initiative to Advance Artificial Intelligence and Network Innovation – Yahoo Finance

PNNL and Micron Partner to Push Memory Boundaries for HPC and AI – insideHPC

Posted: at 11:06 pm

Researchers at Pacific Northwest National Laboratory (PNNL) and Micron are developing an advanced memory system to support AI for scientific computing. The work is designed to address AI's insatiable demand for live data to push the boundaries of memory-bound AI applications by connecting memory across processors in a technology strategy utilizing the Compute Express Link (CXL) data interface, according to a recent edition of the ASCR Discovery publication.

"Most of the performance improvements have been on the processor side," said James Ang, PNNL's chief scientist for computing and the lab's project leader. "But recently, we've been falling short on performance improvements, and it's because we're actually more memory-bound. That bottleneck increases the urgency and priority in memory resource research."

Boise, Idaho-based memory and storage semiconductor company Micron is collaborating with PNNL, in Richland, Wash., on this effort, sponsored by the Advanced Scientific Computing Research (ASCR) program in the Department of Energy (DOE), to help assess emerging memory technologies for DOE Office of Science projects that employ artificial intelligence. The partners say they will apply CXL to join memory from various processing units deployed for scientific simulations.

Tony Brewer, Micron's chief architect of near-data computing, says the collaboration aims to blend old and new memory technologies to boost high-performance computing (HPC) workloads. "We have efforts that look at how we could improve the memory devices themselves and efforts that look at how we can take traditional high-performance memory devices and run applications more efficiently."

Part of the strategy is to implement a centralized memory pool, which would help mitigate the issue of over-provisioning memory.

In HPC systems that deploy AI, high-performance but low-capacity memory (typically gigabytes) is tightly coupled to the GPUs, whereas a conventional system with low-performance but high-capacity memory (terabytes) is loosely coupled via the traditional HPC workhorses, central processing units (CPUs), PNNL said. With PNNL, Micron will create proof-of-concept shared GPU and CPU systems and combine them with additional external storage devices in the hundreds-of-terabytes range. Future systems will need rapid access to petabytes of memory, a thousand times more capacity than a single GPU or CPU offers.

"The intent is to create a third level of memory hierarchy," Brewer explains. "The host would have some local memory, the GPU would have some local memory, but the main capacity memory is accessible to all compute resources across a switch, which would allow scaling of much larger systems." This unified memory would let researchers using deep-learning algorithms run a simulation while its results simultaneously feed back into the algorithm.

A centralized memory system could also benefit operations because an algorithm or scientific simulation can share data with, say, another program that's tasked with analyzing those data. These converged application workflows are typical in DOE's scientific discovery challenges. "Sharing memory and moving it around involves other technical resources," says Andrés Márquez, a PNNL senior computer scientist. "This centralized memory pool, on the other hand, would help mitigate the issue of over-provisioning the memory."

Because AI-aided data-driven science drives up demand for memory, an application can't afford to partition and strand the memory. The result: memory keeps piling up underutilized at various processing units. "Having the capability of reducing that over-provisioning and getting more bang out of your buck by sharing that data across all those devices and different stages of workflow cannot be overemphasized," Márquez explained.

Some of PNNL's AI algorithms can underperform when memory is slow to access, Márquez says. In PNNL's computational chemistry group, for instance, researchers use AI to study water's molecular dynamics to see how it aggregates and interacts with other compounds. Water is a common solvent for commercial processes, so running simulations to understand how it acts with a molecule of interest is important. A separate research team at Richland is using AI and neural networks to modernize the power grid's transmission lines.

Micron's Brewer said he looks forward not only to developing tools with PNNL but also to their commercial use by any company working on large-scale data analysis. "We are looking at algorithms," he said, "and understanding how we can advance these memory technologies to better meet the needs of those applications."

PNNL's computational science problems provide Micron a way to observe the applications that will most stress the memory. Those findings will help Brewer and colleagues develop products that help industry meet its memory requirements.

Ang, too, said he expects the project to help AI at large, pointing out that the Micron partnership isn't just a specialized one-off for DOE or scientific computing. "The hope is that we're going to break new ground and understand how we can support applications with pooled memory in a way that can be communicated to the community through enhancements to the CXL standard."

See the original post here:

PNNL and Micron Partner to Push Memory Boundaries for HPC and AI - insideHPC

Posted in Ai | Comments Off on PNNL and Micron Partner to Push Memory Boundaries for HPC and AI – insideHPC

Pittsburgh May Use AI to Fine Those Who Pass School Buses – Government Technology

Posted: at 11:06 pm

(TNS) The Pittsburgh Public Schools could soon implement an artificial intelligence system that captures information from vehicles that illegally pass stopped school buses and sends fines to their owners in an attempt to deter repeat infractions.

The city school board this month could agree to launch a pilot program with BusPatrol, a tech company that uses artificial intelligence and machine learning to promote safety for students traveling to and from school.

According to the company, which is based in Lorton, Va., and recently opened a Pennsylvania office in the city of Allentown, more than 136,000 school bus-related injuries and more than 1,000 fatalities have occurred in the past decade.

BusPatrol has partnered with school systems in several states and began working with districts in Pennsylvania a couple of years ago. The company installs software on school buses that has the capability of monitoring the vehicle's surroundings.

If a vehicle illegally passes a stopped bus, the device collects video of the offense and other information, including a license plate number, and turns it into an evidence package that it sends to police. If police then approve the citation, BusPatrol prints it and mails it to the vehicle owner, who can go online and view a video of their vehicle passing a stopped bus.

BusPatrol said the program has proven to increase safety because 98% of offenders do not get cited by the company a second time.

BusPatrol's Mr. Souliere said the cost of the program is completely paid for by ticket revenue, and the school district gets a large chunk of each $300 citation. A conservative estimate showed the district would receive $500,000 per 100 buses per year that it can invest back into schools, he said. About 500 buses carry students throughout the district, which at that rate would amount to roughly $2.5 million annually.

School board member Pam Harbin said she had a "very strong reaction" to the idea that the district would have a program paid for by fining people who may simply make a mistake.

"I'm not going to agree to a system that's going to put people in that position," Ms. Harbin said. "I would rather have months and months of education and pay for that to improve behavior rather than saying we're going to harm people in a different way."

Mr. Souliere said the citation is a civil monetary penalty, no points are put on an offender's license, and vehicle insurance is not impacted. If necessary, he said, the fine can be paid over a certain period of time through a payment plan. But he noted that failure to pay the fine could result in the suspension of a license or plates not being renewed.

Before citations start being issued, the company blitzes media in communities where the program is implemented to educate drivers and remind them of the law. The company places informational television commercials, works with local media and creates educational videos for schools and other entities.

School board member Tracey Reed said she was concerned about police using the information that the system collects for other purposes, such as fining individuals for other issues with their vehicles. Mr. Souliere, though, said police are not allowed to do that.

"The scope of use by law is exclusively for the enforcement of this specific stop arm infraction," he said. "In fact, if someone were to get caught by a police officer in passing a stopped school bus, the civil penalty would no longer apply it would be the criminal one that would apply."

The software has the ability to film inside the bus and can provide video in the instance of a fight or an accident, according to Mr. Souliere.

Michael McNamara, the district's chief operating officer, said that if the board approves the pilot program, the software will be placed on about 20 buses in various areas of the city. No citations would be issued during the pilot.

If the board approves the full program after the pilot period, the technology would be installed on all school buses with stop arms that serve Pittsburgh students.

"If this is successful and the board is on board with it no pun intended there we would then deploy [the software] over the summer to the rest of our buses that have stop arms, and then start collecting ticket revenue beginning the first day of the new school year next year," Mr. McNamara said. "No tickets would be issued, only warnings, the rest of this year, and then we would be able to educate the rest of the summer and have full deployment first day of school next fall."

© 2022 the Pittsburgh Post-Gazette. Distributed by Tribune Content Agency, LLC.

Go here to see the original:

Pittsburgh May Use AI to Fine Those Who Pass School Buses - Government Technology

Posted in Ai | Comments Off on Pittsburgh May Use AI to Fine Those Who Pass School Buses – Government Technology