Category Archives: Ai
How Ivanti hopes to redefine cybersecurity with AI – VentureBeat
Posted: March 8, 2022 at 11:06 pm
Widening gaps in cybersecurity tech stacks are leaving enterprises vulnerable to debilitating attacks. Making matters worse, there are often conflicting endpoint, patch management and patch intelligence systems that each support only a small subset of all devices. CISOs tell VentureBeat that the gaps in their cybersecurity tech stacks are getting wider because their legacy systems can't integrate unified endpoint management (UEM), asset management, IT service management (ITSM) and cost management data in real time to optimize cybersecurity deterrence strategies and spending.
Ivanti's speed in using AI and machine learning to take on these challenges is noteworthy. In fewer than eighteen months, the company has delivered its AI-based Ivanti Neurons platform to enterprise customers and continued to build on it. Ivanti first introduced the Neurons platform in July 2020, empowering organizations to autonomously self-heal and self-secure devices and self-service end users.
Since then, Ivanti has released updates and added innovations to the platform on a quarterly basis to further help customers quickly and securely embrace the future of work. For example, Ivanti recently released Ivanti Neurons for Zero Trust Access, the first AI-based solution to support organizations fine-tuning their zero trust frameworks. The company also introduced Ivanti Neurons for Patch Management, a cloud-native solution that enables IT teams to efficiently prioritize and remediate the vulnerabilities that pose the most danger to their organizations.
In the same period, Ivanti acquired MobileIron, Pulse Secure, Cherwell, RiskSense, and the Industrial Internet of Things (IIoT) platform owned by the WIIO Group. These acquisitions have doubled Ivanti's total addressable market, reaching $30 billion this year and a projected $60 billion by 2025. Ivanti has 45,000 customers and provides cybersecurity systems and platforms for 96 of the Fortune 100.
Ivanti is successfully scaling its AI-based Neurons platform across multiple gaps in enterprises' cybersecurity tech stacks. VentureBeat recently spoke with Ivanti's CEO, Jeff Abbott, and president and chief product officer Nayaki Nayyar to gain further insight into Ivanti's growth and success. The company's executives detailed how Ivanti's approach to integrating AI and machine learning into its Neurons platform will help its customers anticipate, deter and learn from a wide variety of cyberattacks.
VentureBeat: Why do new customers choose an AI-based solution like Ivanti Neurons over competing solutions in the market?
Jeff Abbott: We're looking to AI, machine learning, and related technologies to create a richer experience for our customers while continually delivering innovative and valuable new capabilities. We're leveraging AI and machine learning bot technology to solve common challenges that our customers are facing. The example I like is discovery: the process of understanding what's on a network. I talk to customers all the time, and one that comes to mind is a superintendent of a school district who said, "Every six months we send out teams to go to all the various locations of various schools and see what's on the network physically, or we run protocols on site. Now with your bot technology, we can do that on a nightly basis and discover what's there." That's an example of how our unified platform increases visibility for our customers, while continually staying on top of security standards.
It's fascinating to consider all the opportunities the metadata from UEM, IT service management (ITSM) / IT asset management (ITAM), and cost management systems provide. Having the metadata from all three systems on a single pane of glass becomes very interesting in terms of what we can tell customers about their operations, down to the device level. Creating a data lake based on the metadata becomes a powerful tool. Having a broad base of contextual data to analyze with the Ivanti Neurons platform enables us to gain a new understanding of what's happening. We're relying on AI and machine learning in the context of the Ivanti Neurons platform to scale from providing basic information up to contextually intelligent insights our customers can use to grow their businesses.
Nayaki Nayyar: I was in the oil and gas industry for 15 years, working with Shell and Valero Energy for many years. So, I've lived in the customer's shoes and can empathize with three big problems they're facing today, regardless of the industry they are in.
The first is the explosive growth of edge devices, including mobile devices, laptops, desktops, wearables and, to some extent, IoT devices. That's a big challenge that everyone has to address. Then the second problem is ransomware. Not a single day goes by without a ransomware attack. And the third is how to provide a great customer experience that equals the quality of everyday consumer experiences. Solving how to bring a consumer-grade experience into an enterprise context is an area we're prioritizing today.
Our goal is to automate tasks beneath the user experience layer of our applications so our customers don't have to worry about them; let AI, machine learning, and deep learning capabilities heal endpoints, using intelligent bots for endpoint discovery, self-healing, asset management and more. We want to give customers an experience where routine tasks are managed autonomously, so they don't have to manage them themselves. The Ivanti Neurons platform is designed to take on these challenges and more.
VentureBeat: How are you fine-tuning your algorithms to fight ransomware so that your customers don't have to become data scientists or consider recruiting a data scientist?
Nayaki Nayyar: I will highlight two distinct AI capabilities we have that address your exact question on preventing ransomware. We have what we call Ivanti Neurons for Edge Intelligence, which provides a 360-degree view of all the devices across a network, and using NLP, we've designed the platform so it's flexible enough to respond to questions and queries. An example would be, "How many devices on my network are not patched correctly, or have not been patched for these specific vulnerabilities?" The Ivanti Neurons platform will automatically respond to simple text-based and keyword searches. So, our customers can ask a question using natural language, and the system will respond to it.
We've also developed deep expertise in text ranking. We mine data from various social channels, including Twitter, Reddit, and other publicly available sources. We then run sentiment analysis on the Common Vulnerabilities and Exposures (CVEs) that are trending, and on their patches. Then we provide those insights in Ivanti Neurons for Patch Intelligence. Using NLP, sentiment analysis, and AI, Ivanti Neurons for Patch Intelligence gives our customers' administrators the insights they need to prioritize which CVEs pose the highest risks to their organization and then remediate those issues immediately. That doesn't require our customers to employ data scientists. All of that is embedded into our stack, and we make it simple for customers to consume.
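To make the text-ranking idea concrete, here is a minimal sketch of the kind of scoring such a pipeline might perform. This is not Ivanti's implementation; the CVE mention counts, sentiment values, and weighting below are invented purely for illustration.

```python
# Toy CVE prioritization: rank CVEs by how much they trend on social
# channels and how negative the chatter about them is.
# All data and the scoring formula are made up for illustration.
cves = [
    {"id": "CVE-2021-44228", "mentions": 930, "sentiment": -0.8},
    {"id": "CVE-2022-0001",  "mentions": 120, "sentiment": -0.3},
    {"id": "CVE-2020-1350",  "mentions": 45,  "sentiment": -0.6},
]

def risk_score(cve: dict) -> float:
    # More mentions and more negative sentiment -> higher priority.
    return cve["mentions"] * (1.0 - cve["sentiment"])

for cve in sorted(cves, key=risk_score, reverse=True):
    print(f'{cve["id"]}: score={risk_score(cve):.0f}')
```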
Jeff Abbott: We're also constantly doing research on ransomware and vulnerabilities. In fact, we just released our Ransomware Spotlight Year-End Report. The analysis shows that the bad actors target organizations that are not keeping up with CVEs.
Not keeping up with zero-day vulnerabilities, and not defining a plan for addressing them, can make any organization a gazelle in the middle of the field. So, as Nayaki said, we're providing patch intelligence to help our customers prioritize which vulnerabilities are most important to address first. One of the factors that led us to acquire RiskSense was its extensive detection data set. We're using that data to provide forward intelligence on open vulnerabilities and help our customers anticipate and fix them quickly. We're seeing that our mid-tier and SMB accounts need patch intelligence as much as our enterprise customers do.
VentureBeat: How does AI deliver measurable value for customers? How do you quantify that and know you are meeting expectations with customers, that you're delivering value?
Nayaki Nayyar: For many years, solving security, IT or asset issues was a reactive process. A customer called or filed a ticket right after an issue happened. The ticket was created, then routed to the right service desk agent to solve it. But that took too much time; it could be ten days or even a month before the ticket was resolved.
The Ivanti Neurons platform is designed to detect security, IT, asset, endpoint, or discovery issues before the end user even knows an issue will happen. Our bots are also designed to be self-healing: they can detect whether a configuration drift has happened on a device, or whether there is a security anomaly or a performance issue. Bots automatically heal those issues, so end users don't even have to create a ticket and wait for it to be routed to a resolution.
If we can help customers reduce the number of issues by 30% or more before end users even create tickets, that represents a massive cost saving, not to mention the speed and accuracy at which those services are provided.
VentureBeat: Which customer needs are the most urgent and best met by expanding the AI capabilities of your Ivanti Neurons platform?
Nayaki Nayyar: Today, discovering unknown assets or endpoints is an urgent, high-priority requirement. The greatest challenge is blind-spot detection within an organization. We've architected Ivanti Neurons to detect blind spots across enterprise networks. Our customers are using Neurons to identify assets regardless of location, whether in data centers, the cloud, endpoints, or IoT.
Discovery is most often step one for our customers on the Ivanti Neurons platform because it helps them turn their unknown assets into known assets immediately. They don't need to remediate and self-heal devices right away; that can come later in the asset cycle. Ivanti Neurons for Discovery is a critically important solution that customers benefit from immediately and can then expand upon.
Most customers have what we call a Frankenstein's mess of tools and technologies to manage their devices. By combining our Neurons platform with the technologies from our recently acquired companies, we're now providing a single pane of glass, so an analyst can log in, see what device types are on the network, and manage any endpoint security or asset management problems right from there.
Jeff Abbott: Patching is overly complex and time-consuming, and that's a huge problem our customers also face. Ivanti Neurons for Patch Management and Patch Intelligence help solve those challenges for our customers. We're focused on improving user experiences to make AI and NLP-based patch management and intelligence less intimidating. Our focus is specifically on helping our customers keep up with the latest zero-day vulnerabilities and CVEs that could impact them. We focus on solving the biggest risk areas first using Ivanti Neurons, alleviating the time-consuming work our customers would otherwise have to go through.
VentureBeat: What are the Ivanti Neurons platform's top three design goals, and how do you benchmark success for those?
Jeff Abbott: Our primary goals are for the Ivanti Neurons platform to discover devices, and then self-heal and self-secure them using AI-based workflows and technologies. Our internal research shows that customers using Neurons are experiencing over 50% reductions in support call times. They're also eliminating duplicate work between IT operations and security teams and reducing the number of vulnerable devices by 50%. These stats are all from customer surveys and anonymized actual results. Ivanti Neurons is also contributing to reducing unplanned outages by 63%.
Nayaki Nayyar: Adding to what Jeff said, the entire architecture is container-based. We leverage containers that are cloud-agnostic, meaning we can deploy them anywhere. So, one goal is not just to deploy to the cloud, but also to drop these containers on the edge in the future so that we can process those workloads at the edge, closer to where the data is getting generated.
The platform is also all API-based, so the integration we do within the stack is all based on APIs. This means that our customers don't need to have the entire stack. They can start anywhere and evolve at their own pace. They can start in the security space with patch management and move on from there. Or they can start in service management or discovery. They can start anywhere and evolve everywhere. We also recognize that customers don't need to run only Ivanti's stack; they can use two or three pillars from us alongside systems and platforms from other vendors.
VentureBeat: Do you see customers moving to an AI-based platform to scale zero trust initiatives further out?
Nayaki Nayyar: Yes, we have a large manufacturing customer who was evolving from VPN-based access to zero trust. This is a big paradigm shift. With VPN-based access, you're pretty much giving users access to everything, whereas with a zero-trust approach, you're continuously validating and authenticating every application access. As the customer was switching to zero trust, their employees were running into many access-denied issues. The volume of tickets coming into the service desk spiked by 500%.
The manufacturing customer started using Ivanti Neurons with AI and ML-based bots to detect what kind of access issues users were having and self-heal those issues based on the right amount of access. The ticket volume immediately went down. So, it was a great example of customers evolving beyond VPN to zero trust access; our technology can help customers advance zero-trust and solve access challenges.
VentureBeat: What additional verticals are you looking at beyond healthcare? For example, will there be an Ivanti Neurons for Supply Chain Management, given how constrained supply chains have become over the last twelve to eighteen months?
Nayaki Nayyar: I'm extremely passionate about IoT and what's happening with edge devices today. The transformation that we see at the edge is phenomenal. We're designing support for edge devices into the Ivanti Neurons platform today, giving our customers the flexibility of managing IoT assets.
Healthcare is one of the verticals where we have gone deep into discovering and managing our customers' many healthcare devices, especially those you see in a hospital setting like Kaiser.
Manufacturing facilities and shop floors are another area we are exploring. Our customers have many types of ruggedized IoT devices on the shop floor, and we can apply the same principles of discovery, management, and security to those assets. In the future, we also plan on extending into the telco space. We have large telcos as customers, and they've been asking us to go deeper and deeper into the telco IoT world.
Our telco customers also tell us they would like to see greater support for the ruggedized devices their technicians use in the field. Retailers are also expressing an interest in supporting ruggedized devices, which is an area we're exploring today.
Jeff Abbott: The public sector, comprising federal, state, and local government, has unique requirements that Nayaki and I have discussed several times. Many capabilities for vertical markets are still very horizontal. We're seeing that as organizations discover the nuances of their use of edge computing and edge technology, more specialized vertical market requirements will become dominant. I think we're covering 90% or more of the security requirements now. That's especially the case in discovery, patch management, and patch intelligence.
VentureBeat: How do you integrate an AI-based platform into a legacy system tech stack or infrastructure? What are the most valuable technologies for accomplishing that, for example, APIs?
Nayaki Nayyar: We have a pretty strong connector base with existing systems. I won't call them legacy; we need to coexist with existing systems, as many have been in place for at least 10 to 15 years in many organizations. To accomplish this, we have 300 or more out-of-the-box connectors that our customers, resellers, and partners can leverage. We're committed to continually strengthening our ecosystem of partners to provide customers with the options they need for their unique integration requirements.
VentureBeat: Could you share the top three lessons Ivanti has learned in designing intuitive user experiences that guide users of AI-based applications?
Jeff Abbott: I think the most important lesson learned is to provide every customer, from SMBs to enterprises, with data-driven insights that validate the AI is performing appropriately, ensuring that self-healing, self-servicing, and all supporting aspects of Ivanti Neurons protect customers' assets while also contributing to more efficient ITSM performance.
When it comes to preventing ransomware attacks, the key is to always give users the option of performing an intuitive double-check. One day your organization could be very healthy, but if you are not paying attention to the signals from the AI, the organization could fall victim to an attack. Taking an active position on security, which includes knowing your organization's tools and understanding what they can achieve, is important.
Nayaki Nayyar: User experiences require a three-pronged approach. Start with humans in the loop, recognizing the unique need for contextual intelligence. Next, add augmented AI, and then the last level of maturity is humans out of the loop.
For customers, this translates into taking those three layers of maturity and identifying how and where user experience designs deliver more contextual intelligence. The goal with Ivanti Neurons is to remove as many extraneous interactions with users as possible, saving their time for only the most unique, complex decision trade-offs. Our goal is to streamline routine processes, anticipate potential endpoint security, patch management, and ITSM-related tasks, and handle them before a user sees their impact on productivity.
VentureBeat: With machine learning models so dependent on repetitive learning, how did you design the Ivanti Neurons platform and related AI applications to continually learn from data without requiring customers to have data scientists on staff?
Nayaki Nayyar: We're focused on making Ivanti Neurons as accessible as possible to every user. To achieve that, we've created an Employee Experience Score, a methodology for measuring how effective our customers' experiences on our platform are. Using that data, we can tell which application workflows need the most work to further improve usability and user experiences, and which ones are doing so well that we can use them as models for future development.
We're finding this approach very effective in quantifying whether we're meeting expectations, by individual employee, division, department, and persona. This approach immediately gets organizations out of using ticket counts as a proxy for user experience. Closing tickets is not the only SLA that needs to be measured. It's more important to quantify the entire experience and seek new ways to improve it.
VentureBeat: How do you evaluate potential acquisitions, given how your product and services strategy moves in an AI-centric direction? What matters most in potential acquisitions?
Jeff Abbott: We're prioritizing smaller acquisitions that deliver high levels of differentiation via their unique technologies first, followed by their potential contributions to our total addressable markets. We're considering potential acquisitions that could strengthen our vertical tech stack in key markets. We're also getting good feedback directly from customers and our partners on where we should look for new acquisitions. But I'd like to be clear that it's not just acquisitions.
We also have very interesting partnerships forming across industries, focusing on telco carriers globally. Some of the large hardware providers have also proposed interesting joint go-to-market strategies, which we think will be groundbreaking with the platform. We're also looking at partnerships that create network effects across our partner and customer base. That's what we're after in our partnership strategy, especially given the interest we're seeing from large telco providers today. So, we're going to be selective. We will go after those that put us in a differentiated category. The good news is that many innovative companies are reaching that level of maturity.
Whether we partner with or acquire them, we're focused on not disrupting the trajectory they're on. It creates a much bigger investment portfolio to continue to advance those solutions.
Nayaki Nayyar: We're very deliberate about which acquisitions we make, for two primary reasons. One is to strengthen the markets that we play in. We compete in three markets today, and our recent acquisitions strengthen our position in each. Our goal is to be among the top two or three in each market we're competing in. An integral part of our acquisition strategy is looking at how a potential acquisition can increase our total addressable market and give us access to adjacent markets that we can start to grow in.
We are in three markets: UEM, security, and service management. As we're converging these three pillars into our Ivanti Neurons platform, we are evolving into adjacent markets like digital experience management (DEX). So far, our approach of relying on acquisitions to strengthen our three core markets is working well for us. To Jeff's point, strengthening what we have in order to be a top vendor in these markets is working, delivering strong, differentiated value to our customers.
Disrupting Product Designing with the Much-Needed AI Makeover – Analytics Insight
Posted: at 11:06 pm
The product designing sector is creating new opportunities as it receives an AI transformation
Over the last couple of years, AI has made great strides across global industries. Businesses integrate AI technologies to promote efficiency and automate rigorous tasks in areas such as stocks, finance, marketing, and healthcare, often through smart devices. Tech experts believe that between 2040 and 2050, AI will be capable of performing intellectual tasks that were traditionally carried out only by humans.

Currently, artificial intelligence is creating a myriad of opportunities for businesses and global industries to achieve high standards of quality while delivering consumer products and services. AI has likewise transformed product designing and development to a considerable extent: it can help solve problems in the field of creativity as well as complex technical problems. When it comes to product designing and development, the role of AI cannot be ignored. Integrating AI into product designing and development has entirely transformed the dynamics of the relationship between businesses and consumers.
From startups to large enterprises, everyone is racing to get new products launched, and AI and machine learning are making robust contributions to this process. Rapid advances in AI-based applications, products, services, and platforms have also driven consolidation in the IoT market: the IoT platforms that provide focused solutions to business challenges in vertical markets stand the best chance of surviving the coming wave of IoT advancements. As AI and ML become ingrained in product designing, related platforms and products will need to adapt. An advanced and efficient product design company will integrate robust IoT products and AI services to ensure that customers enjoy the best quality products and that enterprise teams retain more customers.
Without sounding too futuristic, it is safe to say that AI will most likely outpace humans at many kinds of intellectual work in the near future. One critical aspect of AI's growing presence in human lives and experiences is that it has raised customer expectations for high-quality products. As a result, the global market has become intensely competitive as enterprises strive toward better products with superior design and enhanced performance. And slowly, such quality has turned into a mandate rather than just a necessity.
Major tech companies are using AI to reduce energy and financial costs while designing and developing new products. AI can also help human creative leads in product design by taking mundane, rigorous tasks off their hands; the technology is meant to make professionals' work easier and more fulfilling, not to take it away. Furthermore, one of the toughest phases of new product development is designing the user experience. To ensure a product's success, enterprise leaders must make sure the product is relatable enough for its users. The design process requires massive amounts of creative muscle, especially when the team must think through how the product will be used, validate those ideas, and create something out of the box. An AI-driven brainstorming tool can be genuinely useful for such purposes.
Besides this, AI is capable of telling the team during the design phase whether a specific design is likely to succeed or fail. The system can explore the proposed user flow and determine whether a user can complete the desired action. This saves the company from having to build multiple iterations of a product for testing; a toy illustration of this kind of flow checking follows below.
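One simple way to picture automated user-flow checking is as reachability analysis over a directed graph of screens. The sketch below is a conceptual toy, not any vendor's actual system; the screens and transitions are invented.

```python
# Model a proposed UI as a directed graph and test whether the target
# action ("checkout") is reachable from the entry screen ("home").
flow = {
    "home":     ["browse", "settings"],
    "browse":   ["product"],
    "product":  ["checkout"],
    "settings": [],
    "checkout": [],  # the desired user action
}

def reachable(start: str, goal: str) -> bool:
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node == goal:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(flow.get(node, []))
    return False

print(reachable("home", "checkout"))  # True: the flow can be completed
```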
AI is here to stay. Whether an industry is tech-based or not, AI-driven systems and algorithms will create groundbreaking innovations and lead global enterprises toward greater profitability, helping businesses reach new highs.
1 Artificial Intelligence Growth Stock to Buy Now and Hold for the Long Term – The Motley Fool
Posted: at 11:06 pm
Artificial intelligence (AI) promises to be one of the most transformative technologies of our time. It has already proven it can reliably complete complex tasks almost instantaneously, eliminating the need for days or even weeks of human input in many cases.
The challenge for companies developing this advanced technology is building a business model that can deliver it efficiently, since AI is a brand-new industry with little existing precedent. That's what makes C3.ai (NYSE: AI) a trailblazer: it's the first platform AI provider helping companies in almost any industry access the technology's benefits.
C3.ai just reported its fiscal 2022 third-quarter earnings results, which revealed continued growth across key metrics, further cementing the case for owning its stock for the long run.
As more of the economy transitions into the digital realm, a growing number of companies will find themselves with access to game-changing tech like artificial intelligence. In the second quarter of fiscal 2022, C3.ai said it was serving 14 different industries, double the number from the corresponding quarter of the previous year. That indicates more sectors are already proactively seeking the benefits of AI.
One of those sectors is oil and gas, which represents the largest portion of C3.ai's total revenue. The company has a long-standing partnership with oil giant Baker Hughes. Together, the two companies have developed a suite of AI applications to predict critical equipment failures and reduce carbon emissions in drilling and production operations.
Shell is a core customer of these applications, and it's using them to monitor 10,000 devices and 23 large-scale oil assets, with the technology processing 1.3 trillion predictions per month.
In the recent Q3 of fiscal 2022, C3.ai revealed a new partnership with the U.S. Department of Defense worth $500 million over the next five years. It's designed to accelerate the adoption of AI applications across the defense segment of the federal government.
But some of C3.ai's most impressive partnerships are those with tech behemoths like Microsoft and Alphabet's Google. They're collaborating with C3.ai to deploy AI applications in the cloud to better serve their customers in manufacturing, healthcare, and financial services, among other industries.
From the moment a potential customer engages C3.ai, it can take up to six months to deploy their AI application. Therefore, it's important to watch the company's customer count as it can be a leading indicator for revenue growth in the future.
In fiscal Q3 2022, C3.ai reported having 218 customers, which was an 81% jump over Q3 2021. Over the same period, remaining performance obligations (which are expected to convert to revenue in the future) climbed by 90% to $469 million.
Since quarterly revenue grew a more modest 42% in the same time span, both of the above metrics hint at a potential revenue-growth acceleration over the next few years. The company has also raised its sales guidance twice so far in the first nine months of fiscal 2022, albeit by just 2% in total, now estimating $252 million in full-year revenue.
C3.ai has been a publicly traded company for a little over a year, listing in December 2020. It quickly rallied to its all-time high stock price of $161 before enduring a painful 87% decline to the $20 it trades at today. The company hasn't grown as quickly as investors anticipated, and it also hasn't achieved profitability yet.
But right now, C3.ai trades at a market valuation of $2.1 billion, and it has over $1 billion in cash and short-term investments on its balance sheet. Put simply, investors are only attributing a value of around $1 billion to its AI business despite over $250 million in revenue expected by the close of fiscal 2022 and a portfolio of A-list customers.
Moreover, C3.ai has a gross profit margin of 80%, affording it plenty of optionality when it comes to managing expenses. This places it in a great position to eventually deliver positive earnings per share to investors once it achieves a sufficient level of scale.
While C3.ai stock carries some risk, especially in the middle of the current tech sell-off, by many accounts it's beginning to look like an attractive long-term bet. Advanced technologies like AI will only grow in demand over time, and this company is a great way to play that trend.
AlphaFold, GPT-3 and How to Augment Intelligence with AI (Pt. 2) – A16Z Future
Posted: at 11:06 pm
As we saw in Part 1, it's possible to get started on a number of important problems by building augmentation infrastructure around the strengths of an artificial intelligence model. Does the model generate text? Build around text. Can it accurately predict 3D structures? Build around 3D structures. But taking an artificial intelligence system completely at face value comes with its own limitations.
Douglas Engelbart used the term co-evolution to describe the way in which humanity's tools and its processes for using those tools adapt and evolve together. Models like GPT-3 and DALL-E represent a large step in the evolution of tools, but that's only one half of the equation. When you build around the model, without also building new tools and processes for the model, you're stuck with what you get. The model's weaknesses become your weaknesses. If you don't like the result, it's up to you to fix it. And since training any of the large, complex AI systems we've discussed so far requires massive data and computation, you likely don't have the resources to change the model all that much.
This is a bit of a conundrum: on the one hand, we don't have the resources to change the model significantly. On the other hand, we need to change the model, or at least come up with better ways of working with it, to solve for our specific use case. For prompt-based models like GPT-3 and DALL-E, the two easiest ways to tackle this fixed-model conundrum are prompt-hacking (manually iterating on the input text until the model happens to produce what you want) and fine-tuning (assembling a curated dataset of examples and training the model further on it). Neither is particularly efficient.
The goal of augmented intelligence is to make manual processes like these more efficient so humans can spend more time on the things they are good at, like reasoning and strategizing. The inefficiency of prompt-hacking and fine-tuning shows that the time is ripe for a reciprocal step in process evolution. So, in this section, we'll explore some examples of a new theme, building for the model, and the role it plays in creating more effective augmentation tools.
As a working example, let's say you're an up-and-coming game developer working on the next online gaming franchise. You've seen how games like Call of Duty and Fortnite have created massively successful (and lucrative) marketplaces for custom skins and in-game assets, but you're a resource-constrained startup. So, instead of developing these assets yourself, you offload content generation to DALL-E, which can generate any number of skins and asset styles for a fraction of the cost. This is a great start, but prompt-hacking your way to a fully stocked asset store is inefficient.
To make things less manual, you can turn prompting over to a text generation model like GPT-3. The key to the virality of a game like Fortnite is the combination of a number of key game assets (weapons, vehicles, armor) with a variety of unique styles and references, such as eye-catching patterns and colors, superheroes, and the latest pop culture trends. When you seed GPT-3 with your asset types, it can generate any number of these combinations into a prompt. Pass that prompt over to DALL-E, and out comes your skin design, as the sketch below illustrates.
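Here is a minimal sketch of that handoff. The `complete_text` and `generate_image` functions are hypothetical placeholders for whatever text- and image-generation APIs you actually call; everything else is invented for illustration.

```python
import random

ASSET_TYPES = ["rifle skin", "hoverboard", "chest armor"]
THEME_SEED = ("Name one vivid visual theme for a video game cosmetic, "
              "e.g. 'iridescent koi scales' or 'retro synthwave grid':")

def complete_text(prompt: str) -> str:
    """Placeholder for a text-generation call (e.g., a GPT-3 completion)."""
    return random.choice(["molten obsidian", "neon graffiti", "arctic camo"])

def generate_image(prompt: str) -> bytes:
    """Placeholder for an image-generation call (e.g., a DALL-E request)."""
    return b"<image bytes>"

def make_skin() -> tuple[str, bytes]:
    theme = complete_text(THEME_SEED)      # the text model invents a style
    asset = random.choice(ASSET_TYPES)     # pick an asset slot to fill
    prompt = f"game asset: {asset}, {theme} style, high detail"
    return prompt, generate_image(prompt)  # the image model renders it

prompt, image = make_skin()
print("generated from prompt:", prompt)
```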
This GPT-3 to DALL-E handoff sounds great, but it only really works if it produces stimulating, high-quality skin designs for your users. Combing through each of the design candidates manually is not an option, especially at scale. The key here is to build tools that let the marketplace do the work for you. Users flock to good content and have no patience for bad content apps like TikTok are based entirely on this concept. User engagement will therefore be a strong signal for which DALL-E prompts are working (i.e., leading to interesting skin designs) and which are not.
To let your users do the work for you, you'll want to build a recursive loop that cross-references user activity with each prompt and translates user engagement metrics into a ranking of your active content prompts. Once you have that, normal A/B testing will automatically surface prompt insights, and you can prioritize good prompts, remove bad prompts, and even compare the similarity of newly generated prompts to those you have tested before. A toy version of this ranking step appears below.
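In this sketch, the engagement records and scores are invented; in practice they would come from your analytics pipeline.

```python
from collections import defaultdict

# (prompt_id, engagement_score) pairs from analytics, e.g. weighted sums
# of equips, purchases, and session time attributed to each skin.
events = [
    ("p1", 0.91), ("p2", 0.12), ("p1", 0.87), ("p3", 0.55), ("p2", 0.08),
]

totals, counts = defaultdict(float), defaultdict(int)
for pid, score in events:
    totals[pid] += score
    counts[pid] += 1

# Rank prompts by average engagement; promote the top, retire the bottom.
ranking = sorted(totals, key=lambda p: totals[p] / counts[p], reverse=True)
print("keep:", ranking[0], "| retire:", ranking[-1])
```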
But that's not all: the same user engagement signal can also be used for fine-tuning.
Let's move one more step backward and focus on GPT-3's performance. As long as you keep track of the inputs you are giving to GPT-3 (asset types plus candidate themes), you can join that data with the quality rankings you have just gotten from further down your content pipeline to create a dataset of successful and unsuccessful input-output pairs. This dataset can be used to fine-tune GPT-3 on game-design-focused prompt generation, making it even better at generating prompts for your application; a sketch of the data preparation follows.
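The preparation could be as simple as the sketch below. It assumes the common `{"prompt": ..., "completion": ...}` JSONL convention used by text-model fine-tuning APIs; the records and the engagement threshold are invented.

```python
import json

# Joined records: the input given to the prompt generator, the prompt it
# produced, and the engagement score measured further down the pipeline.
records = [
    {"input": "asset: rifle", "output": "molten obsidian style", "engagement": 0.89},
    {"input": "asset: rifle", "output": "beige minimalism style", "engagement": 0.11},
]

with open("finetune.jsonl", "w") as f:
    for r in records:
        if r["engagement"] >= 0.5:  # keep only prompts users responded to
            f.write(json.dumps({
                "prompt": r["input"],
                "completion": " " + r["output"],  # leading space by convention
            }) + "\n")
```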
This user-driven cyclical pipeline helps DALL-E generate better content for your users by surfacing the best prompts, and helps GPT-3 generate better prompts by fine-tuning on examples generated from your own user activity. Without having to worry about prompt-hacking and fine-tuning, you are free to work on bigger-ticket items, like which assets are next in the pipeline, and which new content themes might lead to even more interesting skins down the road.
There also exists a huge opportunity to build middleware connecting creative industries and creative, personalized content-generating models. AI models and the services they enable (e.g., Copilot) could help for use cases that require novel content creation. This, again, requires using our understanding of the AI system and how it works to think of ways in which we can modify its behavior ever so slightly to create new and better experiences.
Imagine you are building a service for learning to code that uses Copilot under the hood to generate programming exercises. Out of the box, Copilot will generate anywhere from a single line of code to a whole function, depending on the docstring it's given as input. This is great: you can construct a bunch of exercises really quickly!
To make this educational experience more engaging, though, you'll probably want to tailor the exercises generated by Copilot to the needs and interests of your users. For example, you might want to personalize across dimensions such as the target skill or concept being taught and the themes that resonate with each learner.
Generating docstrings yourself is tedious and manual, so personalizing Copilot's outputs should be as automated as possible. Well, we know of another AI system, GPT-3, that is great at generating virtually any type of text, so maybe we can offload the docstring creation to GPT-3.
This can be done in one of two ways. One approach is to ask GPT-3 to generate generic docstrings that correspond to a particular skill or concept (e.g., looping, recursion, etc.). With one prompt, you can generate any number of boilerplate docstrings. Then, using a curated list of target themes and keywords (a slight manual effort), you can rewrite the boilerplate to match your target audience. Alternatively, you can feed both the target skills/concepts and the themes to GPT-3 at the same time and let it tailor the docstrings to your themes automatically. The sketch below illustrates the first approach.
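In this toy version, the docstring template stands in for what a text model would generate, and the themes and field names are invented.

```python
import random

BOILERPLATE = ("Write a function that loops over a list of {item} and "
               "returns the {item} with the highest {attribute}.")

THEMES = {
    "space":  {"item": "rockets", "attribute": "thrust"},
    "sports": {"item": "players", "attribute": "score"},
}

def personalize(theme: str) -> str:
    """Fill the docstring template for one learner interest."""
    return BOILERPLATE.format(**THEMES[theme])

docstring = personalize(random.choice(list(THEMES)))
print(docstring)  # feed this to a code model to draft the exercise
```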
The success of this idea, of course, comes down to the quality of GPT-3's content. For one, you'll want to make sure the exercises generated by this GPT/Copilot combination are age-appropriate. Perhaps an aligned model like InstructGPT would be better here.
We are now over a decade into the latest AI summer. The flurry of activity in the AI community has led to incredible breakthroughs that will have significant impact across a number of industries and, possibly, on the trajectory of humanity as a whole. Augmented intelligence represents an opportunity to kickstart this progress, and all it takes is a slight reframing of our design principles for building AI systems. In addition to building models to solve problems, we can think of new ways to build infrastructure around models and for models, and even ways in which foundation models might work together (like GPT-3 and DALL-E, or GPT-3 and Copilot).
Maybe one day we will be able to offload all of the dirty work of life to some artificial general intelligence and live hakuna-matata style, but until that day comes we should think of Engelbart, focusing less on machines that replace human intelligence and more on those savvy enough to enhance it.
Juniper Networks Announces University Research Funding Initiative to Advance Artificial Intelligence and Network Innovation – Yahoo Finance
Posted: at 11:06 pm
Research to focus on AI, ML, Routing and Quantum Networking for advanced communication
SUNNYVALE, Calif., March 08, 2022--(BUSINESS WIRE)--Juniper Networks (NYSE: JNPR), a leader in secure, AI-driven networks, today announced a university funding initiative to fuel strategic research to advance network technologies for the next decade. Juniper's goal is to enable universities, including Dartmouth, Purdue, Stanford and the University of Arizona, to explore next-generation network solutions in the fields of artificial intelligence (AI) and machine learning (ML), intelligent multipath routing and quantum communications.
Investing now in these technologies, as organizations encounter new levels of complexity across enterprise, cloud and 5G networks, is critical to replace tedious, manual operations as networks become mission critical for nearly every business. This can be done through automated, closed-loop workflows that use AI and ML-driven operations to scale and cope with the exponential growth of new cloud-based services and applications.
The universities Juniper selected in support of this initiative are now beginning the research that, once completed, will be shared with the networking community. In addition, Juniper joined the Center for Quantum Networks Industrial Partners Program to fund industry research being spearheaded by the University of Arizona.
Supporting Quotes:
"Cloud services will continue to proliferate in the coming years, increasing network traffic and requiring the industry to push forward on innovation to manage the required scale out architectures. Junipers commitment to deliver better, simpler networks requires us to engage and get ahead of these shifts and work with experts in all areas in order to trailblaze. I look forward to collaborating with these leading universities to reach new milestones for the network of the future."
- Raj Yavatkar, CTO, Juniper Networks
"With internet traffic continuing to grow and evolve, we must find new ways to ensure the scalability and reliability of networks. We look forward to exploring next-generation traffic engineering approaches with Juniper to meet these challenges."
- Sonia Fahmy, Professor of Computer Science, Purdue University
"It is an exciting opportunity to work with a world-class partner like Juniper on cutting edge approaches to next-generation, intelligent multipath routing. Dartmouth's close collaboration with Juniper will combine world-class skills and technologies to advance multipath routing performance."
- George Cybenko, Professor of Engineering, Dartmouth University
"As network technology continues to evolve, so do operational complexities. The ability to utilize AI and machine learning will be critical in keeping up with future demands. We look forward to partnering with Juniper on this research initiative and finding new ways to drive AI forward to make the network experience better for end users and network operators."
- Jure Leskovec, Associate Professor of Computer Science, Stanford University
"The internet of today will be transformed through quantum technology which will enable new industries to sprout and create new innovative ecosystems of quantum devices, service providers and applications. With Juniper's strong reputation and its commitment to open networking, this makes them a terrific addition to building this future as part of the Center for Quantum Networks family."
- Saikat Guha, Director, NSF Center for Quantum Networks, University of Arizona
About Juniper Networks
Juniper Networks is dedicated to dramatically simplifying network operations and driving superior experiences for end users. Our solutions deliver industry-leading insight, automation, security and AI to drive real business results. We believe that powering connections will bring us closer together while empowering us all to solve the world's greatest challenges of well-being, sustainability and equality. Additional information can be found at Juniper Networks (www.juniper.net) or connect with Juniper on Twitter, LinkedIn and Facebook.
Juniper Networks, the Juniper Networks logo, Juniper, Junos, and other trademarks listed here are registered trademarks of Juniper Networks, Inc. and/or its affiliates in the United States and other countries. Other names may be trademarks of their respective owners.
View source version on businesswire.com: https://www.businesswire.com/news/home/20220308005449/en/
Contacts
Dan Muñoz, Juniper Networks, +1 (408) 936-2145, dmunoz@juniper.net
PNNL and Micron Partner to Push Memory Boundaries for HPC and AI – insideHPC
Posted: at 11:06 pm
Researchers at Pacific Northwest National Laboratory (PNNL) and Micron are developing an advanced memory system to support AI for scientific computing. The work is designed to address AI's insatiable demand for live data and to push the boundaries of memory-bound AI applications by connecting memory across processors in a technology strategy utilizing the Compute Express Link (CXL) data interface, according to a recent edition of the ASCR Discovery publication.
"Most of the performance improvements have been on the processor side," said James Ang, PNNL's chief scientist for computing and the lab's project leader. "But recently, we've been falling short on performance improvements, and it's because we're actually more memory-bound. That bottleneck increases the urgency and priority in memory resource research."
The Boise, Idaho-based memory and storage semiconductor company Micron is collaborating with PNNL, in Richland, WA, on this effort, sponsored by the Advanced Scientific Computing Research (ASCR) program in the Department of Energy (DOE), to help assess emerging memory technologies for DOE Office of Science projects that employ artificial intelligence. The partners say they will apply CXL to join memory from the various processing units deployed for scientific simulations.
Tony Brewer, Micron's chief architect of near-data computing, says the collaboration aims to blend old and new memory technologies to boost high-performance computing (HPC) workloads. "We have efforts that look at how we could improve the memory devices themselves, and efforts that look at how we can take traditional high-performance memory devices and run applications more efficiently."
Part of the strategy is to implement a centralized memory pool that would help mitigate the problem of over-provisioning memory.
In HPC systems that deploy AI, high-performance but low-capacity memory (typically gigabytes) is tightly coupled to the GPUs, whereas low-performance but high-capacity memory (terabytes) is loosely coupled via the traditional HPC workhorses, central processing units (CPUs), PNNL said. With PNNL, Micron will create proof-of-concept shared GPU and CPU systems and combine them with additional external storage devices in the hundreds-of-terabytes range. Future systems will need rapid access to petabytes of memory, a thousand times more capacity than on a single GPU or CPU.
"The intent is to create a third level of memory hierarchy," Brewer explains. "The host would have some local memory, the GPU would have some local memory, but the main capacity memory is accessible to all compute resources across a switch, which would allow scaling of much larger systems." This unified memory would let researchers using deep-learning algorithms run a simulation while its results simultaneously feed back into the algorithm. The toy model below illustrates the tiering idea.
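As a rough mental model of that hierarchy, the sketch below lets allocations fall back from fast local memory to the shared, switch-attached pool. The tier names and capacities are illustrative, not real hardware parameters.

```python
class Tier:
    def __init__(self, name: str, capacity_gb: int):
        self.name, self.free = name, capacity_gb

    def try_alloc(self, gb: int) -> bool:
        if gb <= self.free:
            self.free -= gb
            return True
        return False

gpu_local  = Tier("GPU memory (local)", 80)            # fast, small
host_local = Tier("host DRAM (local)", 1_000)          # slower, larger
shared     = Tier("pooled memory (switched)", 100_000) # shared by all nodes

def allocate(gb: int) -> str:
    # Fall back through the hierarchy until some tier has room.
    for tier in (gpu_local, host_local, shared):
        if tier.try_alloc(gb):
            return tier.name
    raise MemoryError("all tiers exhausted")

print(allocate(64))      # fits in GPU-local memory
print(allocate(500))     # spills to host DRAM
print(allocate(50_000))  # lands in the shared pool
```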
A centralized memory system could also benefit operations, because an algorithm or scientific simulation can share data with, say, another program that's tasked with analyzing those data. These converged application workflows are typical in DOE's scientific discovery challenges. Sharing memory and moving it around involves other technical resources, says Andrés Márquez, a PNNL senior computer scientist. A centralized memory pool, on the other hand, would help mitigate the issue of over-provisioning the memory.
Because AI-aided, data-driven science drives up demand for memory, an application can't afford to partition and strand the memory. The result: memory keeps piling up underutilized at various processing units. "Having the capability of reducing that over-provisioning and getting more bang out of your buck by sharing that data across all those devices and different stages of workflow cannot be overemphasized," Márquez explained.
Some of PNNL's AI algorithms can underperform when memory is slow to access, Márquez says. In PNNL's computational chemistry group, for instance, researchers use AI to study water's molecular dynamics to see how it aggregates and interacts with other compounds. Water is a common solvent for commercial processes, so running simulations to understand how it acts with a molecule of interest is important. A separate research team at Richland is using AI and neural networks to modernize the power grid's transmission lines.
Micron's Brewer said he looks forward not only to developing tools with PNNL but also to their commercial use by any company working on large-scale data analysis. "We are looking at algorithms," he said, "and understanding how we can advance these memory technologies to better meet the needs of those applications."
PNNL's computational science problems give Micron a way to observe the applications that will most stress the memory. Those findings will help Brewer and colleagues develop products that help industry meet its memory requirements.
Ang, too, said he expects the project to help AI at large, pointing out that the Micron partnership isn't just a specialized one-off for DOE or scientific computing. "The hope is that we're going to break new ground and understand how we can support applications with pooled memory in a way that can be communicated to the community through enhancements to the CXL standard."
Pittsburgh May Use AI to Fine Those Who Pass School Buses – Government Technology
Posted: at 11:06 pm
(TNS) The Pittsburgh Public Schools could soon implement an artificial intelligence system that captures information from vehicles that illegally pass stopped school buses and sends fines to their owners in an attempt to deter repeat infractions.
The city school board this month could agree to launch a pilot program with BusPatrol, a tech company that uses artificial intelligence and machine learning to promote safety for students traveling to and from school.
According to the company, which is based in Lorton, Va., and recently opened a Pennsylvania office in the city of Allentown, more than 136,000 school bus-related injuries and more than 1,000 fatalities have occurred in the past decade.
BusPatrol has partnered with school systems in several states and began working with districts in Pennsylvania a couple of years ago. The company installs software on school buses that has the capability of monitoring the vehicle's surroundings.
If a vehicle illegally passes a stopped bus, the device collects video of the offense and other information, including the license plate number, and turns it into an evidence package that it sends to police. If police approve the citation, BusPatrol prints it and mails it to the vehicle owner, who can go online and view a video of their vehicle passing the stopped bus. A schematic sketch of this flow appears below.
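Schematically, the flow the article describes looks something like the following. The data structures and the approval check are illustrative assumptions, not BusPatrol's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class EvidencePackage:
    plate: str       # license plate read from the video
    video_uri: str   # footage of the passing event
    timestamp: str
    bus_id: str

def police_review(pkg: EvidencePackage) -> bool:
    """Stand-in for the human approval step; police decide per package."""
    return bool(pkg.plate and pkg.video_uri)  # toy criterion

def process_event(pkg: EvidencePackage) -> str:
    if police_review(pkg):
        return f"mail citation to registered owner of {pkg.plate}"
    return "discard: citation not approved"

pkg = EvidencePackage("ABC-1234", "s3://bus-42/event-17.mp4",
                      "2022-03-08T07:42:00", "bus-42")
print(process_event(pkg))
```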
BusPatrol said the program has proven to increase safety because 98% of offenders do not get cited by the company a second time.
Jean Souliere, BusPatrol's founder and CEO, said the cost of the program is completely paid for by ticket revenue, and the school district gets a large chunk of the $300 citation. A conservative estimate showed the district would receive $500,000 per 100 buses per year that it could invest back into schools, he said. About 500 buses carry students throughout the district.
School board member Pam Harbin said she had a "very strong reaction" to the idea that the district would have a program paid for by fining people who may simply make a mistake.
"I'm not going to agree to a system that's going to put people in that position," Ms. Harbin said. "I would rather have months and months of education and pay for that to improve behavior rather than saying we're going to harm people in a different way."
Mr. Souliere said the citation is a civil monetary penalty, no points are put on an offender's license, and vehicle insurance is not impacted. If necessary, he said, the fine can be paid over a certain period of time through a payment plan. But he noted that failure to pay the fine could result in the suspension of a license or plates not being renewed.
Before citations start being issued, the company blitzes media in communities where the program is implemented to provide an education or reminder of the law. The company places informational television commercials, works with local media and creates educational videos for schools and other entities.
School board member Tracey Reed said she was concerned about police using the information that the system collects for other purposes, such as fining individuals for other issues with their vehicles. Mr. Souliere, though, said police are not allowed to do that.
"The scope of use by law is exclusively for the enforcement of this specific stop arm infraction," he said. "In fact, if someone were to get caught by a police officer in passing a stopped school bus, the civil penalty would no longer apply it would be the criminal one that would apply."
The software has the ability to film inside the bus and can provide video in the instance of a fight or an accident, according to Mr. Souliere.
Michael McNamara, the district's chief operating officer, said that if the board approves the pilot program, the software will be placed on about 20 buses in various areas of the city. No citations would be issued during the pilot.
If the board approves the full program after the pilot period, the technology would be installed on all school buses with stop arms that serve Pittsburgh students.
"If this is successful and the board is on board with it no pun intended there we would then deploy [the software] over the summer to the rest of our buses that have stop arms, and then start collecting ticket revenue beginning the first day of the new school year next year," Mr. McNamara said. "No tickets would be issued, only warnings, the rest of this year, and then we would be able to educate the rest of the summer and have full deployment first day of school next fall."
© 2022 the Pittsburgh Post-Gazette. Distributed by Tribune Content Agency, LLC.
Go here to see the original:
Pittsburgh May Use AI to Fine Those Who Pass School Buses - Government Technology
Posted in Ai
Comments Off on Pittsburgh May Use AI to Fine Those Who Pass School Buses – Government Technology
Wolfpack Uses AI to Solve the Biggest Problem Facing Every Investor: What Should I Invest In? – PR Newswire
Posted: at 11:06 pm
PALO ALTO, Calif., March 8, 2022 /PRNewswire/ -- Wolfpack, the free mobile app with 12,000+ users on its waitlist, announced its iPhone app is available for download here. The app makes it easy for investors to discover investment opportunities by selecting several investment filters suited to their personal investment needs. An Artificial Intelligence (AI)-driven technology provides a list of investment opportunities.
"With Wolfpack, beginner investors discover, trade, and grow their wealth over time with an AI Discovery Engine."
"There are many investment apps out there, but they all fail to solve one fundamental problem, 'What should I invest in?' Wolfpack's proprietary AI allows users to discover investment opportunities that enables them to find securities they might otherwise miss," said George Parthimos, Chief Executive Officer (CEO) and founder of Wolfpack.
"With Wolfpack, beginner investors discover, trade, and grow their wealth over time with an AI Discovery Engine that finds trending investments. Users receive daily notifications that align with their personal investment goals," according to Nicholas Kapes, Chairman of Wolfpack.
In addition to the AI Discovery Engine, the app offers investors the ability to follow top-performing wolves and receive their top picks, join discussion boards, and receive referral credits when referring friends. The AI tool also ranks the most popular investments across the entire Wolfpack community with the free service.
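Wolfpack hasn't said how its Discovery Engine works under the hood, but the filter-then-rank flow the company describes might look something like the sketch below; the filter fields, tickers, and trend scores are all invented for illustration.

```python
# Hypothetical sketch of a filter-then-rank discovery flow like the one
# Wolfpack describes; the filters, scores, and tickers are all invented.

securities = [
    {"ticker": "AAA", "sector": "tech",   "risk": "high", "trend_score": 0.91},
    {"ticker": "BBB", "sector": "energy", "risk": "low",  "trend_score": 0.47},
    {"ticker": "CCC", "sector": "tech",   "risk": "low",  "trend_score": 0.78},
]

def discover(filters: dict, universe: list, top_n: int = 10) -> list:
    """Keep securities matching every user-selected filter, then rank them."""
    matches = [s for s in universe
               if all(s.get(k) == v for k, v in filters.items())]
    return sorted(matches, key=lambda s: s["trend_score"], reverse=True)[:top_n]

# A user who wants low-risk tech exposure would see CCC surfaced first.
print(discover({"sector": "tech", "risk": "low"}, securities))
```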
Brokerage products and services are provided by Apex Clearing Corporation. Apex Clearing is the licensed broker-dealer for Wolfpack and is one of the largest clearing houses in the United States.
WOLFPACK'S TOP FEATURES:
Users can invest as little as $5. Wolfpack is designed for beginners right through to the most experienced of investors.
"Wolfpack is an incredibly powerful tool with ground-breaking features that can change lives for investors looking to build long-term wealth," said Parthimos.
DOWNLOAD NOW
Wolfpack is available for download from the App Store today.
Android app coming end-March 2022.
About Wolfpack Financial Inc.
Headquartered in Palo Alto, California, Wolfpack Financial Inc. has launched its ingenious mobile app, Wolfpack. Designed to empower the millennial generation to build sustained wealth, the app offers investors the opportunity to discover stocks and ETFs which perfectly match their selection criteria.
Contact: Monica Matulich PRHollywood [emailprotected] 310-383-9502
SOURCE Wolfpack Financial Inc.
Original post:
Posted in Ai
Comments Off on Wolfpack Uses AI to Solve the Biggest Problem Facing Every Investor: What Should I Invest In? – PR Newswire
Will We Have to Relinquish Some Privacy for the Best AI? – The Motley Fool
Posted: at 11:06 pm
Social media giant Meta Platforms, formerly known as Facebook, is only the latest company to draw legal heat over its technology -- specifically, its artificial intelligence (AI) innovations. In this episode of "The AI/ML Show" on Motley Fool Live, recorded on Feb. 16, Fool.com contributors Toby Bordelon and Jason Hall discuss how the debate of AI versus privacy continues to rage on.
Toby Bordelon: We talked about data protection and privacy, I think, a decent amount with Facebook, and you can see what happens when that goes badly. If you don't follow those rules, $650 million with maybe more to come, and that can put a damper on what you can do. You want data to train AI well. You want data to be free-flowing, but then how does that work with our existing laws? Do we need to change them? Do we as a society need to get to a point where we say, you know what, we have to just allow use of personal data or it's not going to get us to where we want to be. We don't have to do that. It's a choice to be made. But there are trade-offs each way.
Jason Hall: It's like somebody who refuses to use anything but cash or checks. You can do that, but you're also unable to participate fully or easily in the way that most people do commerce.
Bordelon: I think with AI, too, there's a level beyond that. Because if, say, Jose says, "I don't want my data being used to train this AI, I don't want it to be used at all," does that impact how good the AI is, and does that impact my experience with the AI? Where is that line? It's not a new debate. It's the classic debate about where individual rights end and communal rights begin, or what you are required to give up as an individual to live in a communal society. We've been having that debate...
Hall: As long as there's been a society.
Bordelon: Thousands of years. Exactly, and this is just another iteration of that, that we need to have a conversation around and struggle with, I think. You think about pace of innovation, which you touched on, Jason. The innovation in this field gets ahead of the law. We saw that a little bit with Facebook, but what ends up happening is that courts decide issues without a great legal framework because it's an issue of first impression. They have never seen it before. They are being asked to interpret existing laws that were written before the technology existed, and they have to wing it. That's not awesome. As a society, I don't think we want judges being forced into making decisions that have billion-dollar impacts, and real impacts on people's lives, using 30-year-old laws.
Hall: Using 30-year-old laws that weren't written to apply to a thing that didn't exist, and having no basis for understanding what they're ruling on.
Bordelon: Right. People yell at judges for getting that wrong, but it's unfair to put them in that position to begin with, I think. The laws just have to keep up. We have to find a way to anticipate things better, I think, not always react with our legislation, so that when things come up in court, the judge can say, "OK, I have been given a framework by legislatures with which I can work to try to find a resolution to the dispute." Instead, the judge is left saying, "I'm going to use a framework that's 50 years old because that's all anyone's given me. That's what the law is. We're just going to go with it and make the best of it we can." That's not the best way to do things. But that's kind of where we fall with a lot of technology, including AI. That's got to be addressed at some point.
The rest is here:
Will We Have to Relinquish Some Privacy for the Best AI? - The Motley Fool
Posted in Ai
Comments Off on Will We Have to Relinquish Some Privacy for the Best AI? – The Motley Fool
Nuclear fusion is one step closer with new AI breakthrough – Livescience.com
Posted: at 11:06 pm
The green energy revolution promised by nuclear fusion is now a step closer, thanks to the first successful use of a cutting-edge artificial intelligence system to shape the superheated hydrogen plasmas inside a fusion reactor.
The successful trial indicates that the use of AI could be a breakthrough in the long-running search for electricity generated from nuclear fusion, bringing its introduction to replace fossil fuels and nuclear fission on modern power grids tantalizingly closer.
"I think AI will play a very big role in the future control of tokamaks and in fusion science in general," Federico Felici, a physicist at the Swiss Federal Institute of Technology in Lausanne (EPFL) and one of the leaders on the project, told Live Science. "There's a huge potential to unleash AI to get better control and to figure out how to operate such devices in a more effective way."
Felici is a lead author of a new study describing the project published in the journal Nature. He said future experiments at the Variable Configuration Tokamak (TCV) in Lausanne will look for further ways to integrate AI into the control of fusion reactors. "What we did was really a kind of proof of principle," he said. "We are very happy with this first step."
Felici and his colleagues at the EPFL's Swiss Plasma Center (SPC) collaborated with scientists and engineers at the British company DeepMind, a subsidiary of Google parent Alphabet, to test the artificial intelligence system on the TCV.
The doughnut-shaped fusion reactor is the type that seems most promising for controlling nuclear fusion; a tokamak design is being used for the massive international ITER ("the way" in Latin) project being built in France, and some proponents think they'll have a tokamak in commercial operation as soon as 2030.
The tokamak is principally controlled by 19 magnetic coils that can be used to shape and position the hydrogen plasma inside the fusion chamber, while directing an electric current through it, Felici explained.
The coils are usually governed by a set of independent computerized controllers, one for each aspect of the plasma that features in an experiment, each programmed according to complex control-engineering calculations that depend on the particular conditions being tested. But the new AI system was able to manipulate the plasma with a single controller, he said.
The AI, a "deep reinforcement learning" (RL) system developed by DeepMind, was first trained on simulations of the tokamak, a cheaper and much safer alternative to the real thing.
But the computer simulations are slow: it takes several hours to simulate just a few seconds of real-time tokamak operation. In addition, the experimental conditions of the TCV can change from day to day, so the AI developers needed to take those changes into account in the simulations.
When the simulated training process was complete, however, the AI was coupled to the actual tokamak.
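Neither DeepMind nor EPFL has released the controller itself, but the train-in-simulation, deploy-on-hardware loop the article describes can be sketched in broad strokes; the Policy, simulator, and tokamak interfaces below are hypothetical stand-ins, not the project's actual code.

```python
import numpy as np

# Hypothetical sketch of the train-in-simulation, deploy-on-hardware loop the
# article describes. The simulator, policy, and tokamak interfaces are all
# invented stand-ins, not DeepMind's or EPFL's actual code.

N_COILS = 19  # the TCV's magnetic shaping coils, per the article

class Policy:
    """Stand-in for the learned deep RL policy."""
    def act(self, observation: np.ndarray) -> np.ndarray:
        # A trained network would map magnetic measurements to coil commands;
        # zeros here just keep the sketch runnable.
        return np.zeros(N_COILS)

def train(policy: Policy, simulator, episodes: int) -> Policy:
    """Reinforcement learning happens entirely against the simulator."""
    for _ in range(episodes):
        obs = simulator.reset()
        done = False
        while not done:
            action = policy.act(obs)
            obs, reward, done = simulator.step(action)
            # ... gradient update on (obs, action, reward) omitted ...
    return policy

def deploy(policy: Policy, tokamak, steps: int) -> None:
    """One trained controller drives all 19 coils on the real machine."""
    obs = tokamak.observe()
    for _ in range(steps):
        tokamak.apply_coil_commands(policy.act(obs))
        obs = tokamak.observe()
```

The notable shift from conventional practice, as Felici describes it, is that one learned controller replaces the set of hand-tuned, per-aspect controllers.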
The TCV can sustain a superheated hydrogen plasma, typically at more than 216 million degrees Fahrenheit (120 million degrees Celsius), for a maximum of 3 seconds. After that, it needs 15 minutes to cool down and reset, and between 30 and 35 such "shots" are usually done each day, Felici said.
A total of about 100 shots were done with the TCV under AI control over several days, he said: "We wanted some kind of variety in the different plasma shapes we could get, and to try it under various conditions."
Although the TCV wasn't using plasmas of neutron-heavy hydrogen that would yield high levels of nuclear fusion, the AI experiments resulted in new ways of shaping plasmas inside the tokamak that could lead to much greater control of the entire fusion process, he said.
The AI proved adept at positioning and shaping the plasma inside the tokamak's fusion chamber in the most common configurations, including the so-called snowflake shape, thought to be the most efficient configuration for fusion, Felici said.
In addition, it was able to shape the plasma into "droplets", separate upper and lower rings of plasma within the chamber, which had never been attempted before, although standard control engineering techniques could also have worked, he said.
Creating the droplet shape "was very easy to do with the machine learning," Felici said. "We could just ask the controller to make the plasma like that, and the AI figured out how to do it."
The researchers also saw that the AI was using the magnetic coils to control the plasmas inside the chamber in a different way than would have resulted from the standard control system, he said.
"We can now try to apply the same concepts to much more complicated problems," he said. "Because we are getting much better models of how the tokamak behaves, we can apply these kinds of tools to more advanced problems."
The plasma experiments at the TCV will support the ITER project, a massive tokamak that's projected to achieve full-scale fusion in about 2035. Proponents hope ITER will pioneer new ways of using nuclear fusion to generate usable electricity without carbon emissions and with only low levels of radioactivity.
The TCV experiments will also inform designs for DEMO fusion reactors, which are seen as successors to ITER that will supply electricity to power grids, something that ITER is not designed to do. Several countries are working on designs for DEMO reactors; one of the most advanced, Europe's EUROfusion reactor, is projected to begin operations in 2051.
Originally published on Live Science.
Go here to read the rest:
Nuclear fusion is one step closer with new AI breakthrough - Livescience.com
Posted in Ai
Comments Off on Nuclear fusion is one step closer with new AI breakthrough – Livescience.com