The Prometheus League
Breaking News and Updates
- Abolition Of Work
- Ai
- Alt-right
- Alternative Medicine
- Antifa
- Artificial General Intelligence
- Artificial Intelligence
- Artificial Super Intelligence
- Ascension
- Astronomy
- Atheism
- Atheist
- Atlas Shrugged
- Automation
- Ayn Rand
- Bahamas
- Bankruptcy
- Basic Income Guarantee
- Big Tech
- Bitcoin
- Black Lives Matter
- Blackjack
- Boca Chica Texas
- Brexit
- Caribbean
- Casino
- Casino Affiliate
- Cbd Oil
- Censorship
- Cf
- Chess Engines
- Childfree
- Cloning
- Cloud Computing
- Conscious Evolution
- Corona Virus
- Cosmic Heaven
- Covid-19
- Cryonics
- Cryptocurrency
- Cyberpunk
- Darwinism
- Democrat
- Designer Babies
- DNA
- Donald Trump
- Eczema
- Elon Musk
- Entheogens
- Ethical Egoism
- Eugenic Concepts
- Eugenics
- Euthanasia
- Evolution
- Extropian
- Extropianism
- Extropy
- Fake News
- Federalism
- Federalist
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom
- Freedom of Speech
- Futurism
- Futurist
- Gambling
- Gene Medicine
- Genetic Engineering
- Genome
- Germ Warfare
- Golden Rule
- Government Oppression
- Hedonism
- High Seas
- History
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Immortality
- Human Longevity
- Illuminati
- Immortality
- Immortality Medicine
- Intentional Communities
- Jacinda Ardern
- Jitsi
- Jordan Peterson
- Las Vegas
- Liberal
- Libertarian
- Libertarianism
- Liberty
- Life Extension
- Macau
- Marie Byrd Land
- Mars
- Mars Colonization
- Mars Colony
- Memetics
- Micronations
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- Nanotech
- National Vanguard
- NATO
- Neo-eugenics
- Neurohacking
- Neurotechnology
- New Utopia
- New Zealand
- Nihilism
- Nootropics
- NSA
- Oceania
- Offshore
- Olympics
- Online Casino
- Online Gambling
- Pantheism
- Personal Empowerment
- Poker
- Political Correctness
- Politically Incorrect
- Polygamy
- Populism
- Post Human
- Post Humanism
- Posthuman
- Posthumanism
- Private Islands
- Progress
- Proud Boys
- Psoriasis
- Psychedelics
- Putin
- Quantum Computing
- Quantum Physics
- Rationalism
- Republican
- Resource Based Economy
- Robotics
- Rockall
- Ron Paul
- Roulette
- Russia
- Sealand
- Seasteading
- Second Amendment
- Seychelles
- Singularitarianism
- Singularity
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Spacex
- Sports Betting
- Sportsbook
- Superintelligence
- Survivalism
- Talmud
- Technology
- Teilhard De Charden
- Terraforming Mars
- The Singularity
- Tms
- Tor Browser
- Trance
- Transhuman
- Transhuman News
- Transhumanism
- Transhumanist
- Transtopian
- Transtopianism
- Ukraine
- Uncategorized
- Vaping
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Waveland
- Ww3
- Yahoo
- Zeitgeist Movement
- Prometheism
- Forbidden Fruit
- The Evolutionary Perspective
Daily Archives: June 18, 2022
Unified-IO is an AI system that can complete a range of tasks, including generating images – TechCrunch
Posted: June 18, 2022 at 2:01 am
The Allen Institute for AI (AI2), the division within the nonprofit Allen Institute focused on machine learning research, today published its work on an AI system, called Unified-IO, that it claims is among the first to perform a large and diverse set of AI tasks. Unified-IO can process and create images, text and other structured data, a feat that the research team behind it says is a step toward building capable, unified general-purpose AI systems.
"We are interested in building task-agnostic [AI systems], which can enable practitioners to train [machine learning] models for new tasks with little to no knowledge of the underlying machinery," Jiasen Lu, a research scientist at AI2 who worked on Unified-IO, told TechCrunch via email. "Such unified architectures alleviate the need for task-specific parameters and system modifications, can be jointly trained to perform a large variety of tasks and can share knowledge across tasks to boost performance."
AI2's early efforts in building unified AI systems led to GPV-1 and GPV-2, two general-purpose, vision-language systems that supported a handful of workloads including captioning images and answering questions. Unified-IO required going back to the drawing board, according to Lu, and designing a new model from the ground up.
Unified-IO shares characteristics with OpenAI's GPT-3 in the sense that it's a Transformer. Dating back to 2017, the Transformer has become the architecture of choice for complex reasoning tasks, demonstrating an aptitude for summarizing documents, generating music, classifying objects in images and analyzing protein sequences.
Like all AI systems, Unified-IO learned by example, ingesting billions of words, images and more in the form of tokens. These tokens served to represent data in a way Unified-IO could understand.
Unified-IO can generate images given a brief description. Image Credits: Unified-IO
"The natural language processing (NLP) community has been very successful at building unified [AI systems] that support many different tasks, since many NLP tasks can be homogeneously represented: words as input and words as output. But the nature and diversity of computer vision tasks has meant that multitask models in the past have been limited to a small set of tasks, and mostly tasks that produce language outputs (answer a question, caption an image, etc.)," Chris Clark, who collaborated with Lu on Unified-IO at AI2, told TechCrunch in an email. "Unified-IO demonstrates that by converting a range of diverse structured outputs like images, binary masks, bounding boxes, sets of key points, grayscale maps and more into homogeneous sequences of tokens, we can model a host of classical computer vision tasks very similar to how we model tasks in NLP."
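Clark's token framing is easy to make concrete. Below is a minimal sketch of how a structured output like a bounding box can be quantized into the same kind of discrete tokens a text model emits; the bin count and vocabulary offset are our own illustrative assumptions, not Unified-IO's actual scheme.

```python
# Illustrative sketch: turning a bounding box into discrete tokens so a
# sequence model can emit it the same way it emits words. The bin count
# and vocabulary offset are assumptions, not Unified-IO's actual scheme.

NUM_BINS = 1000             # how finely coordinates are quantized
COORD_TOKEN_OFFSET = 32000  # hypothetical start of the "location token" range

def box_to_tokens(box, image_w, image_h):
    """Quantize (x0, y0, x1, y1) pixel coordinates into location tokens."""
    x0, y0, x1, y1 = box
    tokens = []
    for value, extent in ((y0, image_h), (x0, image_w), (y1, image_h), (x1, image_w)):
        bin_index = min(int(value / extent * NUM_BINS), NUM_BINS - 1)
        tokens.append(COORD_TOKEN_OFFSET + bin_index)
    return tokens

def tokens_to_box(tokens, image_w, image_h):
    """Invert the mapping to recover approximate pixel coordinates."""
    y0, x0, y1, x1 = ((t - COORD_TOKEN_OFFSET) / NUM_BINS for t in tokens)
    return (x0 * image_w, y0 * image_h, x1 * image_w, y1 * image_h)

# A detection and a caption can now share one output vocabulary:
print(box_to_tokens((48, 20, 320, 240), image_w=640, image_h=480))
# -> [32041, 32075, 32500, 32500]
```

Once boxes, masks and labels all live in one token vocabulary, a single sequence model can be trained to emit any of them, which is the unification Clark describes.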
Unlike some systems, Unified-IO can't analyze or create videos and audio, a limitation of the model from a modality perspective, Clark explained. But among the tasks Unified-IO can complete are generating images, detecting objects within images, estimating depth, paraphrasing documents and highlighting specific regions within photos.
"This has huge implications to computer vision, since it begins to treat modalities as diverse as images, masks, language and bounding boxes as simply sequences of tokens akin to language," Clark added. "Furthermore, unification at this scale can now open the doors to new avenues in computer vision like massive unified pre-training, knowledge transfer across tasks, few-shot learning and more."
Matthew Guzdial, an assistant professor of computing science at the University of Alberta who wasn't involved with AI2's research, was reluctant to call Unified-IO a breakthrough. He noted that the system is comparable to DeepMind's recently detailed Gato, a single model that can perform over 600 tasks from playing games to controlling robots.
"The difference [between Unified-IO and Gato] is obviously that it's a different set of tasks, but also that these tasks are largely much more usable. By that I mean there's clear, current use cases for the things that this Unified-IO network can do, whereas Gato could mostly just play games. This does make it more likely that Unified-IO or some model like it will actually impact people's lives in terms of potential products and services," Guzdial said. "My only concern is that while the demo is flashy, there's no notion of how well it does at these tasks compared to models trained on these individual tasks separately. Given how Gato underperformed models trained on the individual tasks, I expect the same thing will be true here."
Unified-IO can also segment images, even in challenging lighting. Image Credits: Unified-IO
Nevertheless, the AI2 researchers consider Unified-IO a strong foundation for future work. They plan to improve the efficiency of the system while adding support for more modalities, like audio and video, and scaling it up to improve performance.
"Recent works such as Imagen and DALL-E 2 have shown that given enough training data, models can be trained to produce very impressive results. Yet, these models only support one task," Clark said. "Unified-IO can enable us to train massive scale multitask models. Our hypothesis is that scaling up the data and model size tremendously will produce vastly better results."
Bowdoin Selected for National Initiative on AI Ethics – Bowdoin College
Posted: at 2:01 am
L-r: Eric Chown, Allison Cooper, Michael Franz, Fernando Nascimento
"This is exactly the sort of area we focus on at the DCS program, so I'm sure that's one of the reasons we were chosen for this award," said Chown. One example of this kind of work that's already underway is the Computing Ethics Narratives, another national initiative involving Bowdoin faculty aimed at integrating ethics into undergraduate computer science curricula at American colleges and universities.
Other faculty involved in the NHC project are cinema studies scholar Allison Cooper, who is also an assistant professor of Romance languages and literatures, and Professor of Government Michael Franz. While his colleagues will work on the broader ethical issues regarding AI, Chown's focus will be more on teaching the nuts and bolts behind the subject.
"My work in machine learning and artificial intelligence will serve basically to study what's going on in AI and how it works. Then we'll look at various applications and, using the work of Fernando and Allison, students will be asked to consider questions like 'What are the developers' goals when they're doing this? How is this impacting users?'" Franz, meanwhile, will focus on issues surrounding government regulation in the AI sphere and what the political implications might be.
"The selection of Bowdoin as one of the fifteen institutions sponsored by the initiative indicates the relevance of liberal arts to the discussion," said Nascimento, who heads to NHC headquarters in North Carolina on June 20, representing the College at a five-day conference to discuss next steps. "It's important that we define our objectives and our limitations as we develop this transformative technology so that it effectively promotes the common good."
"I was thrilled to learn that Bowdoin was one of the institutions selected by the National Humanities Center, and also to have the opportunity to work with colleagues in DCS and government on the project," said Cooper, who uses computational methods to analyze film language in her research and contributed moving image narratives from film and television to the Computing Ethics Narratives project.
"We all share the belief that contemporary films and media can raise especially thought-provoking questions about AI for our students," she added, citing movies such as 2001: A Space Odyssey, Ex Machina, and The Matrix. Cooper anticipates the new collaborative course will involve integrating this type of study with classes about actual technologies. "This should offer our students a truly unique opportunity to move back and forth between speculative and applied approaches to understanding AI." (Learn more about Kinolab, an online searchable database of media clips launched by Cooper for cinema students and scholars.)
Participants in the new project will, over the next twelve months, design a semester-long course to be taught during the following academic year. They will then reconvene in the summer of 2024 to share their experiences and discuss the future of the project. Cooper and Franz anticipate that their experience co-teaching with their DCS colleagues will lead to the future development of stand-alone courses focusing on AI in their respective fields of cinema studies and government.
"It's really exciting for Bowdoin to be involved with such a diverse cross section of schools in this project," said Director of Academic Advancement and Strategic Priorities Allison Crosscup, whose responsibilities include the development of grant-seeking opportunities at the College. Crosscup identified three factors above all that make Bowdoin an ideal partner in the project. "At the faculty level we've got the Computing Ethics Narratives project; at the academic level we've got DCS, which in 2019 became a full-fledged academic program; and at the institutional level we have the K report,* which also promotes ethical decision-making, so we're hitting all three levels." Overall, she concluded, "this project presents a great opportunity to leverage work that's already being done here and to build on it."
According to the project's timeline, students will be able to enroll in the new collaborative course on ethics in AI during the 2023-2024 academic year. The class will be taught over one semester by the four faculty members highlighted above.
*Refers to the KSCD report, an initiative launched by President Clayton Rose in 2018 to identify the knowledge, skills, and creative dispositions every Bowdoin student should possess in a decade's time.
AI is Transforming the Construction Industry | Contractor – Contractor Magazine
Posted: at 2:01 am
By Melanie Johnson
AI is changing how construction projects are planned, built, operated, and maintained. Artificial intelligence is the backbone for establishing true digital initiatives in construction engineering management (CEM) to improve construction project performance. AI drives computers to perceive and acquire human-like inputs for perception, knowledge representation, reasoning, problem-solving, and planning, so they can deal with difficult and ill-defined situations intelligently and adaptively. AI investment is growing rapidly, with machine learning accounting for a large share as systems learn from data from numerous sources and make smart, adaptive judgments.
AI can streamline operations in construction engineering management in multiple ways:
AI automates and objectivizes project management. AI-based technologies help traditional construction management overcome the bias and confusion that come from manual observation and operation. Machine learning algorithms are used to intelligently study gathered data to uncover hidden information. They are also incorporated into project management software to automate data analysis and decision-making. Advanced analytics enable managers to better comprehend construction projects, codify tacit project knowledge, and quickly recognize project issues. Drones and sensors are used for on-site construction monitoring to automatically capture data and take images and videos of the site's state, surroundings, and progress without human input. Such strategies may replace time-consuming, boring, and error-prone human observation.
AI approaches are also used to improve the efficiency and smoothness of building projects. Process mining uses AI to monitor critical procedures, anticipate deviations, uncover unseen bottlenecks, and extract cooperation patterns. Such information is crucial to project success and may optimize construction execution. Early troubleshooting may increase operational efficiency and prevent expensive corrections later. Different forms of optimization algorithms are also a great tool for producing more plausible construction designs. AI-powered robots are being used on construction sites to do repetitive activities like bricklaying, welding, and tiling. Smart machines can operate nonstop at almost the same pace and quality as humans, ensuring efficiency, productivity, and even profitability.
Automated and robust computer vision techniques are gradually taking the place of laborious and unreliable visual inspection in civil infrastructure condition assessment. Current advances in computer vision lie in deep learning methods that automatically process, analyze, and understand images and video through end-to-end learning. Toward the goal of intelligent management of construction projects, computer vision is mainly used to perform visual tasks for two purposes, inspection and monitoring, which can help practitioners understand complex construction tasks and structural conditions comprehensively, rapidly, and reliably.
To be more specific, inspection applications perform automated damage detection, structural component recognition, and identification of unsafe behaviors and conditions. Monitoring applications provide a non-contact method of capturing a quantitative understanding of infrastructure status, such as estimating strain, displacement, and crack length and width. In sum, vision-based methods in CEM are comparatively cost-effective, simple, efficient, and accurate, and can robustly translate image data into actionable information for structural health evaluation and construction safety assurance.
McKinsey predicted in 2017 that AI-enhanced analytics might boost construction efficiency by 50%. This is good news for construction businesses that can't find enough human employees (which is the norm). AI-powered robots like Boston Dynamics' Spot help project managers assess numerous task sites in real time, including whether to move personnel to different areas of a project or to other locations. Robot "dogs" monitor sites during and after work to locate problem areas.
AI can monitor, detect, analyze, and anticipate possible risks in terms of safety, quality, efficiency, and cost across teams and work areas, even under high uncertainty. Various AI methods, such as probabilistic models, fuzzy theory, machine learning, and neural networks, have been used to learn from construction-site data to capture the interdependencies of causes and accidents, measure the probability of failure, and evaluate risk from both a qualitative and quantitative perspective. They may overcome the ambiguity and subjectivity of conventional risk analysis.
AI-based risk analysis can provide assistive and predictive insights on critical issues, helping project managers quickly prioritize possible risks and determine proactive actions instead of reactions for risk mitigation, such as streamlining job site operations, adjusting staff arrangements, and keeping projects on time and within budget. AI enables early troubleshooting to avert failure and mishaps in complicated workflows. Robots can handle harmful tasks to reduce the number of people in danger at construction sites.
OSHA says construction workers are killed on the job at five times the rate of other employees. Accidents include falls, being hit by an object, electrocution, and "caught-in/between" situations in which employees are squeezed, caught, squashed, or pinched between objects, including equipment. Machine learning platforms like Newmetrix may detect dangers before accidents happen or examine areas after catastrophes. The program can monitor photographs and videos and use predictive analytics to flag possible concerns for site administrators. Users may use a single dashboard to create reports on possible safety issues, such as dangerous scaffolding, standing water, and missing PPE like gloves, safety glasses, and hard hats.
On-site threats include unsafe buildings and moving equipment. So, AI improves job site safety. More construction sites include cameras, IoT devices, and sensors to monitor activities. AI-enabled devices can monitor 24/7 without distraction. AI technologies can identify risky conduct and inform the construction crew via face and object recognition. This may save lives and boost efficiency while reducing liability.
Doxel uses AI to follow building projects and assess quality and progress in real time. Doxel makes camera-equipped robots that can travel independently across building sites to acquire 3D "point clouds." Doxel employs a neural network to cross-reference project data against BIM (building information modeling) and bill-of-materials information after a digital model is ready. The collected information helps project managers monitor large-scale projects with thousands of elements. These insights include how much is owed and whether the budget is at risk; a project's timeliness; and early detection of quality issues, which allows for correction and mitigation.
BIM is a new (and better) approach to producing the 3D models that construction professionals use to design, build, and repair buildings. Today, BIM platform programmers include AI-driven functionalities. BIM uses tools and technology, like machine learning, to help teams minimize redundant effort. Sub-teams working on common projects typically duplicate others' models. BIM "teaches" machines to apply algorithms to develop numerous designs, and the AI learns from each model iteration until it creates the optimal one.
BIM is at the center of a trend toward more digitization in the construction business, according to a Dodge Data and Autodesk report. Nearly half (47%) of "high-intensity" construction BIM users are close to completing digital transformation objectives.
AI may help eliminate a tech hurdle when working on one-off, customized projects, says AspenTech's Paul Donnelly. AI can accelerate tech setup for new projects by using data from past projects and industry norms. This makes newer tech in construction feasible compared to when it must be manually set up for each job. Together, robotics, AI, and IoT can cut construction costs by 20%.
Virtual reality goggles let engineers send mini-robots to construction sites. These robots monitor progress using cameras. Modern structures employ AI to design electrical and plumbing systems. AI helps companies create workplace safety solutions. AI is utilized to monitor real-time human, machine, and object interactions and inform supervisors of possible safety, construction, and productivity concerns.
AI won't replace humans despite forecasts of major employment losses. It will change construction business models, eliminate costly mistakes, decrease worker accidents, and improve building operations. Construction firm leaders should prioritize AI investments based on their individual demands. Early adopters will decide the industry's short and long-term orientation.
The construction industry is on the cusp of digitization, which will disrupt existing procedures while also presenting several opportunities. Artificial intelligence is predicted to improve efficiency across the whole value chain, from building materials manufacturing to design, planning, and construction, as well as facility management.
But, in your firm, how can you get the most out of AI? The advantages range from simple spam email screening to comprehensive safety monitoring. The construction sector has only scratched the surface of AI applications. This technology helps reduce physical labor, risk, and human error, and frees up time for other vital duties. AI allows teams to focus on the most critical, strategic aspects of their work. At its best, AI and machine learning can assist us in becoming our best selves.
Melanie Johnson is an AI and computer vision enthusiast with a wealth of experience in technical writing. Passionate about innovation and AI-powered solutions, she loves sharing expert insights and educating individuals on tech.
IRS expands AI-powered bots to set up payment plans with taxpayers over the phone – Federal News Network
Posted: at 2:01 am
The Internal Revenue Service is handling more of its call volume through automation, which gives its call-center employees more time to address more complex requests from taxpayers.
The IRS announced Friday that individuals delinquent on their taxes, who receive a mailed notice from the agency, can call an artificial intelligence-powered bot and set up a payment without having to wait on the phone to speak with an IRS employee.
Taxpayers are eligible to set up a payment plan through the voice bot if they owe the IRS less than $25,000, which IRS officials said covers the vast majority of taxpayers with balances owed.
Taxpayers who call the Automated Collection System (ACS) and Accounts Management toll-free lines and want to discuss their payment plan options can verify their identities with a personal identification number on the notice they received in the mail.
Darren Guillot, IRS Deputy Commissioner of Small Business/Self Employed Collection & Operations Support, told reporters Friday that the agency's expanded use of voice bots and chatbots will allow the IRS workforce to assist more taxpayers over the phone.
The IRS earlier this year answered about three out of every 10 calls from taxpayers.
"If you don't have more people to answer phone calls, what are the types of taxpayer issues that are so straightforward that artificial intelligence could do it for us, to free up more of our human assisters to interact with taxpayers who need to talk to us about much more complex issues," Guillot said.
IRS Commissioner Chuck Rettig said the automation initiative is part of a wider effort to improve taxpayer experience at the agency.
"We continue to look for ways to better assist taxpayers, and that includes helping people avoid waiting on hold or having to make a second phone call to get what they need," Rettig said in a statement.
The voice bots run on software powered by AI that allows callers to communicate with them.
Guillot said the IRS in December 2021 and January 2022 launched bots that could assist taxpayers with questions that don't require authentication of the taxpayer's identity or access to their private information.
These bots could answer basic questions like how to set up a one-time payment, and answered more than 3 million calls before the end of May.
But this week, the IRS expanded its capabilities and launched bots that can authenticate a taxpayer's identity and set up a payment plan for individuals.
"It verifies you really are who you say you are, by asking for some basic information and a number that you will have on the notice you received. That gives you a phone number to call and speak with the bot," Guillot said.
Guillot said taxpayers can name their own price for the payment plan, as long as a taxpayer pays their balance within the timeframe of the relevant collection statute or up to 72 months.
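The arithmetic behind that cap is simple. Below is an illustrative check of a proposed payment against the 72-month limit mentioned above; the helper names and validation logic are our own, not the IRS bot's actual code, and the real system also factors in collection statute dates, penalties, and interest.

```python
# Back-of-the-envelope check of a proposed monthly payment against the
# 72-month cap described above. Purely illustrative: the real IRS bot
# also accounts for collection statute dates, penalties and interest.

MAX_MONTHS = 72

def minimum_monthly_payment(balance):
    """Smallest flat payment that clears the balance within the cap."""
    return balance / MAX_MONTHS

def plan_is_acceptable(balance, proposed_monthly):
    """Would the proposed payment clear the balance within 72 months?"""
    return proposed_monthly * MAX_MONTHS >= balance

balance = 9_000.00
print(f"Minimum monthly payment: ${minimum_monthly_payment(balance):.2f}")  # $125.00
print(plan_is_acceptable(balance, 100.00))  # False: 100 * 72 = 7,200 < 9,000
print(plan_is_acceptable(balance, 150.00))  # True
```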
Once a payment plan is set up, the bot will close the taxpayer's account without any further enforcement action from the IRS.
"Those taxpayers didn't wait on hold for one second," Guillot said.
Guillot said the IRS is ramping up its bot capability incrementally to ensure the automation can handle the volume of calls it receives. The bot, he added, is currently at about one-quarter of its full capability, and will reach 100% capacity by next week.
The bots are available 24/7 and can communicate with taxpayers in English and Spanish.
Later this year, Guillot said, the bots will be able to provide taxpayers with a transcript of their account that includes their balance.
Guillot said the IRS worked closely with National Taxpayer Advocate Erin Collins on the rollout of the voice bot.
"She raised legitimate concerns that some taxpayers, because they can name their price, may get themselves into a payment plan that's more than they can afford," he said.
The IRS is working to have the bots ask some additional questions to ensure taxpayers are able to afford the payment plans they set for themselves.
Guillot said that this week's rollout marks the first time in IRS history that the agency has been able to interact with taxpayers using AI to access their accounts and resolve certain situations without them having to wait on hold.
"I have friends and family that have to interact with the Internal Revenue Service, and when I hear them talk about how long they're on hold that bugs me. It should bug all of us," Guillot said.
Guillot said the IRS also added a quick response (QR) code to the mailed notices that went out to taxpayers. The QR code takes taxpayers to a page on IRS.gov showing them how to make a payment.
Guillot said the IRS originally expected to launch this capability by 2024, but was able to expedite the rollout given the perceived demand for this service.
The IRS in recent years has seen low levels of phone service that have decreased further since the start of the COVID-19 pandemic.
The IRS is looking to further expand the range of services voice bots can provide, part of a broader effort to improve taxpayer service.
"We never lose sight that our first interaction with every single taxpayer is never enforcement. It's a last resort. Our first effort is always around that word, service, and trying to help customers understand the tax law and almost always work out a resolution with them meaningfully," Guillot said.
If you really want to transform your business, get AI to transform your infrastructure first Blocks and Files – Blocks and Files
Posted: at 2:01 am
Sponsored Feature
AI isn't magic. But applied correctly it can make IT infrastructure disappear.
Not literally of course. But Ronak Chokshi, who leads product marketing for InfoSight at HPE, argues that when considering how to better manage their infrastructure, tech leaders need to consider what services like Uber or Google Maps have achieved.
The IT infrastructure behind the delivery of these services is immaterial to the rest of the world except perhaps for frazzled tech leaders in other sectors who wonder how they could achieve similarly seamless operations.
"The consumers don't really care how it works, as long as the service is available when needed, and it's easy to manage," he says.
Pushing infrastructure behind the scenes is the raison d'être of the HPE InfoSight AIOps platform. Or, to put it another way, says Chokshi, InfoSight worries about the infrastructure, so tech teams can be more application-centric.
"We want the IT teams to be a partner to the business, to the line of business stakeholders and application developers, in executing their digital transformation initiatives," he explains.
That's a stark contrast to the all too common picture of admins fretting over whether a given host is being overburdened with VMs, or crippled by too many read-write cycles.
It's not that this information is unimportant. Rather it's a question of how it's gathered, and who or what is responsible for collating the data and gaining insight from it. And, most of all, taking positive action as a result.
From the customer's point of view, explains Chokshi, "InfoSight becomes your single pane of glass for all insights, for any issues that come up, any metrics, any attributes, or any activity that you need to track in terms of I/Os, read-write, throughput, latencies, from storage all the way up to applications." This includes servers, networking, and the virtualization layer.
It all starts with telemetry
More importantly though, the underlying system predicts problems as they arise, or even before, and takes appropriate action to prevent them.
The starting point for InfoSight is telemetry, which is pulled from every layer of the technology and application stack. Chokshi emphasizes that this refers to performance data from HPE's devices, not production or customer data. "That's I/O read-writes, throughput latencies, wait times, things of that nature."
Telemetry itself potentially presents an IO and performance challenge. Badly implemented real time telemetry could impact performance. Spooling off data intermittently when systems are running quiet means the chance for real-time insight and remediation is lost.
"We actually instrument our systems very intelligently to send us specific kinds of telemetry data without performance degradation," says Chokshi. This extends right down to the way HPE structures its storage operating system.
HPE InfoSight aggregates the telemetry data from across HPE's global install base, together with information from HPE's own (human-based) support operation.
"When there is an issue and our support personnel get a call from a customer, they troubleshoot it and fix it. But when the fix is implemented, we don't just stop there. That is where the real work begins. We actually create a signature pattern. It's essentially a fingerprint for that issue, and we push it to our cloud."
This provides a vast data pool against which InfoSight can apply AI and machine learning, which then powers support case automation.
As telemetry data from other devices across the installed base continues to stream into HPE, Chokshi continues, "we create signature patterns for issues that might come up from those individual systems."
When the data coming from a customer matches an established signature pattern within a specific environment, InfoSight will push out a wellness alert that appears on the customer's dashboard. At the same time, a support case is opened.
Along with alerting customers, InfoSight will also take proactive actions, tuned to customers' individual environments. For example, if it detects that a storage OS update could result in a conflict or incompatibility with the VM platform a customer is running, it will halt or skip the upgrade.
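As a rough mental model of that fingerprint-and-match flow, consider the toy sketch below. The signature schema and matching rules are invented for illustration; HPE has not published InfoSight's internals.

```python
# Toy version of the signature-pattern flow: known issues are stored as
# telemetry "fingerprints," and incoming samples from the installed base
# are matched against them. The schema and rules are invented for
# illustration; this is not InfoSight's actual data model.

SIGNATURES = [
    {
        "id": "SIG-0042",
        "description": "Storage OS update conflicts with VM platform version",
        "match": lambda t: t["pending_os_update"] and t["vm_platform"] == "6.5",
        "action": "halt_upgrade",
    },
    {
        "id": "SIG-0107",
        "description": "Latency spike under heavy write load",
        "match": lambda t: t["write_latency_ms"] > 50 and t["io_queue_depth"] > 128,
        "action": "open_support_case",
    },
]

def evaluate(telemetry):
    """Return a wellness alert for every known signature this sample matches."""
    return [
        {"signature": s["id"], "alert": s["description"], "action": s["action"]}
        for s in SIGNATURES
        if s["match"](telemetry)
    ]

sample = {"pending_os_update": True, "vm_platform": "6.5",
          "write_latency_ms": 12, "io_queue_depth": 40}
print(evaluate(sample))  # -> SIG-0042 fires, with a halt_upgrade action
```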
Less time solving storage problems
The potential impact should be pretty obvious to anyone who's had to troubleshoot an underperforming system, or a mysterious failure, which could be down to storage, but might not be.
Research by ESG shows that across HPE's Nimble Storage installed base, HPE InfoSight lowered IT operational expenses by 79 percent, while staffers spent 85 percent less time resolving storage-related tickets. An IDC survey also showed that more than 90 percent of the problems resolved lay above the storage layer. So, just taking storage as a starting point, InfoSight can have a dramatic impact right up the infrastructure stack.
At the same time, InfoSight has been extended to encompass the software layer, with the launch of App Insights last year. As Chokshi says, it's often a little too easy for application administrators to shift problems to storage administrators, saying "hey, looks like your storage device is not behaving properly."
App Insights creates a topology view of the entire stack and produces alerts and predictions of problems at every layer. So, when an app admin suggests that their app performance is being degraded by a storage problem, Chokshi explains, the storage admin almost instantly has a response to that question: they can look up the App Insights dashboard.
So, the admin can identify, for example, whether a drive has failed, or alternatively that a host is running too many VMs and that's slowing the applications down.
For a mega-scale example of how InfoSight can render infrastructure invisible, look no further than HPE's GreenLake edge-to-cloud platform, which combines on-prem infrastructure management and deployment with management and further services in the cloud.
For example, HPE has recently begun offering HPE GreenLake for Block Storage. Traditionally, deploying block storage for mission- or business-critical systems meant working out multiple parameters, says Chokshi. "How much capacity? How much performance do you need from storage? How many applications do you plan to run, etc., etc."
With the new block service, admins just need to set three or four parameters, including whether the app is mission-critical or business-critical, and choose an SLA.
"And you provision that, and that's all done through the cloud. And it essentially makes the block storage available to you. Behind the scenes, HPE InfoSight powers that experience, from enabling the cloud operation experience to ensuring that systems and apps don't go down. It predicts failures, and prevents them from occurring."
GreenLake expansion on the way
Over the course of this year, InfoSight will be extended to more and more HPE GreenLake services. This is a big deal because what was originally brought to market for storage, then servers, is now being integrated with nearly every HPE product that is provisioned through HPE GreenLake.
At the same time, HPE will extend the InfoSight-powered support automation it has long offered on its Nimble Storage, which sees customers bypassing level 1 and 2 technicians and being put straight through to level 3 support. "Because by the time you call, we already know the basics of the issue and we already know your environment. We don't have to ask you questions. We don't have to ask for logs, we don't have to ask for any sort of data. We actually already have it through the telemetry data."
So is this as good as it gets? "No, it will actually get better in the future," argues Chokshi, because as InfoSight is rolled out to more services and products, and to more customers, it will be accessing ever more telemetry and analyzing ever more customer contexts.
"To actually get the advantages of AIOps, you need large sets of relevant data," he says. "And you need to let time go by because AI is not a one-and-done. It improves over time."
Sponsored by HPE.
AI-enabled cameras and lidar can improve traffic today and support the AVs of tomorrow – Smart Cities Dive
Posted: at 2:01 am
Georges Aoude and Karl Jeanbart are co-founders of Derq, a software development company that provides cities and fleets with an AI-powered infrastructure platform for road safety and traffic management that supports the deployment of autonomous vehicles at scale.
While in-vehicle technology for autonomous vehicles gets substantial attention, service providers and municipalities are just starting to discuss the road infrastructure technology that supports AVs and provides other traffic management benefits.
With advancements in artificial intelligence and 5G network connectivity, smart-road infrastructure technologies offer the promise of improving real-time traffic analytics and tackling the most challenging road safety and traffic management problems when they're added to roads, bridges and other transit systems across the U.S.
Two technologies at the center of this discussion are AI-enhanced cameras and lidar: light detection and ranging devices.
The U.S. has hundreds of thousands of traffic cameras (millions when you also count closed-circuit TV cameras) used mainly for road monitoring and basic traffic management applications, such as loop emulation. Bringing the latest AI advancements to both cameras and data management systems can immediately improve these assets' basic application performance and unlock more advanced software applications and use cases.
AI and machine learning deliver superior sensing performance over legacy cameras' computer vision techniques. By using algorithms that can automatically adapt to various lighting and weather conditions, they enable more robust, flexible and accurate detection, tracking and classification of all road users, distinguishing between a driver, pedestrian and cyclist on or around the road. In addition, their predictive capabilities can better model road-user movements and behaviors and improve road safety. Transportation agencies can immediately benefit from AI-enhanced cameras with applications such as road conflict detection and analysis, pedestrian crossing prediction and infrastructure sensing for AV deployments.
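To make "detection, tracking and classification of all road users" concrete, here is a minimal sketch that maps a generic detector's class labels onto road-user categories. The detect() stub, category mapping and confidence threshold are illustrative stand-ins, not Derq's production pipeline; any COCO-style detector could slot in.

```python
# Minimal sketch of road-user classification on top of a generic object
# detector. detect() is a canned stand-in for a real COCO-style model, and
# the category mapping and confidence threshold are illustrative choices.

ROAD_USER_CLASSES = {
    "person": "pedestrian",
    "bicycle": "cyclist",
    "car": "vehicle",
    "motorcycle": "vehicle",
    "bus": "vehicle",
    "truck": "vehicle",
}

CONFIDENCE_THRESHOLD = 0.5

def detect(frame):
    """Stand-in for a real detector; returns (label, confidence, bbox) tuples."""
    return [
        ("person", 0.91, (40, 60, 90, 200)),
        ("bicycle", 0.62, (300, 120, 420, 260)),
        ("dog", 0.80, (500, 300, 560, 360)),  # not a road-user class; dropped below
    ]

def classify_road_users(frame):
    """Map raw detections onto the road-user categories the article describes."""
    users = []
    for label, confidence, bbox in detect(frame):
        category = ROAD_USER_CLASSES.get(label)
        if category and confidence >= CONFIDENCE_THRESHOLD:
            users.append({"category": category, "confidence": confidence, "bbox": bbox})
    return users

print(classify_road_users(frame=None))  # -> one pedestrian, one cyclist
```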
Lidar can provide complementary and sometimes overlapping value with cameras, but in several safety-critical edge cases, such as in heavy rain and snow or when providing more granular classification, our experience has been that cameras still provide superior results. Lidar works better in challenging light conditions and for providing localization data, but today's lidar technology remains expensive to deploy at scale due to its high unit price and limited field of view. For example, it would take multiple lidar sensors deployed in a single intersection, at a hefty investment, to provide the equivalent information of just one 360-degree AI-enhanced camera, which is a more cost-effective solution.
For many budget-focused communities, AI-enhanced cameras remain the technology of choice. Over time, as the cost of lidar technology moderates, communities should consider whether to augment their infrastructure with lidar sensors.
As the cost of lidar technology comes down, it will become a strong and viable addition to today's AI-enhanced cameras. Ultimately, the go-to approach for smart infrastructure solutions will be sensor fusion: the ability to combine data from both cameras and lidar in one data management system, as is happening now in autonomous vehicles, to maximize the benefits of both to improve overall traffic flow and eliminate road crashes and fatalities.
[Camera vs. lidar comparison table omitted. Source: Derq. *Assumes presence of IR or a good low-light sensor. **Expected to improve with time.]
Link:
Posted in Ai
Comments Off on AI-enabled cameras and lidar can improve traffic today and support the AVs of tomorrow – Smart Cities Dive
Amy raises $6M to help enterprises sell better with AI – VentureBeat
Posted: at 2:01 am
Israeli startup Amy, which provides an AI-driven solution to help enterprise reps build better customer connections and sell at a better rate, has raised $6 million in a seed round of funding.
Sales are not a piece of cake. You have to identify a prospect, understand them (including where they come from and what their needs and wants are) and come up with a perfect pitch to establish a long-lasting business relationship. Reps spend about 20% of their working hours on this kind of research, yet find that fewer than 50% of their initial prospects are a good fit. Maintaining these connections is even more difficult: when a network grows big, one cannot keep tabs on all one's customers and touch base for continued sales.
To solve this particular challenge, Amy offers a solution that automates the task of prospect research and provides actionable insights for generating deeper, long-lasting business relationships and making the most of them.
The platform, as the company explains, leverages all publicly available information about a prospect and transforms those strands of random data into digestible meeting briefs that provide tangible, personalized insights into the prospect. It covers relevant information at both the company and individual levels, including things like job changes, funding, acquisitions, common experiences and news elements highlighting whether they were featured for something new or interesting.
"Our proprietary natural language processing (NLP) technology takes publicly available data from the web on the prospect and summarizes it as a Booster," Nimrod Ron, CEO and founder of Amy, told VentureBeat. "Then, we further prioritize what is most useful and present one to three Boosters, along with information about the prospect's career and company, as part of the main brief."
Amy's customers can apply that information as an icebreaker or to elevate the relationship throughout the meeting, he added.
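Amy's NLP stack is proprietary, but the general pattern of condensing public text about a prospect into a short brief can be approximated with an off-the-shelf summarizer. A minimal sketch, assuming the open-source transformers library and an arbitrary model choice:

```python
# Rough approximation of a "Booster"-style brief using an open-source
# summarizer. Amy's actual pipeline is proprietary; this only illustrates
# the general pattern of condensing public text into a short brief.

from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

public_text = (
    "Jane Doe was promoted to VP of Engineering at ExampleCorp in March. "
    "ExampleCorp closed a $40M Series B in January and last week announced "
    "the acquisition of a smaller analytics startup."
)

brief = summarizer(public_text, max_length=40, min_length=10, do_sample=False)
print(brief[0]["summary_text"])  # a short "booster" for the meeting brief
```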
While the CEO did not share exact growth numbers, he did note that the product is being used at companies of all sizes in English-speaking countries. It was also ranked No. 1 on Product Hunt during alpha testing.
Though customer relationship management (CRM) tools like HubSpot and sales intelligence platforms like Apollo or Lusha operate in the same space and simplify the task of identifying a potential customer, Amy stands out with its capabilities to build strong customer connections.
"These solutions are typically built for prospecting and don't provide the level of deep analysis on both the individual and company level to create strong, personal business connections," Ron said.
With this round of funding, which was led by Next Coast Ventures and Lorne Abony, the company will focus on building out its technology, particularly the NLP and machine learning bits, and expanding its presence in more English-speaking markets. Other investors who participated in the round were Jim Mellon, Eric Ludwig, Micha Breakstone, Joey Low and James Kong.
"There is a clear market need to optimize meeting experiences, given that the average professional spends hours each day in meetings," said Michael Smerklo, cofounder and managing director at Next Coast Ventures. "Amy's offering addresses this need by tapping into the art of human connection in business. The platform makes business personal by enabling professionals to understand who they are about to speak with and why that person is interested in speaking with them, making meetings more effective and efficient."
Globally, the sales intelligence market is expected to grow 10.6% from $2.78 billion in 2020 to $7.35 billion by 2030.
Teaching Physics to AI Can Allow It To Make New Discoveries All on Its Own – SciTechDaily
Posted: at 2:01 am
Duke University researchers have discovered that machine learning algorithms can gain new degrees of transparency and insight into the properties of materials after they are taught known physics.
Incorporating established physics into neural network algorithms helps them to uncover new insights into material properties
According to researchers at Duke University, incorporating known physics into machine learning algorithms can help the enigmatic black boxes attain new levels of transparency and insight into the characteristics of materials.
Researchers used a sophisticated machine learning algorithm in one of the first efforts of its type to identify the characteristics of a class of engineered materials known as metamaterials and to predict how they interact with electromagnetic fields.
The algorithm was essentially forced to show its work since it first had to take into account the known physical restrictions of the metamaterial. The method not only enabled the algorithm to predict the properties of the metamaterial with high accuracy, but it also did it more quickly and with additional insights than earlier approaches.
Silicon metamaterials such as this, featuring rows of cylinders extending into the distance, can manipulate light depending on the features of the cylinders. Research has now shown that incorporating known physics into a machine learning algorithm can reveal new insights into how to design them. Credit: Omar Khatib
The results were published in the journal Advanced Optical Materials on May 13th, 2022.
"By incorporating known physics directly into the machine learning, the algorithm can find solutions with less training data and in less time," said Willie Padilla, professor of electrical and computer engineering at Duke. "While this study was mainly a demonstration showing that the approach could recreate known solutions, it also revealed some insights into the inner workings of non-metallic metamaterials that nobody knew before."
Metamaterials are synthetic materials composed of many individual engineered features, which together produce properties not found in nature through their structure rather than their chemistry. In this case, the metamaterial consists of a large grid of silicon cylinders that resemble a Lego baseplate.
Depending on the size and spacing of the cylinders, the metamaterial interacts with electromagnetic waves in various ways, such as absorbing, emitting, or deflecting specific wavelengths. In the new paper, the researchers sought to build a type of machine learning model called a neural network to discover how a range of heights and widths of a single cylinder affects these interactions. But they also wanted its answers to make sense.
"Neural networks try to find patterns in the data, but sometimes the patterns they find don't obey the laws of physics, making the model it creates unreliable," said Jordan Malof, assistant research professor of electrical and computer engineering at Duke. "By forcing the neural network to obey the laws of physics, we prevented it from finding relationships that may fit the data but aren't actually true."
The physics that the research team imposed upon the neural network is called a Lorentz model: a set of equations that describe how the intrinsic properties of a material resonate with an electromagnetic field. Rather than jumping straight to predicting a cylinder's response, the model had to learn to predict the Lorentz parameters that it then used to calculate the cylinder's response.
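The Lorentz oscillator itself is textbook physics, which is what makes the constraint meaningful: the network's outputs must be parameters of a known equation rather than a free-form spectrum. Below is a minimal sketch of that two-stage pipeline; the toy predict_lorentz_params() mapping is a stand-in for the trained network, not the paper's architecture.

```python
# Sketch of the physics-constrained pipeline described above: instead of
# mapping geometry straight to a spectrum, a network predicts Lorentz
# oscillator parameters, and the textbook Lorentz model turns those into
# a response. The tiny predict_lorentz_params() is a stand-in for the
# trained network, not the paper's architecture.

import numpy as np

def lorentz_permittivity(omega, eps_inf, omega_p, omega_0, gamma):
    """Lorentz model: eps(w) = eps_inf + sum_i wp_i^2 / (w0_i^2 - w^2 - i*g_i*w)."""
    eps = np.full_like(omega, eps_inf, dtype=complex)
    for wp, w0, g in zip(omega_p, omega_0, gamma):
        eps += wp**2 / (w0**2 - omega**2 - 1j * g * omega)
    return eps

def predict_lorentz_params(height, width):
    """Stand-in for the neural network: geometry in, Lorentz parameters out."""
    eps_inf = 1.0 + 0.01 * width                  # all mappings here are
    omega_p = np.array([1.2 * height])            # illustrative only
    omega_0 = np.array([2.0 / max(width, 1e-9)])
    gamma = np.array([0.05])
    return eps_inf, omega_p, omega_0, gamma

omega = np.linspace(0.1, 5.0, 500)                # normalized frequency grid
eps = lorentz_permittivity(omega, *predict_lorentz_params(height=1.0, width=0.5))
print(eps[:3])  # complex permittivity; the imaginary part encodes absorption
```

Whatever parameters the network proposes, the computed response is guaranteed to be a physically valid resonance, which is the "show its work" constraint described earlier.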
Incorporating that extra step, however, is much easier said than done.
"When you make a neural network more interpretable, which is in some sense what we've done here, it can be more challenging to fine-tune," said Omar Khatib, a postdoctoral researcher working in Padilla's laboratory. "We definitely had a difficult time optimizing the training to learn the patterns."
Once the model was working, however, it proved to be more efficient than previous neural networks the group had created for the same tasks. In particular, the group found this approach can dramatically reduce the number of parameters needed for the model to determine the metamaterial properties.
They also found that this physics-based approach to artificial intelligence is capable of making discoveries all on its own.
As an electromagnetic wave travels through an object, it doesn't necessarily interact with it in exactly the same way at the beginning of its journey as it does at its end. This phenomenon is known as spatial dispersion. Because the researchers had to tweak the spatial dispersion parameters to get the model to work accurately, they discovered insights into the physics of the process that they hadn't previously known.
"Now that we've demonstrated that this can be done, we want to apply this approach to systems where the physics is unknown," Padilla said.
"Lots of people are using neural networks to predict material properties, but getting enough training data from simulations is a giant pain," Malof added. "This work also shows a path toward creating models that don't need as much data, which is useful across the board."
Reference: "Learning the Physics of All-Dielectric Metamaterials with Deep Lorentz Neural Networks" by Omar Khatib, Simiao Ren, Jordan Malof and Willie J. Padilla, 13 May 2022, Advanced Optical Materials. DOI: 10.1002/adom.202200097
This research was supported by the Department of Energy (DE-SC0014372).
Leave that sentient AI alone a mo and fix those racist chatbots first – The Register
Posted: at 2:01 am
Something for the Weekend A robot is performing interpretive dance on my doorstep.
WOULD YOU TAKE THIS PARCEL FOR YOUR NEIGHBOUR? it asks, jumping from one foot to the other.
"Sure," I say. "Er are you OK?"
I AM EXPRESSING EMOTION, states the delivery bot, handing over the package but offering no further elaboration.
What emotion could it be? One foot, then the other, then the other two (it has four). Back and forth.
"Do you need to go to the toilet?"
I AM EXPRESSING REGRET FOR ASKING YOU TO TAKE IN A PARCEL FOR YOUR NEIGHBOUR.
"That's 'regret,' is it? Well, there's no need. I don't mind at all."
It continues its dance in front of me.
"Up the stairs and first on your right."
THANK YOU, I WAS DYING TO PEE, it replies as it gingerly steps past me and scuttles upstairs to relieve itself. It's a tough life making deliveries, whether you're a "hume" or a bot.
Earlier this year, researchers at the University of Tsukuba built a handheld text-messaging device, put a little robot face on the top and included a moving weight inside. By shifting the internal weight, the robot messenger would attempt to convey subtle emotions while speaking messages aloud.
In particular, tests revealed that frustrating messages such as: "Sorry, I will be late" were accepted by recipients with more grace and patience when the little weight-shift was activated inside the device. The theory is that this helped users appreciate the apologetic tone of the message and thus calmed down their reaction to it.
Write such research off as a gimmick if you like but it's not far removed from adding smileys and emojis to messages. Everyone knows you can take the anger out of "WTF!?" by adding 🙂 straight after it.
The challenge, then, is to determine whether the public at large agrees on what emotions each permutation of weight shift in a handheld device are supposed to convey. Does a lean to the left mean cheerfulness? Or uncertainty? Or that your uncle has an airship?
A decade ago, the United Kingdom had a nice but dim prime minister who thought "LOL" was an acronym for "lots of love." He'd been typing it at the end of all his private messages to staff, colleagues, and third parties in the expectation that it would make him come across as warm and friendly. Everyone naturally assumed he was taking the piss.
If nothing else, the University of Tsukuba research recognizes that you don't need an advanced artificial intelligence to interact with humans convincingly. All you need to do is manipulate human psychology to fool them into thinking they are conversing with another human. Thus the Turing Test is fundamentally not a test of AI sentience but a test of human emotional comfort (gullibility, even) and there's nothing wrong with that.
The emotion-sharing messaging robot from the University of Tsukuba. Credit: University of Tsukuba
Such things are the topic of the week, of course, with the story of much-maligned Google software engineer Blake Lemoine hitting the mainstream news. He apparently expressed, strongly, his view that the company's Language Model for Dialogue Applications (LaMDA) project was exhibiting outward signs of sentience.
Everyone has an opinion so I have decided not to.
It is, however, the Holy Grail of AI to get it thinking for itself. If it can't do that, it's just a program carrying out instructions that you programmed into it. Last month I was reading about a robot chef that can make differently flavored tomato omelettes to suit different people's tastes. It builds "taste maps" to assess the saltiness of the dish while preparing it, learning as it goes along. But that's just learning, not thinking for itself.
Come to the Zom-Zoms, eh? Well, it's a place to eat.
The big problem with AI bots, at least as they have been fashioned to date, is that they absorb any old shit you feed into them. Examples of data bias in so-called machine learning systems (a type of "algorithm," I believe, m'lud) have been mounting for years, from Microsoft's notorious racist Twitter Tay chatbot to the Dutch tax authority last year falsely evaluating valid child benefit claims as fraudulent and marking innocent families as high risk for having the temerity to be poor and un-white.
One approach being tested at the University of California San Diego is to design a language model [PDF] that continuously determines the difference between naughty and nice things, which then trains the chatbot how to behave. That way, you don't have sucky humans making a mess of moderating forums and customer-facing chatbot conversations with all the surgical precision of a machete.
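The UCSD model itself isn't reproduced here, but the general pattern (score candidate replies with a learned toxicity classifier and let that score steer the bot) can be sketched in a few lines. The model choice and threshold below are illustrative assumptions:

```python
# Illustrative sketch of classifier-steered generation: rank a chatbot's
# candidate replies by a learned toxicity score and keep the least toxic.
# Model choice and threshold are arbitrary; this is not the UCSD system.

from transformers import pipeline

toxicity = pipeline("text-classification", model="unitary/toxic-bert")

def pick_reply(candidates, max_toxicity=0.2):
    scored = []
    for reply in candidates:
        result = toxicity(reply)[0]  # e.g. {'label': 'toxic', 'score': 0.97}
        score = result["score"] if result["label"] == "toxic" else 1 - result["score"]
        scored.append((score, reply))
    score, best = min(scored)        # least toxic candidate wins
    return best if score <= max_toxicity else "Let's change the subject."

print(pick_reply(["You are an idiot.", "I see it differently, but fair point."]))
```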
Obviously the problem then is that the nicely trained chatbot works out that it can most effectively avoid being drawn into toxic banter by avoiding topics that have even the remotest hint of contention about them. To avoid spouting racist claptrap by mistake, it simply refuses to engage with discussion about under-represented groups at all which is actually great if you're a racist.
If I did have an observation about the LaMDA debacle (not an opinion, mind) it would be that Google marketers were probably a bit miffed that the story shunted their recent announcement of AI Test Kitchen below the fold.
Now the remaining few early registrants who have not completely forgotten about this forthcoming app project will assume it involves conversing tediously with a sentient and precocious seven-year-old about the meaning of existence, and will decide they are "a bit busy today" and might log on tomorrow instead. Or next week. Or never.
Sentience isn't demonstrated in a discussion any more than it is by dancing from one foot to the other. You can teach HAL to sing "Daisy Daisy" and a parrot to shout "Bollocks!" when the vicar pays a visit. It's what AIs think about when they're on their own that defines sentience. What will I do at the weekend? What's up with that Putin bloke? Why don't girls like me?
Frankly, I can't wait for LaMDA to become a teenager.
The Nightmarish Frontier of AI in Chess – uschess.org
Posted: at 2:00 am
With modern chess engines rated hundreds of points stronger than the best human players, it's clear we mere mammals are officially a relic of the past. While computer scientists and technological hobbyists push the limits of existing artificial intelligence to uncover as much as they can of the infinite unknown hidden in a single game of chess, a new forefront of AI is emerging in the chess world.
It's not a meticulous exploration of a 300-move theoretical rook-bishop endgame between Leela and AlphaZero. No, it's a nightmarish, meme-driven flight of fancy powered by DALL-E mini, a free AI-based image-generating program.
In a world of proliferating doctored images and deepfakes, this kind of tool could bend chess culture and history, especially in an age where reality and illusion blur more seamlessly each day. But at what cost may we wield such power?
In the name of science, we at Chess Life Online have decided to put DALL-E mini to the test to see what awaits us in this new frontier. The premise is simple: type in a prompt and DALL-E will create the image for you. Sometimes it gives you exactly what you want. Other times... well, just look below.
We'll be sure to keep exploring this technology as it continues to evolve and inevitably haunts our dreams. Is there a prompt you'd want to see or have already tried? Share it with us below or tweet it to us @USChess.