
Category Archives: Ai

Artificial Intelligence Models For Sale, Another Step In The Spread Of AI Accessibility – Forbes

Posted: May 4, 2021 at 8:10 pm


A regular message in this column is that artificial intelligence (AI) won't spread widely while it still requires programmers who can work at the model level. That challenge won't be solved instantly, but it is slowly changing. While technical knowledge is still too often required, there are ways in which development time can be shortened. One that's been gaining ground is the increased availability of pre-built models.

A few years back, a tech CEO loved to talk about the "Cambrian explosion" of deep learning models, as if a large number of models meant real progress in the business world. It doesn't. What matters is the availability of models that are useful for business. In the usual sense of the cliché, the 80/20 rule still applies: while a large number of models might be of interest to academics, a much smaller subset will provide significant value to people attempting to gain insight in the real world.

In an attempt to help companies avoid reinventing the wheel, ElectrifAi has built a body of AI models that can be called by applications. Those models are identified by use case, so developers can quickly narrow down options and choose to test models close to their needed use. I first became aware of the company when it issued a press release about entering the marketplace on Amazon SageMaker. They are also on the Google Cloud Marketplace.

Having worked with other companies using the major cloud marketplaces, I was curious. While there are still long-term questions about such marketplaces, including how acquisitions of companies might impact partner applications on those marketplaces, it was important to find out more.

One key issue about buying models is the fact that privacy is increasingly important. Yet another company seeing data could be a compliance weak spot. "We build and support models for our customers," said Luming Wang, CTO of ElectrifAi. "However, our business model is that we don't see their data and they don't see our code." While the company pre-structures and partially trains models, it provides support and services that help customers tune models to their own use with their own data, without ElectrifAi needing to see any information. Outside of those marketplaces, the company also works with systems integrators and other partners who work with their clients on implementation.

As mentioned earlier, customers are able to choose appropriate models. That also extends to the fact that when AI is mentioned, we're not only discussing deep learning. The models are built with a variety of AI techniques, including rules engines, xgboost, and neural networks (deep learning). "Different domains require different techniques," said Mr. Wang. Still, rules engines can work seamlessly with neural networks for complex problems. For problems involving more than one hundred rules, a neural network has advantages. In between, depending on context and data, either technique or other technologies can be used.
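
As a rough sketch of the kind of hybrid Wang describes, the snippet below lets a small rules engine decide the cases it explicitly covers and defers everything else to a trained model. The rules, the transaction fields, and the stub model are illustrative assumptions, not ElectrifAi's code.

```python
# Hypothetical hybrid dispatcher: explicit rules first, a trained model as fallback.
RULES = [
    (lambda txn: txn["amount"] > 10_000, "manual_review"),
    (lambda txn: txn["country"] not in {"US", "CA"}, "manual_review"),
]

class StubModel:
    """Stand-in for a trained classifier (an xgboost model or neural network)."""
    def predict(self, txn: dict) -> str:
        return "approve" if txn["amount"] < 5_000 else "review"

def score_transaction(txn: dict, model) -> str:
    for condition, outcome in RULES:
        if condition(txn):
            return outcome          # the rules engine decides the easy cases
    return model.predict(txn)       # anything the rules don't cover goes to the model

print(score_transaction({"amount": 120.0, "country": "US"}, StubModel()))
```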

Given the focus on building a library of models for business, it is no surprise that very few of the models have a UI to present their own data. The models are accessed as function calls by the controlling applications. This is a key step in the evolution of AI accessibility. Some AI knowledge is needed to evaluate which models are appropriate, but once that decision has been made, non-AI programmers only need to understand the calls and can then use the results in the wrapper application to address the business problem.
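
A minimal sketch of what "accessed as function calls" can look like in practice, assuming a marketplace model has already been deployed to an Amazon SageMaker endpoint and is invoked through the standard boto3 runtime client; the endpoint name and payload format are illustrative assumptions, not ElectrifAi's actual interface.

```python
import json
import boto3

# Assumes a marketplace model has already been deployed to a SageMaker endpoint.
runtime = boto3.client("sagemaker-runtime")

def classify_invoice(record: dict) -> dict:
    """Call the deployed model endpoint like any other function."""
    response = runtime.invoke_endpoint(
        EndpointName="invoice-classifier",   # hypothetical endpoint name
        ContentType="application/json",
        Body=json.dumps(record),
    )
    return json.loads(response["Body"].read())

# The wrapper application uses the result without touching any model internals.
print(classify_invoice({"vendor": "Acme Corp", "amount": 1250.00}))
```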

This attitude is excellent for the current state of AI in business. It presents AI not as something scary, or something requiring expensive and unique personnel, but rather as another easy-to-call function that can be accessed quickly by existing programmers working to solve a problem. The more programmers can access AI through calls, without having to know the details of a neural network or random forest, the faster AI will spread through the corporate technology infrastructure.

View original post here:

Artificial Intelligence Models For Sale, Another Step In The Spread Of AI Accessibility - Forbes


Three Ways That Organizations Are Under Utilizing AI In Their Customer Experience – Forbes

Posted: at 8:10 pm

Over the last 12 months, we have seen a surge in investment in Artificial Intelligence (AI)-enabled customer self-service technologies, as brands have put in place tools that help deflect calls away from their support teams and allow customers to self-serve.

However, despite these investments, we have also seen that the phone remains a vital customer service channel for many organizations. According to Salesforce data, daily call volume reached an all-time high last year, up 24% compared to 2019 levels. Meanwhile, Accenture found that 58% of customers prefer to speak to a support agent if they need to solve an urgent or complex issue, particularly during times of crisis.

Now, consider one of those calls.

When a customer gets through to an agent, the customer is not thinking about how many calls that agent has already answered that day, what those calls were like, or how they may have affected the agent. The customer, in the moment, is only thinking about solving their particular problem.

That's all very well, you might say.


But, in the face of consistently high call volumes and the strains of working remotely for an extended period, reports are now starting to emerge that many contact center agents are beginning to experience a phenomenon familiar to many nurses and doctors: compassion fatigue. This is the situation where, due to sustained high workloads, agents become emotionally exhausted, edge toward burnout, and become unable to deliver a high level of service.

That, in turn, feeds directly through to the service and experience that the patient or customer receives.

However, Dr Skyler Place, Chief Behavioural Science Officer at Cogito, believes that compassion fatigue is avoidable, and organizations should be using AI to enable and support their agents whilst on a call and, at the same time, manage their well-being and performance.

He believes there are three areas in which organizations are underutilizing AI when trying to improve their customer experience (CX).

The first is that brands should be leveraging AI technology to provide real-time feedback whilst an agent is on a call to support and empower them in the moment.

Secondly, given that many support teams are still working remotely, AI technology can replace the tradition of walking the floor and help supervisors understand how their teams are doing and what sort of coaching and support they need from call to call.

Thirdly, when you combine that data with customer outcome data and apply AI technology, you can identify insights that, as Place puts it, "can help you improve your business processes, your business outcomes and drive macro strategies beyond the call and beyond the call center."

A system that not only provides in-call, real-time support for agents but also intelligently understands call demand, an agent's experience, and in-shift call profiles, so that it can optimize call matching toward positive customer and employee outcomes, can only be a good thing.

Compassion fatigue is real, and organizations need to be managing their agents' performance and well-being if they are to achieve excellent phone-based customer service.

Visit link:

Three Ways That Organizations Are Under Utilizing AI In Their Customer Experience - Forbes


Yet another Google AI leader has defected to Apple – Ars Technica

Posted: at 8:10 pm

AI researcher Samy Bengio (left) poses with his brother Yoshua Bengio (right) for a photo tied to a report from cloud-platform company Paperspace on the future of AI.

Apple has hired Samy Bengio, a prominent AI researcher who previously worked at Google. Bengio will lead "a new AI research unit" within Apple, according to a recent report in Reuters. He is just the latest in a series of prominent AI leaders and workers Apple has hired away from the search giant.

Apple uses machine learning to improve the quality of photos taken with the iPhone, surface suggestions of content and apps that users might want to use, power smart search features across its various software offerings, assist in palm rejection for users writing with the iPad's Pencil accessory, and much more.

Bengio was part of a cadre of AI professionals who left Google to protest the company's firings of its own AI ethics researchers (Margaret Mitchell and Timnit Gebru) after those researchers raised concerns about diversity and Google's approach to ethical considerations around new applications of AI and machine learning. Bengio voiced his support for Mitchell and Gebru, and he departed of his own volition after they were let go.

In his 14 years at Google, Bengio worked on AI applications like speech and image analysis, among other things. Neither Bengio nor Apple has said exactly what he will be researching in his new role in Cupertino.

See the article here:

Yet another Google AI leader has defected to Apple - Ars Technica


Forbes AI 50 Selects Nines Radiology as one of the Most Promising AI Companies – MedTech Dive

Posted: at 8:10 pm

PRESS RELEASE FROM NINES

Forbes AI 50 Selects Nines as one of the Most Promising AI Companies

Palo Alto, Calif., April 30, 2021 -- Forbes recently announced that Nines, Inc. has been selected as one of the 50 most promising private AI companies in the US and Canada. The Forbes AI 50 highlights companies that are using artificial intelligence (AI) in meaningful ways and demonstrating business potential.

"Being included in this list of Most Promising AI Companies is a true honor," said David Stavens, CEO of Nines, Inc. "To achieve this selection is a validation of our unique approach to delivering quality, reliable care to patients in hospitals, imaging centers and radiology practices."

According to Forbes, the magazine received nearly 400 submissions from the US and Canada. From those, the field was whittled down to 100 finalists. The judges, leading experts in AI, then selected the 50 most compelling companies. Nines is the only teleradiology practice included in the list.

The Forbes AI 50 features 31 companies appearing for the first time. At least 13 are valued at $100 million or less, while 13 are valued at $1 billion or more. Silicon Valley remains the hub for AI startups, with 37 of the 50 honorees coming from the San Francisco Bay Area.

###

About Nines

Nines, Inc. and affiliated professional entities do business under the Nines brand. Headquartered in Silicon Valley, Nines provides a better approach to teleradiology, improving patient care with an exceptional team of clinical experts, engineers, and data scientists. These innovations focus on improving efficiencies in clinical workflows, yielding more reliable reports, turnaround times, and system uptime. Hospitals and imaging centers rely on Nines for its unmatched innovation cadence and roster of world-class radiologists. To learn more, visit nines.com.

Read the original post:

Forbes AI 50 Selects Nines Radiology as one of the Most Promising AI Companies - MedTech Dive


John Deere and Audi Apply Intel’s AI Technology – Automation World

Posted: at 8:10 pm

While many earlier applications of artificial intelligence (AI) in manufacturing have focused on data analytics and identifying product and component defects with machine vision, use of the technology is already expanding beyond such applications in the real world. Two good examples of this can be seen at John Deere and Audi, where Intel's AI technology is being used to improve welding processes.

Explaining how Intel got involved in addressing industrial welding applications, Christine Boles, vice president of the Internet of Things Group and general manager of the Industrial Solutions Division at Intel, said, "Intel and Deere first connected at an industry conference to discuss some of the ways technology could be used to solve manufacturing challenges. Arc welding defect detection came up as an industry-wide challenge that Intel decided to take on."

She added that, as with Deere, Intel met with Audi at a conference years ago, and "the first project we worked on was spot welding quality detection in Audi's Neckarsulm plant." Boles added that this initial project with Audi has since expanded into other areas of collaboration around edge analytics and machine learning.

Gas metal arc welding (GMAW) is used at Deere's 52 factories around the world to weld mild- to high-strength steel to create machines and products. Across these factories, hundreds of robotic arms consume millions of pounds of weld wire annually.

The specific welding issue Deere is looking to address with Intel's AI technology is porosity: cavities in the weld metal caused by trapped gas bubbles as the weld cools. These cavities weaken the weld strength.

It's critical to find porosity defects early in the manufacturing process because, if these flaws are found later, re-work or even scrapping of full assemblies is often required.

ADLink's EOS-i6000-M Series AI GigE Vision System for the Edge, featuring the Intel Movidius Myriad VPU.

Intel and Deere worked collaboratively to develop an integrated, end-to-end system of hardware and software that could generate insights in real time at the edge. Using a neural network-based inference engine, the system logs defects in real time and automatically stops the welding process when defects are found so the issue can be corrected.

Combining an industrial grade ADLink Machine Vision Platform and a MeltTools welding camera, the edge system at Deere is powered by Intel Core i7 processors and uses Intel Movidius VPUs (vision processing units) and the Intel Distribution of OpenVINO toolkit.
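
The article does not include the system's code; the sketch below is a rough illustration of the control loop such an edge system implies: score each camera frame for porosity and halt the weld when the score crosses a threshold. The `porosity_score` stub, the threshold, and the callable interfaces are hypothetical stand-ins, not Deere's or Intel's implementation; in the deployed system the inference step runs on the OpenVINO toolkit and Movidius VPUs described above.

```python
import random
import time

POROSITY_THRESHOLD = 0.8          # illustrative confidence cutoff

def porosity_score(frame) -> float:
    """Stand-in for the neural-network inference step (an OpenVINO model on a VPU
    in the deployed system); returns a defect likelihood between 0 and 1."""
    return random.random()        # placeholder score for the sketch

def monitor_weld(get_frame, stop_welder, is_welding, defect_log):
    """Watch the weld in real time and halt the process on a likely defect."""
    while is_welding():
        score = porosity_score(get_frame())
        if score >= POROSITY_THRESHOLD:
            defect_log.append({"time": time.time(), "score": score})
            stop_welder()         # stop before more defective weld is laid down
            return True           # defect flagged for correction
    return False                  # weld completed without a flagged defect
```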

"Deere is leveraging AI and machine vision to solve a common challenge with robotic welding," said Boles. "By leveraging Intel technology and smart infrastructure in their factories, Deere is positioning themselves to capitalize not only on this welding solution, but potentially others that emerge as part of their broader Industry 4.0 transformation."

A key aspect of Audi's approach is its recognition that creating customized hardware and software to handle individual use cases is not preferable. Instead, the company focuses on developing scalable and flexible platforms that allow it to more broadly apply advanced digital capabilities such as data analytics, machine learning, and edge computing.

MeltTools' Sync is a GigE-based arc view camera.

With that perspective in mind, Audi worked with Intel and Nebbiolo Technologies (a supplier of fog/edge computing technologies) on a proof-of-concept project to improve quality control for the welds on vehicles produced at its Neckarsulm, Germany, assembly plant. Approximately 1,000 vehicles are produced each production day at the Neckarsulm factory, with an average of 5,000 welds in each car. That translates to more than 5 million welds each day.

Nine hundred of the 2,500 autonomous robots on its production line at this facility carry welding guns to do spot welds that hold pieces of metal together. To ensure the quality of its welds, Audi performs manual quality control inspections. Because it's impossible to manually inspect 1,000 cars every day, Audi uses the industry's standard sampling method.

"To do this, Audi pulls one car off the line each day, and 18 engineers with clipboards use ultrasound probes to test the welding spots and record the quality of every spot," says Rita Wouhaybi, principal engineer for the Internet of Things Group in the Industrial Solutions Division at Intel and lead architect for Intel's Industrial Edge Insights software.

To cost-effectively test the welds on the other 999 vehicles produced each day, Audi worked with Intel to create algorithms using Intel's Industrial Edge Insights software and the Nebbiolo edge platform for streaming analytics. The machine-learning algorithm developed by Intel's data scientists for this application was trained for accuracy by comparing the predictions it generated to actual inspection data provided by Audi.

The machine learning model uses data generated by the welding controllers, rather than the robot controllers, so that electric voltage and current curves during the welding operation can be tracked. Other weld data used includes configuration of the welds, the types of metal, and the health of the electrodes.

A dashboard lets Audi employees visualize the data, and the system alerts technicians whenever it detects a faulty weld or a potential change in the configuration that could minimize or eliminate the faults altogether.
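
The article does not publish the model itself; as a minimal sketch of the approach it describes, the snippet below trains a gradient-boosted classifier on per-weld features summarizing the controller's voltage and current curves, the weld configuration, and electrode health. The feature table and labels are synthetic stand-ins, and scikit-learn is used purely for illustration rather than as Intel's actual tooling.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Illustrative feature table, one row per spot weld: mean/peak current,
# mean/peak voltage, weld-configuration id, electrode wear.
X = rng.normal(size=(2_000, 6))
y = rng.integers(0, 2, size=2_000)          # 1 = faulty weld (synthetic labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# In production the same features would be computed from the streaming
# controller data and scored in real time at the edge.
print("held-out accuracy:", model.score(X_test, y_test))
```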

Overview of artificial intelligence at the edge in action at Audi.

"Inline inspection of 5,000 welds per car and inferring the results of each weld within 18 msec highlights the scale and real-time analytics response Nebbiolo's edge platform brings to manufacturing," says Pankaj Bhagra, software architect at Nebbiolo. "Our software stack provides the centralized management for distributed edge computing clusters, data ingestion from heterogeneous sources, data cleansing, secure data management and onboarding of AI/ML models, which allowed Audi and Intel data science teams to continuously iterate the machine learning models until they achieved the desired level of accuracy."

According to Intel, the result is a scalable, flexible platform that Audi can use to improve quality control for spot welding and as the foundation for other use cases involving robots and controllers such as riveting, gluing and painting.

"Intel was the project leader," said Mathias Mayer of the Data Driven Production Tech Hub at the Audi Neckarsulm site. "They have production experience as well as knowing how to set up a system that does statistical process control. This is completely new to us. Intel taught us how to understand the data, how to use the algorithms to analyze data at the edge, and how we can work with data in the future to improve our operations on the factory floor."

Henning Loser, senior manager of the Audi Production Lab, agrees: "This solution is like a blueprint for future solutions. We have a lot of technologies in the factory, and this solution is a model we can use to create quality-inspection solutions for those other technologies so that we don't have to rely on manual inspections."

"Moving from manual inspections to an automated, data-driven process has allowed Audi to increase the scope and accuracy of its quality-control processes," said Loser. Other benefits include a 30% to 50% reduction in labor costs at the Neckarsulm factory.

Read the original here:

John Deere and Audi Apply Intel's AI Technology - Automation World


Cover Story: Now it's AI that is eating the world – Which-50

Posted: at 8:10 pm

Marc Andreessen famously observed that "software is eating the world," and according to Jamila Gordon, CEO and founder of Lumachain, so is artificial intelligence (AI).

AI and Machine Learning (ML) are well and truly ingrained in every industry. From healthcare, to agribusiness, to food manufacturing, AI is improving efficiencies and productivity, while also generating common challenges.

To better understand the use cases and impediments of AI across different verticals, we hosted a panel with three AI CEOs who, coincidentally, were all award winners in this year's Women in AI Awards. They included the chief executives of Presagen, Bitwise Agronomy and Lumachain.

AI Use-Cases

Presagen's first product, Life Whisperer, uses cloud-based AI in the embryo selection process during IVF. The AI is trained on more than 20,000 2D embryo images to better identify the most viable embryos. According to Perugini, AI in women's health products and in the fertility sector is not only bringing about efficiencies but also advancing access to healthcare products and services.

"AI in our space, in the fertility sector, is really advancing patient outcomes. It's improving efficiency and standardisation within the clinic environment, it's bringing global technology into clinics that wouldn't otherwise be able to access or afford it. And it's bringing affordable and accessible health care to patients around the world," Perugini says.

Both Bitwise Agronomy and Lumachain are deploying computer vision, camera-based AI, which works to mimic human vision. In agribusiness, Turner says, AI is really starting to take off across horticulture, livestock and all facets of agriculture.

Bitwise Agronomy claims that its AI solution delivers better results to farmers by using accurate data to improve yield and reduce costs.

Farmers can use GoPro cameras to capture footage during their work and upload this to the Bitwise Agronomy Platform. This then provides insights and uses historical data to make predictions around processes including crop performance, harvesting dates, climate impacts, water stress levels, sprays and irrigation systems.

Lumachain deploys its computer vision-based AI to track the safety and security of the food manufacturing supply chain.

According to Gordon, Lumachain provides an end-to-end set of modules for the global food supply chain, which shows where the food has come from, where it has traveled and what the conditions were, as well as ensuring that the products were safely, humanely and efficiently produced, while also ensuring employee safety.

Challenges and Impediments

When it comes to the impediments to AI, the panellists, across their varied industries, were in agreement that they are facing the same challenges.

According to Perugini, "I think there are some common challenges with respect to impediments to AI, mainly around data access and quality and quantity and type of data. And I think the world is kind of shifting their thinking around this. It used to be that everyone was trying to get the largest data sets. I don't think it's like that anymore."

"I think there's a recognition that you need the right data sets. You need globally scalable data sets. Those data sets need to be representative broadly of the domain in which you're using AI to solve a particular problem."

When it comes to AI in healthcare, Perugini highlights the importance of broad data sets across multiple clinical environments with a wide range of patient demographics. Should these data sets not be wide enough, she says, then the AI will need retraining and rebuilding, leading to higher end-user costs.

"So everything that we do as a company is around solving that scalability challenge and getting the right data, which is globally diverse so that we can deliver these products at scale and low cost," she says.

In agribusiness, Turner speaks to the same challenge in different language. "It's about how we curate our data sets," she says. This curation involves varied regions and growing types that are broad and deep enough to ensure that the AI is trained on multiple variables.

Ethics and AI

One of the key challenges facing AI globally is the rise of unethical AI. Rob Sibo, data and analytics senior director at the technology consulting firm Slalom Australia, told Which-50 that the cognitive biases in the human thinking process are replicated in AI.

"Humans create the machine learning algorithms at the moment and a lot of times we propagate the same biases when we design the algorithms or when we collect the data that trains the algorithms," says Sibo.

"There's a lot of biases that get replicated into the machine learning models, which is what concerns me as well, because the model might be perfectly fine, the data might be fine, but the way we frame the problem and the objective is completely skewed. So you just apply a perfectly good model to a skewed problem."

To mitigate the rise of unethical AI, the Australian Government's Department of Industry, Science, Energy and Resources developed an AI Ethics Framework, which it claims helps to achieve better outcomes, reduce risk and encourage good governance.

According to Perugini, who helped to develop the framework, it is reminiscent of Australia's strict regulatory framework for healthcare.

"I think it's a very structured and strong way to manage risk around how many people you are going to impact with this AI, what the outcomes of getting it wrong are, and how we therefore mitigate those risks or ensure that the right data has been utilised or that testing has been done to protect the consumers that we are serving," she says.

The Future Of AI

"AI will be in every industry in some way," says Gordon, "and AI will impact every aspect of our lives."

AI is set to become even more integrated into our lives than it already is, and according to Turner, so much so that we won't even know it is there.

Looking to the future, Gordon sees human collaboration with AI as the next big step, where AI can play a supervising role and the automation of manual tasks is set to increase.

The integration between AI and robotics, according to Turner, will be one of the greatest drivers of efficiency, with the AI acting as the brain and the vision for the robot's physical counterpart.

"I think quantum computing is going to help accelerate the growth so we can eventually get to general AI, which is a fair way off, where your AI can do multiple things at once, more these kinds of futuristic AI robots that you hear of. I think we'll get there, but we're a fair way off from that."

Read more:

Cover Story: Now it's AI that is eating the world - Which-50


New AI Regulations Are Coming. Is Your Organization Ready? – Harvard Business Review

Posted: at 8:10 pm

In recent weeks, government bodies including U.S. financial regulators, the U.S. Federal Trade Commission, and the European Commission have announced guidelines or proposals for regulating artificial intelligence. Clearly, the regulation of AI is rapidly evolving. But rather than wait for more clarity on what laws and regulations will be implemented, companies can take actions now to prepare. That's because three trends are emerging from governments' recent moves.

Over the last few weeks, regulators and lawmakers around the world have made one thing clear: New laws will soon shape how companies use artificial intelligence (AI). In late March, the five largest federal financial regulators in the United States released a request for information on how banks use AI, signaling that new guidance is coming for the finance sector. Just a few weeks after that, the U.S. Federal Trade Commission (FTC) released an uncharacteristically bold set of guidelines on truth, fairness, and equity in AI, defining unfairness, and therefore the illegal use of AI, broadly as any act that causes more harm than good.

The European Commission followed suit, releasing its own proposal for the regulation of AI on April 21, which includes fines of up to 6% of a company's annual revenue for noncompliance, higher than the historic penalties of up to 4% of global turnover that can be levied under the General Data Protection Regulation (GDPR).

For companies adopting AI, the dilemma is clear: On the one hand, evolving regulatory frameworks on AI will significantly impact their ability to use the technology; on the other, with new laws and proposals still evolving, it can seem like it's not yet clear what companies can and should do. The good news, however, is that three central trends unite nearly all current and proposed laws on AI, which means that there are concrete actions companies can undertake right now to ensure their systems don't run afoul of any existing and future laws and regulations.

The first is the requirement to conduct assessments of AI risks and to document how such risks have been minimized (and ideally, resolved). A host of regulatory frameworks refer to these types of risk assessments as "algorithmic impact assessments," sometimes also called "IA for AI," which have become increasingly popular across a range of AI and data protection frameworks.

Indeed, some of these types of requirements are already in place, such as Virginia's Consumer Data Protection Act. Signed into law last month, it requires assessments for certain types of high-risk algorithms. In the EU, the GDPR currently requires similar impact assessments for high-risk processing of personal data. (The UK's Information Commissioner's Office keeps its own plain-language guidance on how to conduct impact assessments on its website.)

Unsurprisingly, impact assessments also form a central part of the EU's new proposal on AI regulation, which requires an eight-part technical document for high-risk AI systems that outlines the foreseeable unintended outcomes and sources of risk for each AI system, along with a risk-management plan designed to address such risks. The EU proposal should be familiar to U.S. lawmakers: it aligns with the impact assessments required in a bill proposed in 2019 in both chambers of Congress, the Algorithmic Accountability Act. Although the bill languished on both floors, it would have mandated similar reviews of the costs and benefits of AI systems related to AI risks. The bill continues to enjoy broad support in both the research and policy communities to this day, and Senator Ron Wyden (D-Oregon), one of its cosponsors, reportedly plans to reintroduce it in the coming months.

While the specific requirements for impact assessments differ across these frameworks, all such assessments have a two-part structure in common: a clear description of the risks generated by each AI system, and a clear description of how each individual risk has been addressed. Ensuring that this documentation exists and captures each requirement for every AI system is a clear way to ensure compliance with new and evolving laws.
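
None of these frameworks mandates a particular file format, but the two-part structure is easy to make concrete. The sketch below pairs each identified risk with its documented mitigation; it is an illustration of that structure, not a template drawn from the EU proposal or the Algorithmic Accountability Act, and the system and risk shown are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    """One identified risk and the documented steps taken to address it."""
    description: str
    severity: str                    # e.g. "low" / "medium" / "high"
    mitigation: str
    residual_risk: str = "unknown"

@dataclass
class ImpactAssessment:
    system_name: str
    intended_use: str
    risks: list = field(default_factory=list)

    def unmitigated(self):
        """Risks still lacking a documented mitigation."""
        return [r for r in self.risks if not r.mitigation.strip()]

assessment = ImpactAssessment(
    system_name="credit-underwriting-model",          # hypothetical system
    intended_use="score consumer loan applications",
    risks=[RiskEntry(
        description="disparate error rates across demographic groups",
        severity="high",
        mitigation="group-level error analysis before each release",
    )],
)
print(len(assessment.unmitigated()), "risks without a documented mitigation")
```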

The second trend is accountability and independence, which, at a high level, requires both that each AI system be tested for risks and that the data scientists, lawyers, and others evaluating the AI have different incentives than those of the frontline data scientists. In some cases, this simply means that AI be tested and validated by different technical personnel than those who originally developed it; in other cases (especially higher-risk systems), organizations may seek to hire outside experts to be involved in these assessments to demonstrate full accountability and independence. (Full disclosure: bnh.ai, the law firm that I run, is frequently asked to perform this role.) Either way, ensuring that clear processes create independence between the developers and those evaluating the systems for risk is a central component of nearly all new regulatory frameworks on AI.

The FTC has been vocal on exactly this point for years. In its April 19 guidelines, it recommended that companies embrace accountability and independence and commended the use of transparency frameworks, independent standards, independent audits, and opening data or source code to outside inspection. (This recommendation echoed similar points on accountability the agency made publicly in April of last year.)

The last trend is the need for continuous review of AI systems, even after impact assessments and independent reviews have taken place. This makes sense. Because AI systems are brittle and subject to high rates of failure, AI risks inevitably grow and change over time, meaning that they are never fully mitigated in practice at a single point in time.

For this reason, lawmakers and regulators alike are sending the message that risk management is a continual process. In the eight-part documentation template for AI systems in the new EU proposal, an entire section is devoted to describing the system in place to evaluate the AI system's performance in the post-market phase, in other words, how the AI will be continuously monitored once it's deployed.

For companies adopting AI, this means that auditing and review of AI should occur regularly, ideally in the context of a structured process that ensures the highest-risk deployments are monitored the most thoroughly. Including details about this process in documentation (who performs the review, on what timeline, and which parties are responsible) is a central aspect of complying with these new regulations.
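
As one illustration of what continuous review can mean in practice, the sketch below compares a model's recent score distribution against its validation-time baseline using a population stability index. The metric choice, the 0.2 threshold, and the simulated scores are assumptions made for the example, not requirements drawn from any of the regulations discussed here.

```python
import numpy as np

def population_stability_index(baseline, recent, bins=10) -> float:
    """Rough drift measure between two score distributions (higher = more drift)."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    expected, _ = np.histogram(baseline, bins=edges)
    actual, _ = np.histogram(recent, bins=edges)
    expected = np.clip(expected / expected.sum(), 1e-6, None)
    actual = np.clip(actual / actual.sum(), 1e-6, None)
    return float(np.sum((actual - expected) * np.log(actual / expected)))

rng = np.random.default_rng(1)
baseline_scores = rng.beta(2, 5, size=10_000)   # scores at validation time
recent_scores = rng.beta(2, 3, size=2_000)      # scores from the latest review window

psi = population_stability_index(baseline_scores, recent_scores)
if psi > 0.2:   # common rule-of-thumb cutoff, used here purely as an illustration
    print(f"PSI={psi:.3f}: flag the model for re-review and document the follow-up")
```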

Will regulators converge on other approaches to managing AI risks outside of these three trends? Surely.

There are a host of ways to regulate AI systems, from explainability requirements for complex algorithms to strict limitations on how certain AI systems can be deployed (e.g., outright banning certain use cases, such as the bans on facial recognition that have been proposed in various jurisdictions throughout the world).

Indeed, lawmakers and regulators have still not even arrived at a broad consensus on what AI itself is, a clear prerequisite for developing a common standard to govern it. Some definitions, for example, are tailored so narrowly that they only apply to sophisticated uses of machine learning, which are relatively new to the commercial world; other definitions (such as the one in the recent EU proposal) appear to cover nearly any software system involved in decision-making, which would apply to systems that have been in place for decades. Diverging definitions of artificial intelligence are simply one among many signs that we are still in the early stages of global efforts to regulate AI.

But even in these early days, the ways that governments are approaching the issue of AI risk have clear commonalities, meaning that the standards for regulating AI are already becoming clear. So organizations adopting AI right now and those seeking to ensure their existing AI remains compliant need not wait to start preparing.

View original post here:

New AI Regulations Are Coming. Is Your Organization Ready? - Harvard Business Review


Precision AI raises $20 million to reduce the chemical footprint of agriculture – PRNewswire

Posted: at 8:10 pm

The financing will support the advancement of a disruptive precision farming platform that deploys swarms of artificially intelligent drones to dramatically reduce herbicide use in row crop agriculture.

Precision AI's drone-based computer vision technology enables surgically precise application of herbicide to individual weeds in row crop farming. By spraying only weeds and avoiding the crop, yields can be maintained at a fraction of the chemical cost. Ultimately, the company's vision is to deploy hives of intelligent drones that will automate the crop protection process throughout the entire growing season, optimizing every square inch of farmland on a per-plant basis.

"Farms of the future must be sustainable and produce healthier foods," said Daniel McCann, CEO and founder of Precision AI. "Using artificial intelligence to target individual weeds is a quantum leap in efficiency and sustainability over today's practices of indiscriminate broadcast application of herbicide."

Herbicide spraying is one of the least efficient agricultural activities, with over 80 percent wasted on bare ground and another 15 percent falling on the crop. While competitors have focused on high-value, low acreage crops, Precision AI's disruptive approach to drone swarming allows for application on large acreage crops at a much lower cost than traditional large farming machinery. It holds the promise to reduce pesticide use by up to 95% while maintaining crop yield and saving farmers up to $52 per acre per growing season. "The cost savings are massive," said McCann. "And the affordable unit economics of drones makes the technology accessible to even the smallest farm".

"We were immediately struck by Precision AI's unique combination of drone technology with precise chemical application. Not only can it minimize toxic runoff to protect waterways and downstream ecology, but also reduce farmers' operating costs and increase their revenue with a zero-chemical residue label," said Laurie Menoud, Partner at At One Ventures and member of the Board of Directors.

"BDC Capital is excited to back an ambitious entrepreneur with a great syndicate of investment partners. Precision AI's technology, by applying Artificial Intelligence technologies in the field, will reduce reliance on crop inputs and enable benefits to farmers, the broader food supply chain, and the environment. We are hopeful that Precision AI can be among the next generation of Agtech solutions that change the industry." said Joe Regan, Managing Partner, Industrial Innovation Venture Fund, BDC Capital, who will be joining the Board of Directors.

The platform also increases producer competitiveness in the global market with integrated food supply chain traceability and proof of sustainable farming practices.

"Autonomous, precision spraying is the future of modern agriculture, and Precision AI's best-in-class technology stack and deep management expertise have the potential to accelerate the development of this industry in exciting ways," said Kevin Lockett, partner at U.S.-based Fulcrum Global Capital. "With an increasingly informed consuming public demanding greater transparency into the food it eats, we are excited to partner with Precision AI and the other co-investors in commercializing multiple ways to reduce the use of traditional chemicals within our food system while increasing sustainability and farmer profitability."

"Precision AI's technology is revolutionizing the agriculture industry. Its innovative application of precision spraying not only prevents the overuse of herbicides but reduces operating costs for farmers and delivers improved and sustainable crop protection practices. Precision AI is a shining example of Canadian cleantech innovation and SDTC is proud to invest in its transformative technology." said Leah Lawrence, President and CEO of Sustainable Development Technology Canada.

About Precision AI

Founded in 2018, Precision AI is at the forefront of the autonomous farming revolution. Using computer vision and robotics, the company provides fully autonomous spraying and crop protection solutions for small to large farms and farm machinery manufacturers. http://www.precision.ai

SOURCE Precision AI

http://www.precision.ai

Go here to read the rest:

Precision AI raises $20 million to reduce the chemical footprint of agriculture - PRNewswire


AI bias is an ongoing problem, but there’s hope for a minimally biased future – TechRepublic

Posted: at 8:10 pm

Removing bias from AI is nearly impossible, but one expert sees a future with potentially bias-free decisions made by machines.

TechRepublic's Karen Roby spoke with Mohan Mahadevan, VP of research for Onfido, an ID and verification software company, about bias in artificial intelligence. The following is an edited transcript of their conversation.

Karen Roby: We talk a lot about AI and the misconceptions involved here. What is the biggest misconception? Do you think it's that people just think that it should be perfect, all of the time?


Mohan Mahadevan: Yeah, certainly. I think whenever we try to replace any human activity with machines, the expectation from us is that it's perfect. And we want to very much focus on finding problems, every little nitpicky problem that the machine may have.

Karen Roby: All right, Mohan. And if you could just break down for us, why does bias exist in AI?

Mohan Mahadevan: AI is driven primarily by data. AI refers to the process by which machines learn how to do certain things, driven by data. Whenever you do that, you have a particular dataset. And any dataset, by definition, is biased, because there is no such thing as a complete dataset, right? And so you're seeing a part of the world, and from that part of the world, you're trying to understand what the whole is like. And you're trying to model behavior on the whole. Whenever you try to do that, it is a difficult job. And in order to do that difficult job, you have to delve into the details of all the aspects, so that you can try to reconstruct the whole as best as you can.

Karen Roby: Mohan, you've been studying and researching AI for many years now. Talk a little bit about your role, there at Onfido, and what your job entails.

Mohan Mahadevan: Onfido is a company that takes a new approach to digital identity verification. So what we do is we connect the physical identity to a digital identity, thereby enabling you to prove who you are, to any service or product that you wish to access. It could be opening a bank account, or it could be renting a car, or opening an account and buying cryptocurrency, in these days. What I do, particularly, is that I run the computer vision and the AI algorithms that power this digital identity verification.


Karen Roby: When we talk about fixing the problem, Mohan, "how" is a very complex issue when we talk about bias. How do we fix it? What type of intervention is needed at different levels?

Mohan Mahadevan: I'll refer back to my earlier point, just for a minute. So what we covered there was that, any dataset by itself is incomplete, which means it's biased in some form. And then, when we build algorithms, we then exacerbate that problem by adding more bias into the situation. Those are two things first that we need to really pay close attention to and handle well. Then what happens is, the researchers that formulate these problems, they bring in their human bias into the problem. That could either fix the problem or make it worse, depending on the motivation of the researchers and how focused they are on solving this particular problem. Lastly, let us assume that all of these things worked out really well. OK? The researchers were unbiased, the dataset completion problem was solved.

The algorithms were modeled correctly. Then you have this perfect AI system that is currently unbiased or minimally biased. There's no such thing as unbiased. It's minimally biased. Then, you take it and apply it in the real world. You take it to the real world. And the real world data is always going to drift and move and vary. So, you have to pay close attention to monitor these systems when they're deployed in the real world, to see that they remain minimally biased. And you have to take corrective actions as well, to correct for this bias as it happens in the real world.


Karen Roby: I think people hear a lot about bias and they think they know what that means. But what does it really mean, when bias exists in an AI?

Mohan Mahadevan: In order to understand the consequences, let's look at all the stakeholders in the equation. You have a company that builds a product based on AI. And then you have a consumer that consumes that product, which is driven by AI. So let's look at both sides, and the consequences are very different on both sides.

On the human side, if I get a loan rejected, it's terrible for me. Right? Even if, for all the Indian people ... So I'm from India. And so for all the Indian people, if an AI system was proven to be fair, but I get my loan rejected, I don't care that it's fair for all Indian people. Right? It affects me very personally and very deeply. So, as far as the individual consumer goes, the individual fairness is a very critical component.

As far as the companies go, and the regulators and the governments go, they want to make sure that no company is systematically excluding any group. So they don't care so much about individual fairness, they look at group fairness. People tend to think of group fairness and individual fairness as separate things. If you just solve the group, you're OK. But the reality is, when you look at it from the perspective of the stakeholders, they're very different consequences.
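
To make the group-versus-individual distinction concrete, here is a small illustrative sketch of a group-fairness check: the spread in approval rates across groups. It is an illustration of the concept Mahadevan describes, not Onfido's code, and a small gap at the group level says nothing about whether any single decision felt fair to the individual affected.

```python
from collections import defaultdict

def approval_rate_gap(decisions):
    """decisions: iterable of (group_label, approved) pairs.
    Returns the gap between the highest and lowest group approval rates."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    rates = {g: approved[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

gap, rates = approval_rate_gap([
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
])
# A small gap suggests group fairness, yet an individual whose loan was
# rejected may still experience the decision as unfair.
print(rates, f"gap={gap:.2f}")
```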

Karen Roby: We'll flip the script a little bit here, Mohan. In terms of the positives with AI, what excites you the most?


Mohan Mahadevan: There are just so many things that excite me. But in regards to bias itself, I'll tell you. Whenever a human being is making a decision on any kind of thing, whether it be a loan, whether it be an admission or whatever, there's always going to be a conscious and unconscious bias, within each human being. And so, if you think of an AI that looks at the behavior of a large number of human beings and explicitly excludes the bias from all of them, the possibility for a machine to be truly or very minimally biased is very high. And this is exciting, to think that we might live in a world where machines actually make decisions that are minimally biased.

Karen Roby: It definitely impacts us all in one way or another, Mohan. Wrapping up here, there's a lot of people that are scared of AI. Anytime you take people, humans, out of the equation, it's a little bit scary.

Mohan Mahadevan: Yeah. I think we should all be scared. I think this is not something that we should take lightly. And we should ask ourselves the hard questions, as to what consequences there can be of proliferating technology for the sake of proliferating technology. So, it's a mixed bag, I wish I had a simple answer for you, to say, "This is the answer." But, overall, if we look at machines like the washing machine, or our cars, or our little Roombas that clean our apartments and homes, there's a lot of really nice things that come out of even AI-based technologies today.

Those are examples of what we think of as old-school technologies, that actually use a lot of AI today. Your Roomba, for example, uses a lot of AI today. So it certainly makes our life a lot easier. The convenience of opening a bank account from the comfort of your home, in these pandemic times, oh, that's nice. AI is able to enable that. So I think there's a lot of reason to be excited about AI, the positive aspects of AI.

The scary parts I think come from several different aspects. One is bias-related. When an AI system is trained poorly, it can generate all kinds of systematic and random biases. That can cause detrimental effects on a per-person and on a group level. So we need to protect ourselves against those kinds of biases. But in addition to that, when it is indiscriminately used, AI can also lead to poor behaviors on the part of humans. So, at the end of the day, it's not the machine that's creating a problem, it's how we react to the machine's behavior that creates bigger problems, I think.

Both of those two areas are important. It's not only the machines giving us good things, but also struggling with bias when the humans don't build them right. Then, when the humans use them indiscriminately and in the wrong way, they can create other problems as well.




Read more:

AI bias is an ongoing problem, but there's hope for a minimally biased future - TechRepublic


guardDog.ai Brings Solution for Securing Networks and Devices in Edge Territory to Latin America via Distribution Agreement with Clean Technologies IP…

Posted: at 8:10 pm

Ongoing Global Expansion of Access to Protection from Threats and Vulnerabilities Not Addressed by Traditional Network and Device Management Solutions

SALT LAKE CITY (BUSINESS WIRE) -- Guard Dog Solutions, Inc., dba guardDog.ai, a rapidly expanding leader in cybersecurity solutions for consumers and businesses, today announced a continued step in its growth through a distribution agreement with Latin America-based Clean Technologies IP LLC.

Under the terms of the agreement, Clean Technologies will lead Latin American distribution for guardDog.ai. guardDog PCS (Protective Cloud Services) and guardDog Fido together form a cloud-based software service with a companion network security device. This easy-to-install cybersecurity solution offers painless threat detection, automated countermeasures, and assessments of the vulnerabilities it finds on your network and attached devices.

In Wi-Fi and wired networks, guardDog.ai protects and warns against threats outside the perimeter of the network or on attached devices that other solutions often don't see, in an area the company calls "edge territory." Devices of every kind are inherently vulnerable to the networks they join. guardDog.ai employs patent-pending artificial intelligence to recognize, expose, and help prevent cybersecurity threats before they become a problem.

The guardDog.ai solution is especially important in countries where government agencies, financial institutions, and large manufacturing corporations are heavily reliant on Wi-Fi and have limited access to wired infrastructure, as they are particularly vulnerable to cyber attacks. Likewise, businesses (and consumers) are struggling to manage the security risks that result from the explosion of workers in remote working environments, and a shortfall of talent or tools to secure them. guardDog.ai addresses these challenges.

"Cyber threats have exploded globally, the security landscape has changed, and solutions haven't kept up," said guardDog.ai CEO Peter Bookman. "Covid-19 has accelerated trends like remote working, which have vastly expanded the attack surface. Clean Technologies understands our vision for changing the approach to the problem in order to get better results, and we look forward to working with them to bring our solution to Latin America."

Peter Zimeri, CEO of Clean Technologies IP LLC, stated, "We are very pleased to partner with guardDog to deliver effective cybersecurity solutions to Wi-Fi and wired networks throughout Latin America. As cybercrimes rise in government and financial institutions, we feel we are providing a tremendous value. We are protecting these networks from ransomware, phishing, identity theft, hacking, scamming, computer viruses and malware, botnets and DDoS attacks, all of which require new approaches and the protection guardDog delivers."

About guardDog.ai

Headquartered in Salt Lake City, Utah, guardDog.ai has developed a cloud-based software service with a companion device that work together to simplify network security. The solution exposes invisible threats on networks, and on the devices attached to them, with patented technology to address and prevent cybersecurity threats before they compromise network environments. Every business, government, healthcare institution, home consumer, and other organization is grappling to find security solutions that adapt to this changing world. guardDog.ai is pioneering new innovations designed to meet these challenges.

Safe Harbor Statement

This press release contains forward-looking statements within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Exchange Act. Forward-looking statements are not a guarantee of future performance and results, and they may not be accurate indications of the times at, or by, which such performance will be achieved.

For more information visit guardDog.ai and explore edge territory analytics at Live Map.

#EdgeTerritory #Cybersecurity #ProtectiveCloudServices #BeyondVPN

Contacts

Sales Contact:

sales@guarddog.ai
833-248-2733

Press Contact:

Snapp Conner

Cheryl Conner

801-806-0150

info@snappconner.com

Read the original here:

guardDog.ai Brings Solution for Securing Networks and Devices in Edge Territory to Latin America via Distribution Agreement with Clean Technologies IP...

