
Category Archives: Artificial Intelligence

Artificial intelligence taking over DevOps functions, survey confirms – ZDNet

Posted: May 18, 2021 at 4:23 am

The pace of software releases has only accelerated, and DevOps is the reason things have sped up. Now, artificial intelligence and machine learning are also starting to play a role in this acceleration of code releases.

That's the word from GitLab's latest survey of 4,300 developers and managers, which finds some enterprises are releasing code ten times faster than in previous surveys. Almost all respondents, 84%, say they're releasing code faster than before, and 57% said code is being released twice as fast, up from 35% a year ago. Close to one in five, 19%, say their code goes out the door ten times faster.

Tellingly, 75% are using AI/ML or bots to test and review their code before release, up from 41% just one year ago. Another 25% say they now have full test automation, up from 13%.

About 21% of survey respondents say the pace of releases has accelerated with the addition of source code management to their DevOps practice (up from 15% last year), the survey's authors add. Another 18% added CI and 13% added CD. Nearly 12% say adding a DevOps platform has sped up the process, while just over 10% have added automated testing.

Developers' roles are shifting toward the operations side as well, the survey shows. Developers are taking on test and ops tasks, especially around cloud, infrastructure and security. At least 38% of developers said they now define or create the infrastructure their app runs on. About 13% monitor and respond to that infrastructure. At least 26% of developers said they instrument the code they've written for production monitoring -- up from just 18% last year.

Fully 43% of our survey respondents have been doing DevOps for between three and five years -- "that's the sweet spot where they've known success and are well-seasoned," the survey's authors point out. In addition, they add, "this was also the year where practitioners skipped incremental improvements and reached for the big guns: SCM, CI/CD, test automation, and a DevOps platform."

Industry leaders concur that DevOps has significantly boosted enterprise software delivery, but caution that it still tends to be seen as an IT activity, versus a broader enterprise initiative. "Just like any agile framework, DevOps requires buy-in," says Emma Gautrey, manager of development operations at Aptum. "If the development and operational teams are getting along, working in harmony, that is terrific, but it cannot amount to much if the culture stops at the metaphorical IT basement door. Without the backing of the whole of the business, continuous improvement will be confined to the internal workings of a single group."

DevOps is a commitment to quick development/deployment cycles, "enhanced by, among other things, an enhanced technical toolset -- source code management, CI/CD, orchestration," says Matthew Tiani, executive vice president at iTech AG. But it takes more than toolsets, he adds. Successful DevOps also incorporates "a compatible development methodology such as agile and scrum, and an organization commitment to foster and encourage collaboration between development and operational staff."

The organizational aspects of DevOps tend to be more difficult, Tiani adds. "Wider adoption of DevOps within the IT services space is common because the IT process improvement goal is more intimately tied to the overall organizational goals. Larger, more established companies may find it hard to implement policies and procedures where a complex organizational structure impedes or even discourages collaboration. In order to effectively implement a DevOps program, an organization must be willing to make the financial and human investments necessary for maintaining a quick-release schedule."

What's missing from many current DevOps efforts is "the understanding and shared ownership of committing to DevOps," says Gautrey. "Speaking to the wider community, there is often a sense that the tools are the key, and that once in place a state of enlightenment is achieved. That sentiment is little different from the early days of the internet, where people would create their website once and think 'that's it, I have web presence.'"

That's where the organization as a whole needs to be engaged, and this comes to fruition "with build pipelines that turn red the moment an automated test fails, and behavioral-driven development clearly demonstrating the intentions of the software," says Gautrey. "With DevOps, there is a danger in losing interaction with individuals over the pursuit of tools and processes. Nothing is more tempting than to apply a blanket ruling over situations because it makes the automation processes consistent and therefore easier to manage. Responding to change means more than how quickly you can change 10 servers at once. Customer collaboration is key."

View original post here:

Artificial intelligence taking over DevOps functions, survey confirms - ZDNet

Posted in Artificial Intelligence | Comments Off on Artificial intelligence taking over DevOps functions, survey confirms – ZDNet

Why new EU rules around artificial intelligence are vital to the development of the sector – ComputerWeekly.com

Posted: at 4:23 am

European Union (EU) lawmakers have introduced new rules that will shape how companies use artificial intelligence (AI). The rules are the first of their kind to introduce regulation to the sector, and the EU's approach is unique in the world.

In the US, tech firms are largely left to themselves, while in China, AI innovation is often government-led and used regularly to monitor citizens without too much hindrance from regulators. The EU bloc, however, is taking an approach that aims to maximise the potential of AI while maintaining privacy laws.

There are new regulations around cases that are perceived as endangering people's safety or fundamental rights, such as AI-enabled behaviour manipulation techniques. There are also prohibitions on how law enforcement can use biometric surveillance in public places (with broad exemptions). Some high-risk cases also face specific regulatory requirements, before and after entering the market.

Transparency requirements have also been introduced for certain AI use cases, such as chatbots and deep fakes, where EU lawmakers believe risk can be mitigated if users are made aware that they are interacting with something that is not human.

Companies that do not comply with these new rules face fines of up to 6% of their annual revenue, higher penalties than those that can be levied under the General Data Protection Regulation (GDPR).

Like many other firms in the AI sector, we are in favour of this type of legislation. For far too long, there have been too many cases where biased datasets have been used by companies to develop AI that discriminates against the society it is meant to serve. A good example was when Goldman and Apple partnered to launch a new credit card. The historical datasets used to run the automated approval process for the cards were biased and favoured male applicants over women, shutting out millions of prospective users.

These negative outcomes are a wake-up call for companies, and proof that they must seriously consider algorithm interpretability and testing. New, robust legislation puts a renewed sense of responsibility on those developing and implementing AI to be transparent and call out biases in datasets. Without legislation, companies have no incentive to put in the extra resources required to overcome such biases.

We believe that legislation can enforce ethics and help to reduce the disturbing amount of bias in AI, especially in the world of work. Some AI recruitment tools have been found to discriminate against women because they lean towards favouring employees similar to their existing workforce, who are men.

And it does not stop at recruitment. As ProPublica unveiled a few years ago, a criminal justice algorithm deployed in Broward County, Florida, falsely labelled African-American defendants as high risk at nearly twice the rate that it mislabelled defendants who were white.

Beyond the problematic issues of bias against women and minorities, there is also the need to develop collectively agreed legal frameworks around explainable AI. This describes humans being able to understand and articulate how an AI system made a decision and track outcomes back to the origin of the decision. Explainable AI is crucial in all industries, but particularly in healthcare, manufacturing and insurance.

An app might get it wrong when recommending a movie or song without many consequences. But when it comes to more serious applications, such as a suggested dental treatment or a rejected application for an insurance claim, it is crucial to have an objective system for developing more understanding around explainable AI. If there are no rules around tracing how an AI system came to a decision, it is difficult to pinpoint where accountability lies as usage becomes more ubiquitous.
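To make the idea concrete, here is a minimal sketch, in Python, of one form of explainability: a linear scoring model whose decision is decomposed into per-feature contributions so a reviewer can trace the outcome back to its inputs. The weights, feature names and insurance-claim framing are purely illustrative, not any regulator's or vendor's actual method.

```python
# Decompose a linear risk score into per-feature contributions so a
# decision can be traced back to its inputs. All values are illustrative.

def explain_decision(weights, features, threshold):
    """Return the decision plus each feature's contribution to the score."""
    contributions = {
        name: weights[name] * value for name, value in features.items()
    }
    score = sum(contributions.values())
    decision = "approve" if score >= threshold else "reject"
    # Sort so a reviewer sees the most influential factors first.
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return decision, score, ranked

# Hypothetical insurance-claim example.
weights = {"claim_history": -0.8, "policy_age_years": 0.3, "documentation": 0.5}
features = {"claim_history": 2.0, "policy_age_years": 4.0, "documentation": 1.0}
decision, score, ranked = explain_decision(weights, features, threshold=0.0)
print(decision, round(score, 2), ranked[0][0])
```

Real systems use far richer models and attribution methods, but the principle is the same: every automated decision ships with a ranked account of what drove it.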

The public is arguably growing more suspicious of an increasingly widespread application of biometric analysis and facial recognition tools without comprehensive legislation to regulate or define appropriate use. One example of a coordinated attempt to corral brewing collective discontent is Reclaim Your Face, a European initiative to ban biometric mass surveillance because of claims that it can lead to unnecessary or disproportionate interference with people's fundamental rights.

When it comes to tackling these issues, legislation around enforcing ethics is one step. Another important step is increasing the diversity of the talent pool in AI so that a broader range of perspectives is factored into the sector's development. The World Economic Forum has shown that about 78% of global professionals with AI skills are male, a gender gap triple the size of that in other industries.

Fortunately, progress is being made on this front.

A welcome number of companies are coming up with their own initiatives to counteract biases in their AI systems, especially around the area of career recruitment and using machine learning to automate CV approval processes.

In the past, traditional AI applications would be trained to screen résumés, and if there were any biases in the datasets, the model would learn them and discriminate against candidates. It could be something as simple as a female-sounding name on a CV being picked up by the screening system. The system would then not permit the hiring of that candidate as an engineer because of some implicit human bias against that name, and the model would therefore discard the CV.

However, there are standard ways to prevent these biased outcomes if the data scientist is proactive about it during the training phase. For example, penalising the model more heavily when its prediction is wrong for female candidates, or simply removing data such as names, ages and dates, which shouldn't influence hiring decisions. Although these countermeasures may come at the cost of making the AI less accurate on paper, when the system is deployed somewhere that is serious about reducing bias, it will help move the needle in the right direction.
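As a rough illustration of those two countermeasures (with hypothetical field names and numbers, not any specific vendor's pipeline), the sketch below strips sensitive attributes before training and up-weights the loss on mistakes affecting an underrepresented group:

```python
# Sketch of the two countermeasures described above: (1) drop attributes
# that shouldn't influence hiring, and (2) weight the training loss more
# heavily when the model is wrong about candidates from a protected group.

SENSITIVE_FIELDS = {"name", "age", "date_of_birth", "gender"}

def strip_sensitive(candidate):
    """Remove fields that should not influence the hiring decision."""
    return {k: v for k, v in candidate.items() if k not in SENSITIVE_FIELDS}

def weighted_loss(predictions, labels, protected_flags, penalty=2.0):
    """Squared-error loss, with mistakes on the protected group up-weighted."""
    total = 0.0
    for pred, label, protected in zip(predictions, labels, protected_flags):
        err = (pred - label) ** 2
        total += penalty * err if protected else err
    return total / len(labels)

candidate = {"name": "A. Example", "age": 29, "skills": ["python"], "years_exp": 5}
cleaned = strip_sensitive(candidate)
print(sorted(cleaned))  # ['skills', 'years_exp']

# Same raw errors, but the mistake on the protected-group candidate
# (the first entry) costs twice as much during training.
loss = weighted_loss([0.2, 0.9], [1.0, 1.0], [True, False])
print(loss)
```

The up-weighting trades a little headline accuracy for lower error on the group the historical data underserved, which is exactly the trade-off the paragraph above describes.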

The fact that more innovators and businesses are becoming conscious of bias and using different methods, such as the abovementioned example, to overcome discrimination is a sign that we are moving in a more optimistic direction.

AI is going to play a much bigger part in our lives in the near future, but we need to do more to make sure that the outcomes are beneficial to the society we aim to serve. That can only happen if we continue to develop better use cases and prioritise diversity when creating AI systems.

This, along with meaningful regulations such as that imposed by the EU, will help to mitigate conscious and unconscious biases and deliver a better overall picture of the real-world issues we are trying to address.

Shawn Tan is CEO of global AI ecosystem builder Skymind

The rest is here:

Why new EU rules around artificial intelligence are vital to the development of the sector - ComputerWeekly.com


Artificial Intelligence Selects Activision Blizzard As A Thematic Stock Highlight This Week – Forbes

Posted: at 4:23 am


Tech stocks declined today following last week's sell-off, partly fueled by Elon Musk doing what he does best on Twitter, with the Nasdaq composite hitting losses of 1% in the early afternoon. Though prices may be dropping, we here at Q.ai can't think of a better time to go all-in on this week's thematic screen: the tech Top Buys for the month of May.

Forbes AI Investor

Q.ai runs factor models daily to get the most up-to-date reading on stocks and ETFs. Our deep-learning algorithms use Artificial Intelligence (AI) technology to provide an in-depth, intelligence-based look at a company so you don't have to do the digging yourself.

Sign up for the free Forbes AI Investor newsletter here to join an exclusive AI investing community and get premium investing ideas before markets open.

Activision Blizzard (ATVI), the famous videogame company responsible for the popular Call of Duty franchise, bumped down 0.07% on Friday to $93.35 with 3.6 million trades on the books, though it remains up over 4.3% YTD.

The company peaked in Q1 following strong performance in their CoD franchise, spurred on by the introduction of mobile play that brought their active monthly user base to 150 million. Success from their King Digital and Blizzard Entertainment divisions saw games like Candy Crush and World of Warcraft prop up their performance and stock prices, although the stock has trended down from its most recent high.

Activision 5-year performance

Activision's fiscal 2020 was a busy one: the company saw revenue increase by 6% to $8 billion, with operating income popping 6.7% to $2.8 billion, almost a $1 billion increase over the last three years. EPS rose almost 5% to $2.82, though ROE fell from 17.7% to 15.7% in the three-year period. Currently, Activision is trading with a forward earnings ratio of 25x.

Due to its excellent performance and potential for expansion with such popular franchises as Call of Duty, our AI thinks fondly of Activision, rating the company As in Technicals and Low Volatility Momentum and Cs in Growth and Quality Value, bringing the videogame developer to Top Buy status for the month of May.

Qualcomm closed up 2.4% on Friday to $130.15 per share, banking 9.3 million trades by EOD. The semiconductor, software, and wireless technology services company is down almost 12% for the year.

Qualcomm 5-year performance

Qualcomm's woes this month are not entirely their own: the company is trending downward due to repeat rumblings that Apple (AAPL) is looking to release their own 5G modem by 2023, despite the iPhone maker's licensing agreement with Qualcomm that extends through April of 2025. While this shouldn't be a surprise to investors (given that Apple bought their modem business from Intel back in 2019), the market appears wary that this future problem could pose risks to Qualcomm's bottom line.

But investors needn't worry. Qualcomm had a banner 2020 heading into the new fiscal year, with revenue up 25% to $23.5 billion and operating income exploding 43.2% to $6.23 billion. Their EPS also grew by 54.5% to $4.52 in per-share earnings, with ROE tripling to 94% from 31% three years ago.

Qualcomm is expected to experience modest revenue growth of 4.8% in the next twelve months, and is currently trading with a forward 12-month P/E of 16.15. Our AI believes that this is a company worth your investment dollars: Qualcomm, Inc. (QCOM) is rated Top Buy for the month of May, with As in Quality Value and Growth and Cs for Technicals and Low Volatility Momentum.

Cirrus Logic, Inc. (CRUS) closed up 1.6% on Friday, closing out the week at $74.55 per share and 354,000 trades. Though Friday ended the week on a good note, Cirrus Logic is still down 11.4% YTD, due in part to the semiconductor manufacturer's fourth-quarter earnings that fell short of Wall Street estimates two weeks ago. The stock has been declining since on news that Q4 sales dropped 5%, with Q1 revenue guidance set 8% below Wall Street projections.

Cirrus Logic 5-year performance

But the company has a strong future with new opportunities upcoming, including a new integrated circuit that is likely to be a cornerstone of the flagship Apple phones in fall 2021. Cirrus Logic is also expecting a boost from their collaboration with Elliptic Labs, a global AI software company using Cirrus to certify and optimize their AI Virtual Smart Sensor Platform for smartphone, PC, and IoT customers.

Furthermore, Cirrus Logic experienced a solid 2020 despite the pandemic, with revenue up 5.8% to $1.28 billion. Their operating income grew by 25% in the same period to $195 million, though this is down from their $262 million operating income three years ago. All told, Cirrus Logic experienced per-share earnings growth of 27.5% to $2.64, with ROE ticking down to 13.46%.

Cirrus Logic, Inc. has been bolstered by their strong performance, climbing demand, and brand partnerships. With a forward 12-month P/E of 16.11, our AI sees good things in store for them, rating the semiconductor company Top Buy with As in Growth and Low Volatility Momentum and Bs in Technicals and Quality Value.

Keysight Technologies (KEYS) is one of those companies that doesn't crop up in common parlance often, but they perform a critical role: manufacturing the electronics test and measurement equipment and software that clears your favorite tech to be delivered to your doorstep. The company is slated to report second quarter fiscal 2021 results on 19 May, and in anticipation of their expected favorable performance, we thought we'd take a look at what their last year has brought to the table.

Keysight Technologies 5-year performance

Keysight Technologies closed up 1.3% on Friday to $139.85, bringing YTD gains to 6.6%. Their last year saw revenue growth of 2% to $4.2 billion, with operating income resting at $777 million, a total rise of 4.6% over the fiscal twelve months.

Though EPS experienced slower growth in the last year, not even scratching 2%, the last three years have seen per-share earnings quadruple from $0.86 to $3.31, with ROE tripling to 19.9%. Keysight Technologies is currently trading with a forward 12-month P/E of 24.23.

Keysight Technologies' robust 2020 performance benefited from continued momentum in smartphone processors, high-performance data center growth, and high-speed networking needs. These, combined with acceleration in chip design activity, are expected to positively contribute to their performance. And as remote work and online learning proliferate (with IBM planning to downscale its 50 million square foot building to a mere 150,608 in anticipation of permanent changes), Keysight has also bolstered their software testing capabilities.

Once again, our AI sees solid growth and possible chances for investor gains in Keysight Technologies, rating the company Top Buy for the month of May with Bs in Technicals and Growth and Cs in Low Volatility Momentum and Quality Value.

KLA Corporation (KLAC) closed up 3.33% on Friday, ending the week at $305.75 per share with 1.5 million trades on the books. The capital equipment company is up 18% YTD, despite recent declines in stock prices after reporting solid fiscal 2021 Q3 earnings two weeks ago, with total earnings up $1.8 billion.

Said President and CEO Rick Wallace at the time: "KLA's March quarter results demonstrate strong momentum. We have seen a sharp increase in business levels in each of our major end markets, driven by secular demand trends across a broad range of semiconductor markets and applications."

KLA Corporation 5-year performance

KLA Corp's positive 2021 Q3 earnings report followed a robust fiscal 2020, which saw the company increase revenue by 11% to $5.8 billion, with operating income growth topping out at 26% to almost $1.76 billion. Expansions in their bottom line also extended to per-share earnings, which ballooned 54.7% in the last fiscal year alone to $7.70. And though their ROE fell in the last three years, it remains around a hefty 45%.

KLA Corp is expected to continue this upward trend in future earnings reports, with forward revenue growth of 12.6% predicted over the next year. The company is currently trading with a forward 12-month P/E of 18.35.

With a year of growth behind them and a bright future going forward, our AI rates KLA Corp a Top Buy for May, with an A in Low Volatility Momentum, Bs in Growth and Quality Value, and a D in Technicals.

Liked what you read? Sign up for our free Forbes AI Investor Newsletter here to get AI-driven investing ideas weekly. For a limited time, subscribers can join an exclusive Slack group to get these ideas before markets open.

Original post:

Artificial Intelligence Selects Activision Blizzard As A Thematic Stock Highlight This Week - Forbes


Artificial Intelligence and Machine Learning Drive the Future of Supply Chain Logistics – Supply and Demand Chain Executive

Posted: at 4:23 am

Artificial intelligence (AI) is more accessible than ever and is increasingly used to improve business operations and outcomes, not only in transportation and logistics management, but also in diverse fields like finance, healthcare, retail and others. An Oxford Economics and NTT DATA survey of 1,000 business leaders conducted in early 2020 reveals that 96% of companies were at least researching AI solutions, and over 70% had either fully implemented or at least piloted the technology.

Nearly half of survey respondents said failure to implement AI would cause them to lose customers, with 44% reporting their companys bottom line would suffer without it.

Simply put, AI enables companies to parse vast quantities of business data to make well-informed, critical business decisions fast. And the transportation management industry specifically is using this intelligence and its companion technology, machine learning (ML), to gain greater process efficiency and performance visibility, driving impactful changes that bolster the bottom line.

McKinsey research reveals that 61% of executives report decreased costs and 53% report increased revenues as a direct result of introducing AI into their supply chains. For supply chains, lower inventory-carrying costs, inventory reductions and lower transportation and labor costs are some of the biggest areas for savings captured by high-volume shippers. Further, AI boosts supply chain management revenue in sales, forecasting, spend analytics and logistics network optimization.

For the trucking industry and other freight carriers, AI is being effectively applied to transportation management practices to help reduce the amount of unprofitable "empty miles", or deadhead trips, that a carrier makes returning to domicile with an empty trailer after delivering a load. AI also identifies other hidden patterns in historical transportation data to determine the optimal mode selection for freight, most efficient labor resource planning, truck loading and stop sequences, rate rationalization and other process improvements, applying historical usage data to derive better planning and execution outcomes.
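A heavily simplified sketch of the deadhead-reduction idea: among the available backhaul loads, pick the one that minimises miles driven empty on the way home. Real planners optimise over networks, time windows and rates; here locations are just 1-D mile markers and every number is invented:

```python
# Greedy backhaul selection: choose the load whose pickup and drop-off
# add the fewest empty (unloaded) miles to the trip home. Mile markers
# and loads are hypothetical.

def pick_backhaul(truck_position, home, loads):
    """Choose the (pickup, dropoff) load that minimises empty miles."""
    def empty_miles(load):
        pickup, dropoff = load
        # Miles driven empty: to the pickup, plus from drop-off to home.
        return abs(truck_position - pickup) + abs(dropoff - home)
    return min(loads, key=empty_miles)

loads = [(10, 90), (40, 60), (80, 100)]  # (pickup, drop-off) mile markers
best = pick_backhaul(truck_position=50, home=100, loads=loads)
print(best)  # (80, 100): 30 empty miles to pickup, none after drop-off
```

Production systems replace this greedy choice with network-wide optimisation, but the objective, minimising unpaid empty mileage, is the same one the paragraph above describes.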

The ML portion of this emerging technology helps organizations optimize routing and even plan for weather-driven disruptions. Through pattern recognition, for instance, ML helps transportation management professionals understand how weather patterns affected the time it took to carry loads in the past, then considers current data sets to make predictive recommendations.
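As a toy version of that weather-driven prediction (the data points and the choice of snowfall as the weather variable are invented for illustration), one can fit a least-squares line to historical delays and use it to predict the delay implied by a forecast:

```python
# Fit a simple least-squares line relating a weather variable (snowfall,
# in cm) to historical transit delay (hours), then predict the delay for
# a forecast value. All data here is hypothetical.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Historical snowfall (cm) vs. observed delay (hours) on one lane.
snow = [0.0, 2.0, 5.0, 10.0]
delay = [0.5, 1.0, 2.0, 4.0]
a, b = fit_line(snow, delay)

forecast_snow = 8.0
predicted_delay = a * forecast_snow + b
print(round(predicted_delay, 2))
```

Real systems learn from many interacting features rather than a single line, but the workflow is the one described above: learn the historical pattern, then apply it to current conditions to make a predictive recommendation.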

The Coronavirus disease (COVID-19) put a tremendous amount of pressure on many industries, the transportation industry included, but it also presented a silver lining: the opportunity for change. Since organizations are increasingly pressed to work smarter to fulfill customers' expectations and needs, there is increased appetite to retire inefficient legacy tools and invest in new processes and tech tools to work more efficiently.

Applying AI and ML to pandemic-posed challenges can be the critical difference between accelerating or slowing growth for transportation management professionals. When applied correctly, these technologies improve logistics visibility, offer data-driven planning insights and help successfully increase process automation.

Like many emerging technologies promising transformation, AI and ML have, in many cases, been misrepresented or worse, overhyped as panaceas for vexing industry challenges. Transportation logistics organizations should be prudent and perform due diligence when considering when and how to introduce AI and ML to their operations. Panicked hiring of data scientists to implement expensive, complicated tools and overengineered processes can be a costly boondoggle and can sour the perception of the viability of these truly powerful and useful tech tools. Instead, organizations should invest time in learning more about the technology and how it is already driving value for successful adopters in the transportation logistics industry. What are some steps a logistics operation should take as they embark on an AI/ML initiative?

Remember that the quality of your data will drive how fast or slow your AI journey will go. The lifeblood of an effective AI program (or any big data project) is proper data hygiene and management. Unfortunately, compiling, organizing and accessing this data is a major barrier for many. According to a survey conducted by O'Reilly, 70% of respondents report that poorly labeled data and unlabeled data are a significant challenge. Other common data quality issues respondents cited include poor data quality from third-party sources (~42%), disorganized data stores and lack of metadata (~50%) and unstructured, difficult-to-organize data (~44%).
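A first practical step toward that data hygiene can be as simple as profiling label quality before any model training. The sketch below (with illustrative field names and values, not any particular survey's tooling) counts missing and invalid labels in a set of records:

```python
# Quick label-quality report: count records whose label is missing or
# outside the allowed vocabulary, before any training begins.

def label_quality_report(rows, label_field="label", allowed=None):
    """Summarise missing and invalid labels in a list of record dicts."""
    missing = [r for r in rows if not r.get(label_field)]
    invalid = [
        r for r in rows
        if r.get(label_field) and allowed is not None
        and r[label_field] not in allowed
    ]
    return {"total": len(rows), "missing": len(missing), "invalid": len(invalid)}

# Hypothetical shipment records: one empty label, one misspelled label.
rows = [
    {"label": "on_time"},
    {"label": ""},
    {"label": "delayd"},
    {"label": "late"},
]
report = label_quality_report(rows, allowed={"on_time", "late"})
print(report)  # {'total': 4, 'missing': 1, 'invalid': 1}
```

Surfacing these counts early tells a team how much labeling work stands between them and a trustworthy model, which is exactly the barrier the survey respondents describe.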

Historically slow to adopt technology, the transportation industry has recently begun realizing the imperative and making up ground, with 60% of respondents to an MHI and Deloitte poll expecting to embrace AI in the next five years. Gartner predicts that by the end of 2024, 75% of organizations will move from piloting to operationalizing AI, driving a fivefold increase in streaming data and analytics infrastructures.

For many transportation management companies, accessing, cleansing and integrating the right data to maximize AI will be the first step. AI requires large volumes of detailed data and varied data sources to effectively identify models and develop learned behavior.

Before jumping on the AI bandwagon too quickly, companies should assess the quality of their data and current tech stacks to determine what intelligence capabilities are already embedded.

And, when it comes to investing in newer technologies to pave the path toward digital transformation, choose AI-driven solutions that do not require you to become a data scientist.

If you're unsure how to start, consider partnering with a transportation management system (TMS) provider with a record of experience and expertise in applying AI to transportation logistics operations.

Here is the original post:

Artificial Intelligence and Machine Learning Drive the Future of Supply Chain Logistics - Supply and Demand Chain Executive


Being smart with artificial intelligence: deploying AI in the public sector – Global Government Forum

Posted: at 4:23 am

Assisting, not replacing: AI can amplify the work of officials but the webinar panellists agreed that it should not displace them. Image by Pixabay

Nothing better illustrates the need for governments to harness artificial intelligence (AI) than the challenge faced by Germany's Ministry of Labour and Social Affairs. In the next 15 years, 30% of its staff will retire and become users of the agency's pension system instead. This twin dynamic of staff retrenchment and greater demand for services sums up why governments across the world are so keen to embrace technology: how else to deliver more with fewer resources?

"The demand that is really driving the diffusion of AI is demographic change," Michael Schönstein, head of strategic foresight and analysis at the ministry's Policy Lab Digital, told a Global Government Forum webinar in April.

Rather than following a predetermined set of rules, AI algorithms learn how to achieve their mission by identifying connections in large data sets. At its best, AI can revolutionise service delivery: scanning a patent application to make an instantaneous adjudication without the need for human intervention, for example.

But the risks are also substantial. It can be hard to ensure that services are fully accountable and transparent if highly complex, fast-evolving algorithms lie at their heart. Development can be costly, requiring the assembly and standardisation of huge quantities of data, and any biases in that data can be reproduced in faulty decision-making. As a result, many senior civil servants are understandably nervous about adoption.

In Germany, Schönstein has been able to launch some services that are completely AI-driven: a bot that uses text, image and speech recognition to help businesses register new employees for the country's social security system, for example. But there are limits to how fast he can move, he commented.

In many cases, German legislation requires a signature to be applied before a government decision can be taken, limiting where AI can be applied. Works councils and trades unions must approve any new services in the areas of pensions and social security, complicating sign off of proposed deployments. And even for processes where AI has been applied, a human confirmation step may still be required.

As a result, 80-90% of AI deployments by the Ministry of Labour involve the improvement of individual steps within existing administrative processes, Schönstein said. The largest single field of experimentation has involved the existing child benefit application process, rather than any greenfield services.

This approach made sense to the other panellists, who argued that governments should move forward cautiously, delivering practical, measurable improvements to existing services, often assisting human decision-making rather than replacing it entirely.

"Very often, when we see a [new] technology stack such as AI, it's very alluring to just start playing with it and make predictions and all kinds of awesome recommendations," said Dr Vik Pant, chief scientist and chief science advisor at Natural Resources Canada. "But really the question [should be]: how does this map to the priorities and plans of our department?"

Pant leads an accelerator within Natural Resources Canada. These institutions, borrowed from the world of startups and venture capital, involve interdisciplinary teams coming together to push forward a promising idea as fast as possible.

That might sound like the antithesis of gradual, pragmatic change. But Pant is keen to point out that the accelerator team works closely with the operational and frontline civil servants whose problems they're trying to solve, co-creating solutions alongside subject matter experts. This ensures both that the AI tool will closely match the requirement, and that service managers and elected leaders are familiar with its operation and characteristics.

"Ensuring full line of sight for our departmental decision-makers into the rationales, the way we make choices, the way we make decisions, how we evaluate projects, [means] we can get buy-in from all of the leadership within our department," Pant told the online audience.

Nadun Muthukumarana, a data analytics partner at the event's knowledge partner, consultancy Deloitte, was equally adamant that technologists must collaborate closely with policy and service managers rather than building AI products in splendid isolation. "A lot of applications of AI fail [because] somebody is doing it as a proof of concept or an experiment, but when you try to scale it up to delivering services to millions of citizens it doesn't work," he said.

He advises his public sector clients to only consider using AI where enterprise tooling exists, ensuring that algorithms can be embedded in software able to process the extremely high volumes of transactions required in many government services. And Muthukumarana also emphasised the need for civil servants to cooperate across organisational boundaries, assembling data sets of sufficient scale and quality to support the effective use of AI.

In the United States, elected representatives have the resources and structures to hold the executive arm of government to account, and this has led to heightened scrutiny of the use of AI. Taka Ariga, chief data scientist and director of the Innovation Lab at the US Government Accountability Office (GAO), explained that he's leading investigations into a number of cases where AI has already been deployed by American agencies.

He compared the current capabilities of AI unfavourably to those of a small child: Google Translate can help him navigate museums and restaurants in Paris, he noted, but remains unaware of the context of conversations, meaning that its translations lack nuance. These constraints, he argued, should limit both AI's deployment and the weight put on its results. In many conversations with government digital professionals, he recalled, he's been assured that their models have been designed to squeeze out bias. But in his view, it's still not clear how you operationalise "do no evil", creating systems that can both guarantee and demonstrate equity in outcomes. "From a GAO perspective, we are in the business of verification," said Ariga. "We would love to trust those AI implementations, but as an oversight entity, we want to see evidence."

Generating this evidence is becoming easier as the technology advances, commented Pant. People always talk in terms of "black boxes", he said: algorithms whose operation has evolved to the point where decision-making becomes opaque to the system's managers. "But just as much as there have been these advances in algorithms and models, there have been commensurate improvements also in explainability; in interpretability." So even as AI systems become more complex, their developers are finding new ways to maintain oversight of how individual decisions are being made.

Pant's staff already produce documents explaining AI systems to senior civil servants and decision-makers, he added, and these could also be distributed to regulators.

Regulation looks set to dominate the debate around AI for some time, and there is some concern that premature and overly prescriptive rules could dampen innovation. When Schönstein recruited a specialist technology team and came up with proposals for several areas where AI implementations could be explored, he recalled, that "immediately made our political leadership a little bit nervous".

The German government had already signed up to guidelines from global organisations governing the ethical use of AI, and his superiors were worried the proposed experiments might run counter to those. So Schönstein's team ended up working on two projects simultaneously: developing guidelines on how to build systems that comply with those standards, even as they built new AI systems for use within government. "We'd have liked to do these sequentially, in an ideal world," he said.

Schönstein suggested that in many cases it will be sufficient to apply existing laws to AI, rather than passing new legislation focusing on the emerging technology. "You're not meant to discriminate when you hire people. That's already the law, [so] what we need is to make sure the current law is applied when you use an AI-enabled recruitment system," he said.

But regulators worry that technological changes will render existing governance frameworks obsolete, said Ariga. "Too often, oversight entities are playing catch-up," he commented. "There's plenty of examples out there where we wait until a certain maturity of technology before we dive into the accountability implications."

One way forward, said Muthukumarana, may be to vary regulatory scrutiny and compliance requirements between industrial sectors, tailoring oversight to the risk involved. Self-driving cars, which make thousands of life-or-death decisions every day, might need greater scrutiny than an AI deployed to produce transcripts of conferences, he commented.

"I think different industries will have domain-specific frameworks," said Muthukumarana. "There might be some universal truths that can be incorporated. But a lot of these things will be developed on a domain-by-domain basis."

See original here:

Being smart with artificial intelligence: deploying AI in the public sector - Global Government Forum


Airport of the Future: Houston’s Hobby Airport Uses Artificial Intelligence to Provide a Next Gen Travel Experience – Business Wire


SUNNYVALE, Calif.--(BUSINESS WIRE)--With the help of next-generation technology, passengers at William P. Hobby Airport (HOU) will get an upgrade to their travel experience, including live journey and wait times for passengers and social-distance monitoring.

"We want our passengers to feel empowered by the technology we implement throughout their travel experience," Houston Airports Director of IT Program Management Diego Parra said. "This technology not only helps passengers know what to expect at certain points in their journey, but it also provides us with valuable information to keep them safe. When we see a congested area that needs greater social distancing, we can respond."

The technology is powered by advanced artificial intelligence and machine learning through LiveReach Media (LRM), a comprehensive motion analytics and digital out-of-home marketing platform. The motion analytics system integrates into Houston Airports' existing technology infrastructure to measure passenger throughput and provide accurate live wait times at TSA and immigration checkpoints. Existing or new monitors can also display live journey times to restaurants and retail locations inside the airport.

"Airports, especially award-winning ones such as Houston, clearly see the need of becoming more data-driven, but widespread adoption of motion analytics has been limited due to the lack of scalability or the large upfront capital investment associated with traditional solutions. We've democratized analytics for all airports, large and small, with our single-click integration or our easy-to-deploy sensing options," said Abhi Jain, Co-Founder of LiveReach Media.

Airports across the United States, including Des Moines International and El Paso International, have selected LiveReach Media's system due to its premium performance and simple deployment. Most recently, in April 2021, LiveReach Media was selected by Philadelphia International to provide queue management and content management services via a public bidding process.

"This is just the start," Parra said. "Houston Airports is looking to expand LiveReach Media's comprehensive analytics and advanced artificial intelligence to become the airport system of the future."

About Houston Airports

Houston Airports is the City of Houston's Department of Aviation. Comprised of George Bush Intercontinental Airport (IAH), William P. Hobby Airport (HOU) and Ellington Airport (EFD) / Houston Spaceport, Houston Airports served nearly 25 million passengers in 2020 and nearly 60 million passengers in 2019. Houston Airports forms one of North America's largest public airport systems and positions Houston as the international passenger and cargo gateway to the South-Central United States and as a primary gateway to Latin America. Houston is proud to be the only city in the Western Hemisphere with two Skytrax-rated 4-star airports.

About LiveReach Media

Trusted by large grocery chains, transportation hubs, and retail stores around the world, LiveReach Media's comprehensive motion analytics and digital out-of-home marketing platform helps venues operate more effectively, engage with their customers at scale, and create better and safer customer experiences. LiveReach Media is headquartered in Sunnyvale, California with several offices globally.

Original post:

Airport of the Future: Houston's Hobby Airport Uses Artificial Intelligence to Provide a Next Gen Travel Experience - Business Wire


New Draft Rules on the Use of Artificial Intelligence – Lexology


On 21 April 2021, the European Commission published draft regulations (AI Regulations) governing the use of artificial intelligence (AI). The European Parliament and the member states have not yet adopted these proposed AI Regulations.

The proposed AI Regulations:

In more detail

The European Commission's proposed AI Regulations are the first attempt the world has seen at creating a uniform legal framework governing the use, development and marketing of AI. They will likely have a resounding impact on all businesses that use AI for years to come.

Scope

The AI Regulations will apply to the following:

Timing

The AI Regulations will become effective 20 days after publication in the Official Journal. They will then need to be implemented within 24 months, with some provisions going into effect sooner. The long implementation period increases the risk that some provisions will become irrelevant or moot because of technological developments.

Risk-based approach

In its 2020 white paper, the European Commission proposed splitting the AI ecosystem into two general categories: high risk or low risk. The European Commission's new graded system is more nuanced and likely to ensure a more targeted approach, since the level of compliance requirements matches the risk level of a specific use case.

The new AI Regulations follow a risk-based approach and differentiate between the following: (i) prohibited AI systems whose use is considered unacceptable and that contravene Union values (e.g., by violating fundamental rights); (ii) uses of AI that create a high risk; (iii) uses that create a limited risk (e.g., where there is a risk of manipulation, for instance via the use of chatbots); and (iv) uses of AI that create minimal risk.

Under the requirements of the new AI Regulations, the greater the potential of algorithmic systems to cause harm, the more far-reaching the intervention. Limited risk uses of AI face minimal transparency requirements and minimal risk uses can be developed and used without additional legal obligations. However, makers of "limited" or "minimal" risk AI systems will be encouraged to adopt non-legally binding codes of conduct. The "high risk" uses will be subject to specific regulatory requirements before and after launching into the market (e.g., ensuring the quality of data sets used to train AI systems, applying a level of human oversight, creating records to enable compliance checks and providing relevant information to users). Some obligations may also apply to distributors, importers, users or any other third parties, thus affecting the entire AI supply chain.

Enforcement

Member states will be responsible for enforcing these regulations. Penalties for noncompliance are up to 6% of global annual turnover or EUR 30 million, whichever is greater.

Criticism of exemptions for law enforcement

Overly broad exemptions for law enforcement's use of remote biometric surveillance have been the target of criticism. There are also concerns that the AI Regulations do not go far enough to address the risk of possible discrimination by AI systems.

Although the AI Regulations detail prohibited AI practices, some find there are too many problematic exceptions and caveats. For example, the AI Regulations create an exception for narrowly defined law enforcement purposes such as searching for a missing child or a wanted individual or preventing a terror attack. In response, some EU lawmakers and digital rights groups want the carve-out removed due to fears authorities may use it to justify the widespread future use of the technology, which can be intrusive and inaccurate.

Support

The AI Regulations include measures supporting innovation such as setting up regulatory sandboxes. These facilitate the development, testing and validation of innovative AI systems for a limited time before their placement on the market, under the direct supervision and guidance of the competent authorities to ensure compliance.

Database

According to the AI Regulations, the European Commission will be responsible for setting up and maintaining a database for high-risk AI practices (Article 60). The database will contain data on all stand-alone AI systems considered high-risk. To ensure transparency, all information processed in the database will be accessible to the public.

It remains to be seen whether the European Commission will extend this database to low-risk practices to increase transparency and enhance the possibility of supervision for practices that are not high-risk initially but may become so at a later stage.

Employment-specific observations

1. High-risk

AI practices that involve employment, worker management and access to self-employment are considered high-risk. These high-risk systems specifically include the following AI systems:

2. Biases

According to the AI Regulations, the training, validation and testing of data sets must be subject to appropriate data governance and management practices, including in relation to possible biases. Providers of continuously learning high-risk AI systems must ensure that possibly biased outputs are equipped with proper mitigation measures if they will be used as input in future operations (feedback loops).

However, the AI Regulations are unclear on how AI systems will be tested for possible biases, specifically whether the benchmark will be equality in opportunity or equality in outcomes. Companies should consider how these systems might affect individuals with disabilities and individuals at the intersection of multiple social groups.

3. Processing of special categories of personal data to mitigate biases is permissible

The AI Regulations carve out an exception allowing AI providers to process special categories of personal data if it is strictly necessary to ensure bias monitoring, detection and correction. However, AI providers processing this personal data are still subject to appropriate safeguards for the fundamental rights and freedoms of natural persons (e.g., technical limitations on the reuse and use of state-of-the-art security and privacy-preserving measures such as pseudonymization or encryption). It remains to be seen whether individuals will sufficiently trust these systems to provide them with their sensitive personal data.

4. Human oversight

High-risk AI systems must be capable of human oversight. Individuals tasked with oversight must:

As indicated in our recent Trust Continuum report, this will require substantial involvement from the human decision-maker (in practice, often an individual from HR), which proves to be challenging for most companies.

Beyond the EU

We have already noted the potential for extra-territorial effect of the AI Regulations. But many AI systems will not be caught if they have no EU nexus. The EU is, as it often is, at the vanguard of governmental intervention into the protection of human rights: it is the first to lay down a clear marker of expectations around the use of AI. But the issue is under review in countries across the world. In the UK, the Information Commissioner has published Guidance on AI and data protection. In the US, a number of states have introduced draft legislation governing the use of AI. None have quite the grand-plan feel of the EU's AI Regulations, but there will certainly be more to follow.

Read the original:

New Draft Rules on the Use of Artificial Intelligence - Lexology


BBVA AI Factory, among the world’s best financial innovation labs, according to Global Finance – BBVA


The AI Factory is the global development center where BBVA builds its artificial intelligence capabilities. Its mission is to help create data products adapted to the needs of an increasingly digital population and to position BBVA as a leading player in the new world of banking.

Optimizing remote agents' work to improve customer service, or providing BBVA teams with key knowledge to detect fraud: these are just some of the areas that have benefited from the work with data that began at BBVA more than a decade ago and gave way to the creation of the Artificial Intelligence Factory in 2019. Today the team has 50 professionals from different disciplines: data scientists, engineers, software developers, data architects, and business translators, that is, professionals who serve as a bridge between analytical capabilities and business needs. In addition, the number of people working under the BBVA AI Factory project umbrella is close to 200, including professionals from across the BBVA Group.

BBVA AI Factory is one of the financial sector's largest global bets on seizing the opportunities of the data age for everyone. "This recognition is a boost to the company's strategic approach, which seeks to maximize the value we generate in the Group, aligning our objectives with BBVA's strategic priorities," says Francisco Maturana, BBVA AI Factory's CEO. "For this it is important to adopt a product-company mindset, creating reusable and multipurpose solutions and adapting our operating model to the bank's needs."

The AI Factory is also one of the teams involved in the improvements to BBVA's app functionalities, in order to achieve a much more personalized experience for clients based on artificial intelligence capabilities.

"The financial innovation labs on Global Finance's annual list are where tomorrow's solutions are being incubated," said Joseph Giarraputo, Global Finance's editorial director.

Global Finance's Innovator Awards list of the world's best financial innovation labs aims to reward the world's most innovative financial institutions, as well as the most innovative products and services.

Read the original:

BBVA AI Factory, among the world's best financial innovation labs, according to Global Finance - BBVA


New artificial intelligence regulations have important implications for the workplace – Workplace Insight


The European Commission recently announced its proposal for the regulation of artificial intelligence, looking to ban unacceptable uses of the technology. Up until now, the consequences for businesses of getting AI wrong were bad press, reputational damage, loss of trust and market share, and, most importantly for sensitive applications, harm to individuals. But with these new rules, two new consequences arise: outright prohibition of certain AI systems, and GDPR-like fines.

While for now this is only proposed for the EU, the definitions and principles set out may have wider-reaching implications, not only for how AI is perceived but also for how businesses should handle and work with AI. The new regulation sets four levels of risk: unacceptable, high, low and minimal, with HR AI systems sitting in the high-risk category.

The use of AI for hiring and firing has already stirred up some controversy, with Uber and Uber Eats among the latest companies to have made headlines for AI unfairly dismissing employees. It is precisely because of the far-reaching impact of some HR AI applications that the field has been categorised as high risk. After all, a key purpose of the proposal is to ensure that fundamental human rights are upheld.

Yet, despite the bumps in the road and the focus on the concerns, it needs to be remembered that AI is in fact the best means of helping remove discrimination and bias, if the AI is ethical. Continue to replicate the same traditional approaches and processes as found in existing data, and we'll definitely repeat the same discriminations, even unconsciously. Incorporate ethical and regulatory considerations into the development of AI systems, and I'm convinced we will make a great step forward. We need to remember that the challenges lie with how AI is developed and used, not the actual technology itself. This is precisely the issue the EU proposal is looking to address.

AI, let alone ethical AI, is still not fully understood, and there is an important piece of education that needs to be undertaken. From the data engineers and data scientists to those in HR using the technology, the purpose of the AI and how and why it is being used must be understood to ensure it is being used as intended. HR also needs a level of comprehension of the algorithm itself to identify whether those intentions are being followed.

Defining the very notion of what is ethical is not that simple, but regulations like the one proposed by the EU, codes of conduct, data charters and certifications will help us move towards generally shared concepts of what is and isn't acceptable, helping to create ethical frameworks for the application of AI and, ultimately, greater trust.

These are no minor challenges, but the HR field has a unique opportunity to lead the effort and prove that ethical AI is possible, for the greater good of organisations and individuals.

Original post:

New artificial intelligence regulations have important implications for the workplace - Workplace Insight


The future of artificial intelligence and its impact on the economy – Brookings Institution


Advances in artificial intelligence are likely to herald an unprecedented period of rapid innovation and technological change, which will fundamentally alter current industries and the economy. What is different from previous periods of technological progress is the speed at which these developments are happening and the extent to which they will shape markets around the world. How will they affect prosperity and inequality? How can AI be deployed for the greater good and to improve economic outcomes?

On Thursday, May 20, Sanjay Patnaik, director of the Center on Regulation and Markets (CRM) at Brookings, will sit down with Katya Klinova, the head of AI, labor, and the economy at the Partnership on AI, to explore these questions and many others. Klinova focuses on studying the mechanisms for steering AI progress towards greater equality of opportunity and improving working conditions along the AI supply chain. She previously worked at the UN Executive Office of the Secretary-General (SG) on preparing the launch of the SG's Strategy for New Technology, and at Google in a variety of managerial roles in the Chrome, Play, Developer Relations, and Search departments, where she was responsible for launching and driving the worldwide adoption of Google's early AI-enabled services.

Viewers can submit questions for the speakers by emailing events@brookings.edu or via Twitter using #AIGovernance.

This event is part of CRM's "Reimagining Modern-day Markets and Regulations" series, which focuses on analyzing rapidly changing modern-day markets and how to regulate them most effectively.

Link:

The future of artificial intelligence and its impact on the economy - Brookings Institution

