USC's Biggest Wins in Computing and AI – USC Viterbi School of Engineering

USC has been an animating force for computing research since the late 1960s.

With the advent of the USC Information Sciences Institute (ISI) in 1972 and the Department of Computer Science in 1976 (born out of the Ming Hsieh Department of Electrical and Computer Engineering), USC has played a propulsive role in everything from the internet to the Oculus Rift to recent Nobel Prizes.

Here are seven of those victories reimagined as cinemagraphs: still photographs animated by subtle yet remarkable movements.

Cinemagraph: Birth of .Com

1. The Birth of the .com (1983)

While working at ISI, Paul Mockapetris and Jon Postel pioneered the Domain Name System, which introduced the .com, .edu, .gov and .org internet naming standards.

As Wired noted on the 25th anniversary, "Without the Domain Name System, it's doubtful the internet could have grown and flourished as it has."

The DNS works like a phone book for the internet, automatically translating text names, which are easy for humans to understand and remember, to numerical addresses that computers need. For example, imagine trying to remember an IP address like 192.0.2.118 instead of simply usc.edu.
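
To make the phone-book analogy concrete, here is a minimal sketch using Python's standard library. The system resolver performs the same name-to-address translation the DNS provides; the address in the comment is only a placeholder, since the real answer for usc.edu will vary.

```python
import socket

# Ask the system resolver (and ultimately the DNS) to translate a
# human-friendly name into the numerical address computers route on.
name = "usc.edu"
address = socket.gethostbyname(name)  # an IPv4 string, e.g. "192.0.2.118"
print(f"{name} resolves to {address}")
```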

In a 2009 interview with NPR, Mockapetris said he believed the first domain name he ever created was isi.edu for his employer, the (USC) Information Sciences Institute. That domain name is still in use today.

Grace Park, B.S. and M.S. '22 in chemical engineering, re-creates Len Adleman's famous experiment.

2. The Invention of DNA Computing (1994)

In a drop of water, a computation took place.

In 1994, Professor Leonard Adleman, who coined the term "computer virus," invented DNA computing, which involves performing computations using biological molecules rather than traditional silicon chips.

Adleman, who received the 2002 Turing Award (often called the Nobel Prize of computer science), saw that a computer could be something other than a laptop or machine using electrical impulses. After visiting a USC biology lab in 1993, he recognized that the 0s and 1s of conventional computers could be replaced with the four DNA bases: A, C, G and T. As he later wrote, "a liquid computer can exist in which interacting molecules perform computations."

As the New York Times noted in 1997: "Currently the world's most powerful supercomputer sprawls across nearly 150 square meters at the U.S. government's Sandia National Laboratories in New Mexico. But a DNA computer has the potential to perform the same breakneck-speed computations in a single drop of water."

"We've shown by these computations that biological molecules can be used for distinctly non-biological purposes," Adleman said in 2002. "They are miraculous little machines. They store energy and information, they cut, paste and copy."
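
Adleman's 1994 experiment encoded a small directed-graph path problem in DNA strands and let chemistry explore candidate paths in massive parallel. As a rough, silicon-only sketch of the question his molecules answered (the graph below is invented for illustration), a conventional program has to check orderings one by one:

```python
from itertools import permutations

# Invented directed graph, given as a set of edges.
edges = {("start", "a"), ("a", "b"), ("b", "c"), ("c", "end"),
         ("a", "c"), ("b", "end")}
vertices = {v for edge in edges for v in edge}

def hamiltonian_path(vertices, edges):
    """Look for a path visiting every vertex exactly once, the problem
    Adleman's test tube explored in parallel, checked here one ordering
    at a time."""
    for order in permutations(vertices):
        if all((a, b) in edges for a, b in zip(order, order[1:])):
            return order
    return None

print(hamiltonian_path(vertices, edges))  # ('start', 'a', 'b', 'c', 'end')
```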

Professor Maja Matarić with Blossom, a cuddly robot companion to help people with anxiety and depression practice breathing exercises and mindfulness.

3. USC Interaction Lab Pioneers Socially Assistive Robotics (2005)

Named No. 5 by Business Insider as one of the 25 Most Powerful Women Engineers in Tech, Maja Matarić leads the USC Interaction Lab, pioneering the field of socially assistive robotics (SAR).

As defined by Matarić and her then-graduate researcher David Feil-Seifer 17 years ago, socially assistive robotics was envisioned as the intersection of assistive robotics and social robotics, a new field that focuses on providing social support for helping people overcome challenges in health, wellness, education and training.

Socially assistive robots have been developed for a broad range of user communities, including infants with movement delays, children with autism, stroke patients, people with dementia and Alzheimer's disease, and otherwise healthy elderly people.

"We want these robots to make the user happier, more capable and better able to help themselves," said Matarić, the Chan Soon-Shiong Chair and Distinguished Professor of Computer Science, Neuroscience and Pediatrics at USC. "We also want them to help teachers and therapists, not remove their purpose."

The field has inspired investments from federal funding agencies and technology startups. The assistive robotics market is estimated to reach $25.16 billion by 2028.

Is the ball red or blue? Is the cat alive or dead? Professor Daniel Lidar, one of the world's top quantum influencers, demonstrates the idea of superposition.

4. First Operational Quantum Computing System in Academia (2011)

Before Google or NASA got into the game, there was the USC-Lockheed Martin Quantum Computing Center (QCC).

Led by Daniel Lidar, holder of the Viterbi Professorship in Engineering, and ISI's Robert F. Lucas (now retired), the center launched in 2011. With the world's first commercial adiabatic quantum processor, the D-Wave One, USC is the only university in the world to host and operate a commercial quantum computing system.

As USC News noted in 2018, quantum computing is the ultimate disruptive technology: it has the potential to create the best possible investment portfolio, dissolve urban traffic jams and bring drugs to market faster. It can optimize batteries for electric cars, predictions for weather and models for climate change. Quantum computing can do this, and much more, because it can crunch massive data and variables and do it quickly, with an advantage over classical computers that grows as problems get bigger.

Recently, QCC upgraded to D-Wave's Advantage system, with more than 5,000 qubits, an order of magnitude larger than any other quantum computer. The upgrades will enable QCC to host a new Advantage generation of quantum annealers from D-Wave and will be the first Leap quantum cloud system in the United States. Today, in addition to Professor Lidar, one of the world's top quantum computing influencers, QCC is led by Research Assistant Professor Federico Spedalieri, as operations director, and Research Associate Professor Stephen Crago, associate director of ISI.
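
For a sense of what an annealer like the Advantage system is asked to do, the problems it accepts are typically phrased as QUBOs: minimize a quadratic function of binary variables. The sketch below brute-forces a tiny, made-up QUBO in plain Python (it does not use D-Wave's actual tooling); the annealer's appeal is that it searches the same kind of energy landscape at sizes where enumeration is hopeless.

```python
from itertools import product

# A tiny invented QUBO: minimize the sum over (i, j) of Q[i, j] * x_i * x_j,
# where each x_i is 0 or 1.
Q = {(0, 0): -1.0, (1, 1): -1.0, (2, 2): 2.0,
     (0, 1): 2.0, (1, 2): -3.0}

def energy(x):
    return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

# Enumerate all 2**3 assignments: trivial here, impossible at the
# thousands of variables an annealer is meant to handle.
best = min(product((0, 1), repeat=3), key=energy)
print(best, energy(best))  # (0, 1, 1) with energy -2.0
```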

David Traum, a leader at the USC Institute for Creative Technologies (ICT), converses with Pinchas Gutter, a Holocaust survivor, as part of the New Dimensions in Testimony project.

5. USC ICT Enables Talking with the Past in the Future (2015)

New Dimensions in Testimony, a collaboration between the USC Shoah Foundation and the USC Institute for Creative Technologies (ICT), in partnership with Conscience Display, is an initiative to record and display testimony in a way that will continue the dialogue between Holocaust survivors and learners far into the future.

The project uses ICT's Light Stage technology to record interviews using multiple high-end cameras for high-fidelity playback. The ICT Dialogue Group's natural language technology allows fluent, open-ended conversation with the recordings. The result is a compelling and emotional interactive experience that enables viewers to ask questions and hear responses in real-time, lifelike conversation even after the survivors have passed away.
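
The article does not spell out the dialogue pipeline, but its core job (matching a visitor's spoken question to the best of thousands of pre-recorded answers) can be pictured as a retrieval problem. The toy sketch below scores simple word overlap over an invented index; the ICT Dialogue Group's real natural language matching is, of course, far more sophisticated.

```python
import string

# Invented index mapping recorded interview questions to video clips.
recorded_clips = {
    "Where were you born?": "clip_birthplace.mp4",
    "How did you survive the war?": "clip_survival.mp4",
    "What message do you have for young people?": "clip_message.mp4",
}

def normalize(text):
    # Lowercase, strip punctuation, and split into a set of words.
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    return set(cleaned.split())

def pick_clip(question):
    """Return the clip whose indexed question shares the most words with
    the visitor's question (a stand-in for real language understanding)."""
    asked = normalize(question)
    best = max(recorded_clips, key=lambda q: len(asked & normalize(q)))
    return recorded_clips[best]

print(pick_clip("What would you tell young people today?"))  # clip_message.mp4
```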

New Dimensions in Testimony debuted in the Illinois Holocaust Museum & Education Center in 2015. Since then, more than 50 survivors and other witnesses have been recorded and presented in dozens of museums around the United States and the world. It remains a powerful application of AI and graphics to preserve the stories and lived experiences of culturally and historically significant figures.

Eric Rice and Bistra Dilkina are co-directors of the Center for AI in Society (CAIS), a remarkable collaboration between the USC Dworak-Peck School of Social Work and the USC Viterbi School of Engineering.

6. Among the First AI for Good Centers in Higher Education (2016)

Launched in 2016, the Center for AI in Society (CAIS) became one of the pioneering AI for Good centers in the U.S., uniting USC Viterbi and the USC Suzanne Dworak-Peck School of Social Work.

In the past, CAIS used AI to prevent the spread of HIV/AIDS among homeless youth. In fact, a pilot study demonstrated a 40% increase in homeless youth seeking HIV/AIDS testing due to an AI-assisted intervention. In 2019, the technology was also used as part of the largest global deployment of predictive AI to thwart poachers and protect endangered animals.
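
CAIS's published HIV-prevention work framed the intervention as an influence-maximization problem: choose a small set of peer leaders whose friendships spread health information through as much of the youth social network as possible. Below is a minimal greedy sketch of that idea over an invented friendship graph, not the project's actual algorithm or data.

```python
# Invented friendship graph: each youth maps to the friends they can reach.
graph = {
    "a": {"b", "c"}, "b": {"a", "d"}, "c": {"a", "e", "f"},
    "d": {"b"}, "e": {"c"}, "f": {"c", "g"}, "g": {"f"},
}

def greedy_peer_leaders(graph, k):
    """Greedily pick k peer leaders, each step adding the person who
    newly reaches the most uncovered youth (themselves plus friends)."""
    leaders, covered = [], set()
    for _ in range(k):
        candidates = [n for n in graph if n not in leaders]
        best = max(candidates, key=lambda n: len(({n} | graph[n]) - covered))
        leaders.append(best)
        covered |= {best} | graph[best]
    return leaders, covered

print(greedy_peer_leaders(graph, 2))  # e.g. (['c', 'b'], covered set of 6 youth)
```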

Today, CAIS fuses AI, social work and engineering in unique ways, such as working with the Los Angeles Homeless Services Authority to address homelessness; battling opioid addiction; mitigating disasters like heat waves, earthquakes and floods; and aiding the mental health of veterans.

CAIS is led by co-directors Eric Rice, a USC Dworak-Peck professor of social work, and Bistra Dilkina, a USC Viterbi associate professor of computer science and the Dr. Allen and Charlotte Ginsburg Early Career Chair.

Pedro Szekely, Mayank Kejriwal and Craig Knoblock of the USC Information Sciences Institute (ISI) are at the vanguard of using computer science to fight human trafficking.

7. AI That Fights Modern Slavery (2017)

Beginning in 2017, a team of researchers at ISI led by Pedro Szekely, Mayank Kejriwal and Craig Knoblock created software called DIG that helps investigators scour the internet to identify possible sex traffickers and begin the process of capturing, charging and convicting them.

Law enforcement agencies across the country, including in New York City, have used DIG as well as other software programs spawned by Memex, a Defense Advanced Research Projects Agency (DARPA)-funded program aimed at developing internet search tools to help investigators thwart sex trafficking, among other illegal activities. The specialized software has triggered more than 300 investigations and helped secure 18 felony sex-trafficking convictions, according to Wade Shen, program manager in DARPA's Information Innovation Office and Memex program leader. It has also helped free several victims.
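
Published descriptions of DIG emphasize extracting structured clues (phone numbers, locations, names) from scraped ads and linking ads that share them, so investigators can query the resulting graph. A heavily simplified, invented illustration of that linking step:

```python
import re
from collections import defaultdict

# Invented ad snippets standing in for scraped online ads.
ads = {
    "ad1": "Call 555-0134 tonight, downtown",
    "ad2": "New in town!! 555-0134, ask for Amy",
    "ad3": "Available now, 555-0188, outcalls only",
}

phone_pattern = re.compile(r"\b\d{3}-\d{4}\b")

# Group ads that advertise the same phone number, one small piece of
# the entity linking DIG performs at web scale.
ads_by_phone = defaultdict(list)
for ad_id, text in ads.items():
    for phone in phone_pattern.findall(text):
        ads_by_phone[phone].append(ad_id)

print(dict(ads_by_phone))  # {'555-0134': ['ad1', 'ad2'], '555-0188': ['ad3']}
```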

In 2015, Manhattan District Attorney Cyrus R. Vance Jr. announced that DIG was being used in every human trafficking case brought by the DA's office. "With technology like Memex," he said, "we are better able to serve trafficking victims and build strong cases against their traffickers."

"This is the most rewarding project I've ever worked on," said Szekely. "It's really made a difference."

Published on July 28th, 2022

Last updated on July 28th, 2022


Quantum Computing in Transportation Market 2022 Industry Analysis by Geographical Regions, Type and Application, Forecast to 2028 – Shanghaiist

MarketsandResearch.biz has conducted a statistical survey of the Global Quantum Computing in Transportation Market covering the years 2022-2028. The research methodology aims to give a deeper picture of the ongoing and upcoming changes the Quantum Computing in Transportation market is and will be exposed to over that forecast period.

The report aims to address the needs of clients seeking to draw conclusions about the Quantum Computing in Transportation market, and it presents the above-mentioned key segments with a country-level analysis as well.

DOWNLOAD FREE SAMPLE REPORT: https://www.marketsandresearch.biz/sample-request/239446

The report also considers both qualitative and quantitative aspects of the Quantum Computing in Transportation market. The qualitative section covers market drivers, opportunities, and customer demands and requirements, which helps organizations develop new strategies to compete over the long term. The quantitative section, on the other hand, contains industry data screened by analysts to support their conclusions. A range of widely used sources of information feed the report: articles, annual reports, databases from governments and NGOs, and data gathered from industry experts and consultants. The figures presented are based on standard research assumptions that vary from region to region.

The report is segmented into four major dimensions depending on the product under study:

Key players in the market

Nations covered

Type

Application

ACCESS FULL REPORT: https://www.marketsandresearch.biz/report/239446/global-quantum-computing-in-transportation-market-2021-by-company-regions-type-and-application-forecast-to-2026

Customization of the Report:

This report can be customized to meet the client's requirements. Please connect with our sales team (sales@marketsandresearch.biz), who will ensure that you get a report that suits your needs. You can also get in touch with our executives at 1-201-465-4211 to share your research requirements.

Contact Us: Mark Stone, Head of Business Development. Phone: 1-201-465-4211. Email: sales@marketsandresearch.biz. Web: http://www.marketsandresearch.biz


CXL Brings Datacenter-sized Computing with 3.0 Standard, Thinks Ahead to 4.0 – HPCwire

A new version of a standard backed by major cloud providers and chip companies could change the way some of the world's largest datacenters and fastest supercomputers are built.

The CXL Consortium on Tuesday announced a new specification called CXL 3.0, also known as Compute Express Link 3.0, that eliminates more chokepoints that slow down computation in enterprise computing and datacenters.

The new spec provides a communication link between chips, memory and storage in systems, and it is two times faster than its predecessor, CXL 2.0.

CXL 3.0 also has improvements for more fine-grained pooling and sharing of computing resources for applications such as artificial intelligence.

"CXL 3.0 is all about improving bandwidth and capacity, and can better provision and manage computing, memory and storage resources," said Kurt Lender, the co-chair of the CXL marketing work group (and senior ecosystem manager at Intel), in an interview with HPCwire.

Hardware and cloud providers are coalescing around CXL, which has steamrolled other competing interconnects. This week, OpenCAPI, an IBM-backed interconnect standard, merged with the CXL Consortium, following in the footsteps of Gen-Z, which did the same in 2020.

The consortium released the first CXL 1.0 specification in 2019, and quickly followed it up with CXL 2.0, which supported PCIe 5.0, found in a handful of chips such as Intel's Sapphire Rapids and Nvidia's Hopper GPU.

The CXL 3.0 spec is based on PCIe 6.0, which was finalized in January. CXL 3.0 supports a data transfer speed of up to 64 gigatransfers per second, the same as PCIe 6.0.
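
As a rough, back-of-the-envelope illustration of what that signaling rate means, ignoring encoding and protocol overhead and assuming a common x16 link width:

```python
# Raw link bandwidth: transfers per second * bits per transfer * lanes.
gigatransfers_per_second = 64   # PCIe 6.0 / CXL 3.0 signaling rate
bits_per_transfer_per_lane = 1  # one bit per transfer on each lane
lanes = 16                      # a typical x16 link

gbytes_per_second = gigatransfers_per_second * bits_per_transfer_per_lane * lanes / 8
print(f"~{gbytes_per_second:.0f} GB/s per direction, before overhead")  # ~128 GB/s
```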

The CXL interconnect can link up chips, storage and memory that are near and far from each other, and that allows system providers to build datacenters as one giant system, said Nathan Brookwood, principal analyst at Insight 64.

CXL's ability to support the expansion of memory, storage and processing in a disaggregated infrastructure gives the protocol a step-up over rival standards, Brookwood said.

Datacenter infrastructures are moving to a decoupled structure to meet the growing processing and bandwidth needs for AI and graphics applications, which require large pools of memory and storage. AI and scientific computing systems also require processors beyond just CPUs, and organizations are installing AI boxes, and in some cases, quantum computers, for more horsepower.

CXL 3.0 improves bandwidth and capacity with better switching and fabric technologies, the CXL Consortium's Lender said.

"CXL 1.1 was sort of in the node, then with 2.0, you can expand a little bit more into the datacenter. And now you can actually go across racks; you can do decomposable or composable systems with the fabric technology that we've brought with CXL 3.0," Lender said.

At the rack level, one can make CPU or memory drawers as separate systems, and improvements in CXL 3.0 provide more flexibility and options in switching resources compared to previous CXL specifications.

Typically, servers have a CPU, memory and I/O, and can be limited in physical expansion. In disaggregated infrastructure, one can take a cable to a separate memory tray through a CXL protocol without relying on the popular DDR bus.

"You can decompose or compose your datacenter as you like it. You have the capability of moving resources from one node to another, and don't have to do as much overprovisioning as we do today, especially with memory," Lender said, adding it's a matter of "you can grow systems and sort of interconnect them now through this fabric and through CXL."

The CXL 3.0 protocol uses the electricals of the PCI-Express 6.0 protocol, along with its protocols for I/O and memory. Some improvements include support for new processors and endpoints that can take advantage of the new bandwidth. CXL 2.0 had single-level switching, while 3.0 has multi-level switching, which introduces more latency on the fabric.

"You can actually start looking at memory like storage: you could have hot memory and cold memory, and so on. You can have different tiering and applications can take advantage of that," Lender said.

The protocol also accounts for the ever-changing infrastructure of datacenters, providing more flexibility on how system administrators want to aggregate and disaggregate processing units, memory and storage. The new protocol opens more channels and resources for new types of chips that include SmartNICs, FPGAs and IPUs that may require access to more memory and storage resources in datacenters.

"HPC composable systems: you're not bound by a box. HPC loves clusters today. And [with CXL 3.0] now you can do coherent clusters and low latency. The growth and flexibility of those nodes is expanding rapidly," Lender said.

The CXL 3.0 protocol can support up to 4,096 nodes, and has a new concept of memory sharing between different nodes. That is an improvement from a static setup in older CXL protocols, where memory could be sliced and attached to different hosts, but could not be shared once allocated.

"Now we have sharing where multiple hosts can actually share a segment of memory. Now you can actually look at quick, efficient data movement between hosts if necessary, or if you have an AI-type application that you want to hand data from one CPU or one host to another," Lender said.

The new feature allows peer-to-peer connection between nodes and endpoints in a single domain. That sets up a wall in which traffic can be isolated to move only between nodes connected to each other. That allows for faster accelerator-to-accelerator or device-to-device data transfer, which is key in building out a coherent system.

"If you think about some of the applications and then some of the GPUs and different accelerators, they want to pass information quickly, and now they have to go through the CPU. With CXL 3.0, they don't have to go through the CPU this way, but the CPU is coherent, aware of what's going on," Lender said.

The pooling and allocation of memory resources is managed by software called Fabric Manager. The software can sit anywhere in the system or hosts to control and allocate memory, but it could ultimately impact software developers.

"If you get to the tiering level, and when you start getting all the different latencies in the switching, that's where there will have to be some application awareness and tuning of application. I think we certainly have that capability today," Lender said.

It could be two to four years before companies start releasing CXL 3.0 products, and the CPUs will need to be aware of CXL 3.0, Lender said. Intel built in support for CXL 1.1 in its Sapphire Rapids chip, which is expected to start shipping in volume later this year. The CXL 3.0 protocol is backward compatible with the older versions of the interconnect standard.

CXL products based on earlier protocols are slowly trickling into the market. SK Hynix this week introduced its first DDR5 DRAM-based CXL (Compute Express Link) memory samples, and will start manufacturing CXL memory modules in volume next year. Samsung has also introduced CXL DRAM earlier this year.

While products based on CXL 1.1 and 2.0 protocols are on a two-to-three-year product release cycle, CXL 3.0 products could take a little longer as it takes on a more complex computing environment.

"CXL 3.0 could actually be a little slower because of some of the Fabric Manager, the software work. They're not simple systems; when you start getting into fabrics, people are going to want to do proof of concepts and prove out the technology first. It's going to probably be a three-to-four year timeframe," Lender said.

Some companies already started work on CXL 3.0 verification IP six to nine months ago, and are fine-tuning the tools to the final specification, Lender said.

The CXL Consortium has a board meeting in October to discuss the next steps, which could also involve CXL 4.0. The standards organization for PCIe, the PCI Special Interest Group, last month announced it was planning PCIe 7.0, which increases the data transfer speed to 128 gigatransfers per second, double that of PCIe 6.0.

Lender was cautious about how PCIe 7.0 could potentially fit into a next-generation CXL 4.0. CXL has its own set of I/O, memory and cache protocols.

"CXL sits on the electricals of PCIe, so I can't commit or absolutely guarantee that [CXL 4.0] will run on 7.0. But that's the intent, to use the electricals," Lender said.

In that case, one of the tenets of CXL 4.0 will be to double the bandwidth by going to PCIe 7.0, but beyond that, "everything else will be what we do: more fabric or different tunings," Lender said.

CXL has been on an accelerated pace, with three specification releases since its formation in 2019. There was confusion in the industry on the best high-speed, coherent I/O bus, but the focus has now coalesced around CXL.

"Now we have the fabric. There are pieces of Gen-Z and OpenCAPI that aren't even in CXL 3.0, so will we incorporate those? Sure, we'll look at doing that kind of work moving forward," Lender said.


Ed Husic demands universities reveal Google partnership terms – The Australian Financial Review

"The government needs to be funding that kind of research and I'm determined to develop a sovereign quantum computing capability here," Mr Husic said.

"We don't want this to be like solar technology, where we were pioneers until it went offshore and we lost much of the environmental and economic benefits."

Google Australia confirmed the intellectual property would remain in Australia.

"The universities we are working with retain ownership of any intellectual property they create," a Google spokeswoman said.

"This funding stems from the commitment Google made last year to support Australia's digital future and that is what these collaborations seek to achieve."

Mr Husic has confirmed he plans to direct part of the $1 billion tech investment fund he pledged during the election campaign towards Australias nascent quantum computing industry.

The tech fund is part of the $15 billion National Reconstruction Fund (NRF), a body modelled on the Clean Energy Finance Corporation, which will invest in tech and innovative manufacturing to help drive a post-pandemic economic recovery.

It is an off-budget measure providing investment support through loans, equity investments and loan guarantees for businesses in critical technologies; taking minority shareholder positions in relevant companies, rather than having majority ownership.

"The Australian government should be the main investment partner for these frontier technologies," Mr Husic said.

"Rather than partnering with overseas firms, we hope to be a part of the profound economic upside offered by quantum computing and Australia's growing capacity to develop it."

Google's collaboration with the universities will underpin its long-term goal to develop bigger, more sophisticated quantum algorithms that can be used in applications such as machine learning and artificial intelligence, making quantum computing useful to the company's core business of selling ads.

Australian universities are building a global reputation as developers of quantum computing.

Earlier this year, Silicon Quantum Computing, which was spun out of the University of NSW in 2017, announced it was hoping to bank $130 million to continue its development of a quantum computer by 2030.

Internationally, quantum computing is part of the trilateral AUKUS partnership between Australia, the US and Britain, which has established working groups to hasten the development of quantum technologies, artificial intelligence and undersea capabilities.

Seventeen trilateral working groups have begun work under the AUKUS banner, with nine focused on conventionally armed nuclear-powered submarines, and eight relating to other advanced military capabilities.


Congress Is Giving Billions to the Chip Industry. Strings Are Attached. – The New York Times

It's an embrace of industrial policy not seen in Washington for decades. Gary Hufbauer, a nonresident senior fellow at the Peterson Institute for International Economics who has surveyed U.S. industrial policy, said the bill was the most significant investment in industrial policy that the United States had made in at least 50 years.

Worrying outlook. Amid persistently high inflation, rising consumer prices and declining spending, the American economy is showing clear signs of slowing down, fueling concerns about a potential recession. Here are eight other measures signaling trouble ahead:

Consumer confidence. In June, the University of Michigan's survey of consumer sentiment hit its lowest level in its 70-year history, with nearly half of respondents saying inflation is eroding their standard of living.

The housing market. Demand for real estate has decreased, and construction of new homes is slowing. These trends could continue as interest rates rise, and real estate companies, including Compass and Redfin, have laid off employees in anticipation of a downturn in the housing market.

Copper. A commodity seen by analysts as a measure of sentiment about the global economy because of its widespread use in buildings, cars and other products, copper is down more than 20 percent since January, hitting a 17-month low on July 1.

Oil. Crude prices are up this year, in part because of supply constraints resulting from Russia's invasion of Ukraine, but they have recently started to waver as investors worry about growth.

The bond market. Long-term interest rates in government bonds have fallen below short-term rates, an unusual occurrence that traders call a yield-curve inversion. It suggests that bond investors are expecting an economic slowdown.

American politicians of both parties have long hailed the economic power of free markets and free trade while emphasizing the dangers and inefficiencies of government interference. Republicans, and some Democrats, argued that the government was a poor arbiter of winners and losers in business, and that its interference in the private market was, at best, wasteful and often destructive.

But China's increasing dominance of key global supply chains, like those for rare earth metals, solar panels and certain pharmaceuticals, has generated new support among both Republicans and Democrats for the government to nurture strategic industries. South Korea, Japan, the European Union and other governments have outlined aggressive plans to woo semiconductor factories. And the production of many advanced semiconductors in Taiwan, which is increasingly at risk of invasion, has become for many an untenable security threat.

Semiconductors are necessary to power other key technologies, including quantum computing, the internet of things, artificial intelligence and fighter jets, as well as mundane items like cars, computers and coffee makers.

"The question really needs to move from why do we pursue an industrial strategy to how do we pursue one," Brian Deese, the director of the National Economic Council, said in an interview. "This will allow us to really shape the rules of where the most cutting-edge innovation happens."

Disruptions in the supply chains for essential goods during the pandemic have added to the sense of urgency to stop American manufacturing from flowing overseas. That includes semiconductors, where the U.S. share of global manufacturing fell to 12 percent in 2020 from 37 percent in 1990, according to the Semiconductor Industry Association. China's share of manufacturing rose to 15 percent from almost nothing in the same time period.


How Artificial Intelligence Is Changing The Odds In Online Casino – Intelligent Living

Online casinos are increasing the use of artificial intelligence to enhance a player's gambling experience. With internet use rising over the past decade, the online casino market has more than doubled over that period. It is estimated that by 2024, the numbers will have gone over 65 million in the UK alone.

It may seem obvious why the online market has such high growth rates, especially considering the events that occurred in the last couple of years and how they pushed people to use the internet for many services, including retail and banking. In this editorial, we examine the effects of artificial intelligence's continued advancement in the casino industry, with a focus on the real money slot game on amazon slots.

Several factors contribute to the online casino market turning into a billion-pound industry. For instance, digital wallets such as PayPal have assisted casinos in appearing more accessible to people who want instant transactions. Similarly, daily and weekly bonuses and promotions have increased registration numbers on online casino sites.

Another example is the features found in the slot games that offer free spins daily and help increase players' odds of striking the big jackpots, winning cash prizes, live casino bonuses, and free sportsbook wagers. And, if punters don't get it right the first few times, they have an incentive to keep trying.

So, attributes such as electronic payment options, daily bonuses, and promotions all encourage new and existing players to keep coming back for more. Thus, winning odds for returning players increase, and the industry's market value goes up all the same.

The introduction of AI technology has remarkably affected the online casino industry by transforming it into what it is these days. How is that, you ask? Over the past few years, casino operators and gaming developers have had more access to user information, which has aided them in creating and improving games best suited for their customers.

The application of AI in gathering user data on new and returning players has assisted operators and developers in keeping fresh content that maintains relevance while creating targeted marketing campaigns. AI helps determine which games players engage with the most, how much traffic a casino site receives, and how much wagering takes place on games and sporting events. Advanced bots in gambling provide higher quality customer service in online gambling.

The good news for casino providers is that AI could save a lot on personnel costs. The bad news for those employees is that they would end up losing work to robots, which means an entire industry would take a hit. The overall positive thing about it all, though, is that people are innovative by nature. Perhaps the closing of one old industry might drive an opening for another.


Artificial intelligence: Top trending companies on Twitter in Q2 2022 – Verdict

Verdict has listed five of the companies that trended the most in Twitter discussions related to artificial intelligence (AI), using research from GlobalData's Technology Influencer platform.

The top companies are the most mentioned companies among Twitter discussions of more than 629 AI experts tracked by GlobalData's Technology Influencer platform during the second quarter (Q2) of 2022.

Alphabet's Google claiming its new AI models allow for nearly instant weather forecasts, the company's new AI Test Kitchen app helping users explore the potential of conversational AI, and Google Research's collaboration with New York Stem Cell Foundation (NYSCF) Research Institute scientists to detect cellular signatures of Parkinson's disease, were some of the popular discussions in Q2 2022.

Ronald van Loon, CEO of the Intelligent World, an influencer network that connects businesses and experts to audiences, shared an article on multinational technology conglomerate Alphabet's Google stating that its new AI models allow for nearly instantaneous weather forecasts. The increasingly important tool to address climate change is in its initial stages of development and is yet to be used in commercial systems, the article detailed. However, a non-peer-reviewed paper published by Google's researchers described how they were able to accurately predict rainfall up to six hours in advance at a one kilometre resolution from just minutes of calculation. Researchers noted that short-term weather forecasts will be critical in crisis management and to minimise damages to life and property.

Alphabet Inc is the holding company of Google, a global technology company, headquartered in Mountain View, California, the US. The company offers a wide range of products and platforms, including search, maps, calendar, ads, Gmail, Google Play, Android, Google Cloud, Chrome, and YouTube. It also offers online advertising services through its AdSense, internet, TV services, licensing and research and development services, and is involved in investments related to infrastructure, data management, analytics, and AI.

Growing enterprise gaps in AI adoption leading to the popularity of Amazon's Amazon Web Services (AWS) SageMaker, and AWS introducing a solution of AI services to manage contact centre workflows, were some of the popular discussions in Q2.

Spiros Margaris, a venture capitalist and board member at the venture capital firm Margaris Ventures, shared an article on AI's growing enterprise gaps leading to the growth of ecommerce company Amazon's Amazon Web Services (AWS) SageMaker. A study attributed AI's inability to reach maturity today to many enterprises lacking a strategy that prioritised security, compliance, fairness, bias, and ethics, the article noted. According to the assessment of enterprise AI adoption, only 26% of organisations have AI projects in production, the same percentage as the previous year. Additionally, 31% of enterprises reported not leveraging AI in their businesses today, up from 13% last year. Only 53% of AI projects make it out of pilot into production, taking on average eight months or longer to develop scalable models, the article further highlighted.

SageMaker's architecture is built to adapt to changing model building, validating, training, and deployment situations. It integrates across AI services, machine learning (ML) frameworks, and infrastructure in the middle of the AWS ML Stack, the article detailed. As a result, SageMaker offers greater flexibility in handling training, notebooks, tuning, debugging, and deploying models. In other words, it enables the model interpretability and transparency enterprises require to make AI less risky.

Amazon Web Services Inc is a subsidiary of the online retailer and web service provider Amazon, headquartered in Seattle, Washington, the US. The company offers a range of cloud infrastructure services including compute, storage, databases, analytics, networking, mobile, developer tools, augmented reality (AR) and virtual reality (VR), robotics, game tech, ML, management tools, content delivery, media services, customer engagement, app streaming and security, identity and compliance.

NVIDIA's launch of an AI computing platform for medical devices and computational sensing systems, the company's invention of a new video-to-video synthesis AI model, and its Morpheus AI framework allowing developers to create and scale cybersecurity solutions, were popularly discussed in the second quarter.

Elitsa Krumova, a technology influencer, shared an article on the technology company NVIDIA introducing Clara Holoscan MGX™, a platform for the development and deployment of real-time AI applications at the edge for the medical device industry. The platform is specifically created keeping in mind regulatory standards, and is an expansion of the Clara Holoscan platform with the aim of offering a comprehensive, medical-grade reference architecture and long-term software support, to speed up innovation in the medical device industry, the article detailed.

Kimberly Powell, NVIDIA's vice president of healthcare, stated that deploying real-time AI in healthcare was critical for areas such as drug discovery, diagnostics, and surgery. The platform's combination with AI sped up computing and advanced visualisation, accelerated the productisation of AI, and also delivered software-as-a-service business models for the industry, the article noted.

Nvidia Corp is a technology company headquartered in Santa Clara, California, the US. The company designs and develops graphics processing units, central processing units, and system-on-a-chip units for gaming, professional visualisation, data centre, and automotive markets. It also offers solutions for AI and data science, data centre and cloud computing, design and visualisation, edge computing, high-performance computing, and self-driving vehicles.

Meta (formerly Facebook) describing how AI will unlock the metaverse, the company's new AI that can discover and refine formulas for increasingly strong, low-carbon concrete, and the company releasing the Mephisto platform for collecting data to train AI models, were some of the popular discussions in the second quarter.

Mario Pawlowski, CEO of iTrucker, a trucking, logistics, and supply chain-related company, shared an article on Meta describing how technologies such as AI, AR, VR, 5G, and blockchain will merge to power the metaverse. Jérôme Pesenti, leader of Facebook AI, stated that AI will be key to the metaverse and that the role of Meta AI was to advance AI through further research in AI breakthroughs and improving the company's products through them, the article highlighted.

Meta AI is particularly making progress in areas such as embodiment and robotics, creativity and self-supervised learning, where AI can learn the data without human intervention, Pesenti added.

Facebook Inc (now Meta) is a technology company headquartered in Menlo Park, California, the US. The company is a provider of social networking, advertising, and business insight solutions. Through its virtual-reality vision, the metaverse, Meta is focusing on developing a virtual environment that allows people to interact and connect with technology. Some of its major products include Facebook, Instagram, Oculus, Messenger, and WhatsApp.

International Business Machines Corp (IBM) rolling out more AI drive-thru McDonald's chatbots, the company making its cancer-fighting AI projects open source, and the company's non-von Neumann AI hardware breakthrough in neuromorphic computing, were some of the popular discussions in Q2.

Evan Kirstel, chief digital evangelist and co-founder of the marketing firm eViRa Health, shared an article on the technology company IBM adding its natural language processing (NLP) software to several McDonald's drive-thrus. This came right after the company bought the automated order technology unit from the fast-food chain, as well as the team that built it, the article noted. Automated ordering had been piloted across ten McDonald's locations in Chicago in June 2021, with humans not required to intervene in circa four out of every five orders made with the AI drive-thru bots.

Rob Thomas, senior vice president of global markets at IBM, stated that the fast-food company had been struggling with ordering, and that IBM's NLP technology could augment McDonald's technology and service in a time of wage inflation and the need for quick-service restaurants, the article highlighted.

IBM is a technology company headquartered in Armonk, New York, the US. The company creates and sells system hardware and software, and offers infrastructure, hosting, and consulting services. Its technology-based product line includes analytics, AI, automation, blockchain, cloud computing, IT infrastructure, IT management, cybersecurity, and software development products. The company also offers a range of services including cloud, networking, security, technology consulting, application services, business resiliency services, and technology support services.


Artificial intelligence isn't that intelligent – The Strategist

Late last month, Australia's leading scientists, researchers and businesspeople came together for the inaugural Australian Defence Science, Technology and Research Summit (ADSTAR), hosted by the Defence Department's Science and Technology Group. In a demonstration of Australia's commitment to partnerships that would make our non-allied adversaries flinch, Chief Defence Scientist Tanya Monro was joined by representatives from each of the Five Eyes partners, as well as Japan, Singapore and South Korea. Two streams focusing on artificial intelligence were dedicated to research and applications in the defence context.

"At the end of the day, isn't hacking an AI a bit like social engineering?"

A friend who works in cybersecurity asked me this. In the world of information security, social engineering is the game of manipulating people into divulging information that can be used in a cyberattack or scam. Cyber experts may therefore be excused for assuming that AI might display some human-like level of intelligence that makes it difficult to hack.

Unfortunately, it's not. It's actually very easy.

The man who coined the term "artificial intelligence" in the 1950s, cybernetics researcher John McCarthy, also said that once we know how it works, it isn't called AI anymore.

AI is not some all-powerful capability that, despite how much it can mimic humans, also thinks like humans. Most implementations, specifically machine-learning models, are just very complicated implementations of the statistical methods we're familiar with from high school. It doesn't make them smart, merely complex and opaque. This leads to problems in AI safety and security.

Bias in AI has long been known to cause problems. For example, AI-driven recruitment systems in tech companies have been shown to filter out applications from women, and re-offence prediction systems in US prisons exhibit consistent biases against black inmates. Fortunately, bias and fairness concerns in AI are now well known and actively investigated by researchers, practitioners and policymakers.

AI security is different, however. While AI safety deals with the impact of the decisions an AI might make, AI security looks at the inherent characteristics of a model and whether it could be exploited. AI systems are vulnerable to attackers and adversaries just as cyber systems are.

A known challenge is adversarial machine learning, where adversarial perturbations added to an image cause a model to predictably misclassify it.

When researchers added adversarial noise imperceptible to humans to an image of a panda, the model predicted it was a gibbon.

In another study, a 3D-printed turtle had adversarial perturbations embedded in its surface so that an object-detection model believed it to be a rifle. This was true even when the object was rotated.
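
The panda-to-gibbon result came from exactly this kind of gradient-based perturbation. Below is a minimal sketch of the fast gradient sign method in PyTorch, assuming a pretrained classifier `model`, a normalized input batch `image`, and its true `label` already exist; it illustrates the general technique, not the cited studies' own code.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.007):
    """Fast gradient sign method: shift every pixel by +/- epsilon in the
    direction that most increases the classifier's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Usage sketch (model, image and label are assumed to be defined):
# adversarial = fgsm_perturb(model, image, label)
# print(model(image).argmax(dim=1), model(adversarial).argmax(dim=1))
```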

I can't help but notice disturbing similarities between the rapid adoption of and misplaced trust in the internet in the latter half of the last century and the unfettered adoption of AI now.

It was a sobering moment when, in 2018, the then US director of national intelligence, Daniel Coats, called out cyber as the greatest strategic threat to the US.

Many nations are publishing AI strategies (including Australia, the US and the UK) that address these concerns, and there's still time to apply the lessons learned from cyber to AI. These include investment in AI safety and security at the same pace as investment in AI adoption is made; commercial solutions for AI security, assurance and audit; legislation for AI safety and security requirements, as is done for cyber; and greater understanding of AI and its limitations, as well as the technologies, like machine learning, that underpin it.

Cybersecurity incidents have also driven home the necessity for the public and private sectors to work together not just to define standards, but to reach them together. This is essential both domestically and internationally.

Autonomous drone swarms, undetectable insect-sized robots and targeted surveillance based on facial recognition are all technologies that exist. While Australia and our allies adhere to ethical standards for AI use, our adversaries may not.

Speaking on resilience at ADSTAR, Chief Scientist Cathy Foley discussed how pre-empting and planning for setbacks is far more strategic than simply ensuring you can get back up after one. That couldn't be more true when it comes to AI, especially given Defence's unique risk profile and the current geostrategic environment.

I read recently that Ukraine is using AI-enabled drones to target and strike Russians. Notwithstanding the ethical issues this poses, the article I read was written in Polish and translated to English for me by Google's language translation AI. Artificial intelligence is already pervasive in our lives. Now we need to be able to trust it.


FedEx to expand robotics technology and AI – CBS19.tv KYTX

MEMPHIS, Tenn. – Memphis-based FedEx will be using more robots and artificial intelligence.

This comes after the company announced an expanded relationship with Berkshire Grey, a Massachusetts-based company that develops robotics technology and AI software for logistics businesses.

Berkshire Grey's CEO said the new agreement will help with supply chain issues and ease the physical burden on employees.

"Berkshire Grey and FedEx are strategically aligned. These new agreements reflect our mutual commitment to innovations in robotic automation that can remove barriers within the supply chain, ease the physical burden on employees and streamline operations," said Tom Wagner, CEO of Berkshire Grey. "We look forward to working together on this new program and to advancing other automation programs with FedEx moving forward."

The company has also worked with Walmart and Target in the past with technology to compete with Amazon.


Artificial Intelligence Regulation Updates: China, EU, and U.S. – The National Law Review

Wednesday, August 3, 2022

Artificial Intelligence (AI) systems are poised to drastically alter the way businesses and governments operate on a global scale, with significant changes already under way. This technology has manifested itself in multiple forms including natural language processing, machine learning, and autonomous systems, but with the proper inputs can be leveraged to make predictions, recommendations, and even decisions.

Accordingly, enterprises are increasingly embracing this dynamic technology. A 2022 global study by IBM found that 77% of companies are either currently using AI or exploring AI for future use, creating value by increasing productivity through automation, improved decision-making, and enhanced customer experience. Further, according to a 2021 PwC study, the COVID-19 pandemic increased the pace of AI adoption for 52% of companies as they sought to mitigate the crisis's impact on workforce planning, supply chain resilience, and demand projection.

For these many businesses investing significant resources into AI, it is critical to understand the current and proposed legal frameworks regulating this novel technology. Specifically for businesses operating globally, the task of ensuring that their AI technology complies with applicable regulations will be complicated by the differing standards that are emerging from China, the European Union (EU), and the U.S.

China has taken the lead in moving AI regulations past the proposal stage. In March 2022, China passed a regulation governing companies' use of algorithms in online recommendation systems, requiring that such services be moral, ethical, accountable, transparent, and "disseminate positive energy." The regulation mandates companies notify users when an AI algorithm is playing a role in determining which information to display to them and give users the option to opt out of being targeted. Additionally, the regulation prohibits algorithms that use personal data to offer different prices to consumers. We expect these themes to manifest themselves in AI regulations throughout the world as they develop.

Meanwhile in the EU, the European Commission has published an overarching regulatory framework proposal titled the Artificial Intelligence Act, which would have a much broader scope than China's enacted regulation. The proposal focuses on the risks created by AI, with applications sorted into categories of minimal risk, limited risk, high risk, or unacceptable risk. Depending on an application's designated risk level, there will be corresponding government action or obligations. So far, the proposed obligations focus on enhancing the security, transparency, and accountability of AI applications through human oversight and ongoing monitoring. Specifically, companies will be required to register stand-alone high-risk AI systems, such as remote biometric identification systems, in an EU database. If the proposed regulation is passed, the earliest date for compliance would be the second half of 2024, with potential fines for noncompliance ranging from 2% to 6% of a company's annual revenue.

Additionally, the previously enacted EU General Data Protection Regulation (GDPR) already carries implications for AI technology. Article 22 prohibits decisions based on solely automated processes that produce legal consequences or similar effects for individuals unless the program gains the user's explicit consent or meets other requirements.

In the United States, there has been a fragmented approach to AI regulation thus far, with states enacting their own patchwork AI laws. Many of the enacted regulations focus on establishing various commissions to determine how state agencies can utilize AI technology and to study AI's potential impacts on the workforce and consumers. Common pending state initiatives go a step further and would regulate AI systems' accountability and transparency when they process and make decisions based on consumer data.

On a national level, the U.S. Congress enacted the National AI Initiative Act in January 2021, creating the National AI Initiative that provides an overarching framework to strengthen and coordinate AI research, development, demonstration, and education activities across all U.S. departments and agencies. The Act created new offices and task forces aimed at implementing a national AI strategy, implicating a multitude of U.S. administrative agencies including the Federal Trade Commission (FTC), Department of Defense, Department of Agriculture, Department of Education, and the Department of Health and Human Services.

Pending national legislation includes the Algorithmic Accountability Act of 2022, which was introduced in both houses of Congress in February 2022. In response to reports that AI systems can lead to biased and discriminatory outcomes, the proposed Act would direct the FTC to create regulations that mandate covered entities, including businesses meeting certain criteria, to perform impact assessments when using automated decision-making processes. This would specifically include those derived from AI or machine learning.

While the FTC has not promulgated AI-specific regulations, this technology is on the agency's radar. In April 2021 the FTC issued a memo which apprised companies that using AI that produces discriminatory outcomes equates to a violation of Section 5 of the FTC Act, which prohibits unfair or deceptive practices. And the FTC may soon take this warning a step farther: in June 2022 the agency indicated that it will submit an Advance Notice of Proposed Rulemaking to ensure that algorithmic decision-making does not result in harmful discrimination, with the public comment period ending in August 2022. The FTC also recently issued a report to Congress discussing how AI may be used to combat online harms, ranging from scams, deep fakes, and opioid sales, but advised against over-reliance on these tools, citing the technology's susceptibility to producing inaccurate, biased, and discriminatory outcomes.

Companies should carefully discern whether other non-AI specific regulations could subject them to potential liability for their use of AI technology. For example, the U.S. Equal Employment Opportunity Commission (EEOC) put forth guidance in May 2022 warning companies that their use of algorithmic decision-making tools to assess job applicants and employees could violate the Americans with Disabilities Act by, in part, intentionally or unintentionally screening out individuals with disabilities. Further analysis of the EEOC's guidance can be found here.

Many other U.S. agencies and offices are beginning to delve into the fray of AI. In November 2021, the White House Office of Science and Technology Policy solicited engagement from stakeholders across industries in an effort to develop a Bill of Rights for an Automated Society. Such a Bill of Rights could cover topics like AI's role in the criminal justice system, equal opportunities, consumer rights, and the healthcare system. Additionally, the National Institute of Standards and Technology (NIST), which falls under the U.S. Department of Commerce, is engaging with stakeholders to develop a voluntary risk management framework for trustworthy AI systems. The output of this project may be analogous to the EU's proposed regulatory framework, but in a voluntary format.

The overall theme of enacted and pending AI regulations globally is maintaining the accountability, transparency, and fairness of AI. For companies leveraging AI technology, ensuring that their systems remain compliant with the various regulations intended to achieve these goals could be difficult and costly. Two aspects of AI's decision-making process make oversight particularly demanding:

Opaqueness, where users can control data inputs and view outputs, but are often unable to explain how and with which data points the system made a decision.

Frequent adaptation, where processes evolve over time as the system learns.

Therefore, it is important for regulators to avoid overburdening businesses, to ensure that stakeholders may still leverage AI technology's great benefits in a cost-effective manner. The U.S. has the opportunity to observe the outcomes of the current regulatory action from China and the EU to determine whether their approaches strike a favorable balance. However, the U.S. should potentially accelerate its promulgation of similar laws so that it can play a role in setting the global tone for AI regulatory standards.

Thank you to co-author Lara Coole, a summer associate in Foley & Lardner's Jacksonville office, for her contributions to this post.
