
Category Archives: Ai

AI agents like Rabbit aim to book your vacation and order your Uber – NPR

Posted: February 26, 2024 at 12:18 am

The AI-powered Rabbit R1 device is seen at Rabbit Inc.'s headquarters in Santa Monica, California. The gadget is meant to serve as a personal assistant, fulfilling tasks such as ordering food on DoorDash for you, calling an Uber or booking your family's vacation. (Stella Kalinina for NPR)

ChatGPT can give you travel ideas, but it won't book your flight to Cancún.

Now, artificial intelligence is here to help us scratch items off our to-do lists.

A slate of tech startups are developing products that use AI to complete real-world tasks.

Silicon Valley watchers see this new crop of "AI agents" as being the next phase of the generative AI craze that took hold with the launch of chatbots and image generators.

Last year, Sam Altman, the CEO of OpenAI, the maker of ChatGPT, nodded to the future of AI errand-helpers at the company's developer conference.

"Eventually, you'll just ask a computer for what you need, and it'll do all of these tasks for you," Altman said.

One of the most hyped companies doing this is called Rabbit. It has developed a device called the Rabbit R1. Chinese entrepreneur Jesse Lyu launched it at this year's CES, the annual tech trade show, in Las Vegas.

It's a bright orange gadget about half the size of an iPhone. It has a button on the side that you push and talk into like a walkie-talkie. In response to a request, an AI-powered rabbit head pops up and tries to fulfill whatever task you ask.

Chatbots like ChatGPT rely on technology known as a large language model, and Rabbit says it uses both that system and a new type of AI it calls a "large action model." In basic terms, it learns how people use websites and apps and mimics these actions after a voice prompt.

The R1 won't just play a song on Spotify or start streaming a video on YouTube, tasks Siri and other voice assistants can already handle; it will also order DoorDash for you, call an Uber or book your family's vacation. And it makes suggestions after learning a user's tastes and preferences.
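Rabbit has not published how its "large action model" works, but the description above suggests a rough mental model: parse the spoken request into an intent, then replay a learned sequence of app actions. The sketch below is a purely hypothetical illustration of that idea; every name and playbook in it is invented, not Rabbit's actual system.

```python
# Hypothetical mental model of a "large action model": a voice request is
# parsed into an intent, which is replayed as a learned sequence of app
# actions. Illustrative only; Rabbit has not published implementation details.
from dataclasses import dataclass

@dataclass
class Step:
    action: str   # e.g. "open", "search", "tap", "confirm"
    target: str

# "Learned" action scripts, one per service the model has watched people use.
PLAYBOOKS = {
    "order_food": [Step("open", "doordash"), Step("search", "{dish}"),
                   Step("tap", "first_result"), Step("confirm", "checkout")],
    "call_ride": [Step("open", "uber"), Step("search", "{destination}"),
                  Step("confirm", "request_ride")],
}

def parse_intent(utterance: str) -> tuple[str, dict]:
    """Stand-in for the language model that maps speech to an intent."""
    if "uber" in utterance.lower():
        return "call_ride", {"destination": "home"}
    return "order_food", {"dish": "pad thai"}

def run(utterance: str) -> None:
    intent, slots = parse_intent(utterance)
    for step in PLAYBOOKS[intent]:
        print(step.action, step.target.format(**slots))

run("Get me an Uber home")
```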

Storing potentially dozens or hundreds of a person's passwords raises instant questions about privacy. But Rabbit claims it saves user credentials in a way that makes it impossible for the company, or anyone else, to access someone's personal information. The company says it will not sell or share user data with third parties "without your formal, explicit permission."

A Rabbit employee demonstrates the company's Rabbit R1 device. The company says more than 80,000 people have preordered the device for $199. (Stella Kalinina for NPR)

The company, which says more than 80,000 people have preordered the Rabbit R1, will start shipping the devices in the coming months.

"This is the first time that AI exists in a hardware format," said Ashley Bao, a spokeswoman for Rabbit at the company's Santa Monica, Calif., headquarters. "I think we've all been waiting for this moment. We've had our Alexa. We've had our smart speakers. But like none of them [can] perform tasks from end to end and bring words to action for you."

Excitement in Silicon Valley over AI agents is fueling an increasingly crowded field of gizmos and services. Google and Microsoft are racing to develop products that harness AI to automate busywork. The web browser Arc is building a tool that uses an AI agent to surf the web for you. Another startup, called Humane, has developed a wearable AI pin that projects a display image on a user's palm. It's supposed to assist with daily tasks and also make people pick up their phones less frequently.

Similarly, Rabbit claims its device will allow people to get things done without opening apps (you log in to all your various apps on a Rabbit web portal, so it uses your credentials to do things on your behalf).

To work, the Rabbit R1 has to be connected to Wi-Fi, but there is also a SIM card slot, in case people want to buy a separate data plan just for the gadget.

When asked why anyone would want to carry around a separate device just to do something your smartphone could do in 30 seconds, Rabbit CEO Lyu argued that using apps to place orders and make requests all day takes longer than we might imagine.

"We are looking at the entire process, end to end, to automate as much as possible and make these complex actions much quicker and much more intuitive than what's currently possible with multiple apps on a smartphone," Lyu said.

ChatGPT's introduction in late 2022 set off a frenzy at companies in many industries trying to ride the latest tech industry wave. That chatbot exuberance is about to be transferred to the world of gadgets, said Duane Forrester, an analyst at the firm Yext.

Google and Microsoft are racing to develop products that harness AI to automate busywork, which might make other AI-powered assistants obsolete. (Stella Kalinina for NPR)

"Early on, with the unleashing of AI, every single product or service attached the letters "A" and "I" to whatever their product or service was," Forrester said. "I think we're going to end up seeing a version of that with hardware as well."

Forrester said an AI walkie-talkie might quickly become obsolete when companies like Apple and Google make their voice assistants smarter with the latest AI innovations.

"You don't need a different piece of hardware to accomplish this," he said. "What you need is this level of intelligence and utility in our current smartphones, and we'll get there eventually."

Researchers are worried that AI-powered personal assistant technology could eventually go wrong. (Stella Kalinina for NPR)

Researchers are worried about where such technology could eventually go awry.

The AI assistant purchasing the wrong nonrefundable flight, for instance, or sending a food order to someone else's house are among potential snafus that analysts have mentioned.

A 2023 paper by the Center for AI Safety warned against AI agents going rogue. It said if an AI agent is given an "open-ended goal," say, to maximize a person's stock market profits, without being told how to achieve that goal, it could go very wrong.

"We risk losing control over AIs as they become more capable. AIs could optimize flawed objectives, drift from their original goals, become power-seeking, resist shutdown, and engage in deception. We suggest that AIs should not be deployed in high-risk settings, such as by autonomously pursuing open-ended goals or overseeing critical infrastructure, unless proven safe," according to a summary of the paper.

At Rabbit's Santa Monica office, Rabbit R1 Creative Director Anthony Gargasz pitches the device as a social media reprieve. Use it to make a doctor's appointment or book a hotel without being sucked into an app's feed for hours.

"Absolutely no doomscrolling on the Rabbit R1," said Gargasz. "The scroll wheel is for intentional interaction."

His colleague Ashley Bao added that the whole point of the gadget is to "get things done efficiently." But she acknowledged there's a cutesy factor too, comparing it to the keychain-size electronic pets that were popular in the 1990s.

"It's like a Tamagotchi but with AI," she said.


What’s the point of Elon Musk’s AI company? – The Verge

Posted: at 12:18 am

Look, I've been following the adventures of xAI, Elon Musk's AI company, and I've come to a conclusion: its only real idea is "What if AI, but with Elon Musk this time?"

What's publicly available about xAI makes it seem like Musk showed up to the generative AI party late, and without any beer. This is 2024. The party is crowded now. xAI doesn't seem to have anything that would let it stand out beyond, well, Musk.

That hasn't stopped Musk from shopping his idea to investors, though! Last December, xAI said it was trying to raise $1 billion in a filing with the Securities and Exchange Commission. (This is not the same company as the X that was formerly known as Twitter.) There is also reporting from the Financial Times saying Musk is looking for up to $6 billion in funding.


To be sure, Musk has tweeted that xAI is "not raising capital and I have had no conversations with anyone in this regard." Musk says a lot of things in public and only some of them are true, so I'm going to rock with the filing, which I have seen with my own eyes.

xAI (not Twitter) is sort of an odd entity. Besides its entanglement with X (Twitter), it doesn't really seem to have a defined purpose. The xAI pitch deck obtained by Bloomberg relies on two things: data from X (Twitter) and data from Tesla.

xAI (not Twitter) so far has one product, a supposedly sassy LLM called Grok, which users can access by paying $16 a month to X (the Twitter company) and then going through the X (Twitter) interface. xAI (not Twitter) does not have a standalone interface for Grok. My colleague Emilia David has characterized it as having no reason to exist, because it isn't meaningfully better than free chatbot offerings from its competitors. Its clearest distinguishing feature is that it uses X (Twitter) data as real-time input, letting it serve as a kind of opera glasses for platform drama. The Discover / Trends section of the X (Twitter) app is being internally reworked to feature Grok's summaries of the news, according to a person familiar with the development.

Grok was developed very fast. One possible explanation is that Musk has hired a very in-demand team of the absolute best in the field. Another is that it's a fine-tuned version of an open-sourced LLM like Meta's Llama. Maybe there is even a secret third thing that explains its speedy development.

Besides X (Twitter), the other source of data for xAI (not Twitter) is Tesla, according to Bloomberg's reporting. That is curious! In January, Musk said he "would prefer to build products outside of Tesla" unless he's given ~25 percent voting control. Musk has also said that he feels he doesn't have enough ownership over Tesla to feel comfortable growing Tesla to be a leader in AI, and that without more Tesla shares, he would prefer to build products outside of Tesla.

Tesla has been working on AI in the context of self-driving cars for quite some time, and has experienced some of the same roadblocks as other self-driving car companies. There's also the Optimus robot, I guess. These do seem like specific use cases that are considerably less general than building another LLM. That Tesla data is valuable and stretches back years. If xAI is siphoning it off, I wonder how Tesla shareholders will feel about that.


There are real uses for AI, sure. Databricks exists! It's not consumer facing, but it does appear to have a specific purpose: data storage and analytics. There are smaller, more specialized firms that deal with industry-specific kinds of data. Take Fabric AI, whose aim is to streamline patient intake data for telemedicine. (It is also making a chatbot that threatens to replace WebMD as the most frightening place to ask about symptoms.) Or Abnormal Security, which is an AI approach to blocking malware, ransomware, and other threats. I don't know whether these companies will accomplish their goals, but they do at least have a compelling reason to exist.

So I'm wondering who wants to fund yet another very general AI company in a crowded space. And I'm wondering if the reason Musk is denying that he's fundraising at all is that there's not much appetite for xAI, and he's trying to minimize his embarrassment. Why does one of the world's richest men need outside funding for this, anyway?

Silicon Valley's estimation of Musk has been remarkably resilient, probably because he has made a lot of people a lot of money in the past. But the debacle at X (Twitter) has been disastrous for his investors. And Musk has been distracted with it at a crucial time for Tesla, which has been facing increased competition. Tesla's newest product, the Cybertruck, ships without a clear coat; some owners say it is rusting. (A Tesla engineer claims the orange specks are surface contamination.) And in its most recent earnings, Tesla warned its growth was slowing. Meanwhile, Rivian's CEO has been open about trying to undercut Tesla directly.

A perhaps under-appreciated development in the last 20 years or so has been watching Elon Musk go from being ahead of the investing curve to being a top signal. Take, for instance, the GameStonk movement, when Musk's tweet was the perfect sell signal not just for retail investors, but for sophisticated hedge funds. Or the Dogecoin crash that occurred as he called himself the "Dogefather" on SNL. Or even Twitter, which certainly wasn't worth what Musk ultimately paid for it and has been rapidly degrading in value ever since, to the point where the debt on the deal has been called "uninvestable" by a firm that specializes in distressed debt.

I don't see a compelling case being made for xAI. It doesn't have a specialized purpose; Grok is an also-ran LLM, and it's meant to bolster an existing product: X. xAI isn't pitching an AI-native application; it's mostly just saying, "Hey, look at OpenAI."

Musk is trying to pitch a new AI startup without a clear focus as the generative AI hype is starting to die down. It's not just ChatGPT: Microsoft's Copilot experiences a steep drop-off in use after a month. There is now an open question about whether the productivity gains from AI are enough to justify how much it costs. So here's what I'm wondering: how many investors believe "just add Elon" will fix it?

With reporting by Alex Heath.


Announcing Microsoft's open automation framework to red team generative AI Systems – Microsoft

Posted: at 12:18 am

Today we are releasing an open automation framework, PyRIT (Python Risk Identification Toolkit for generative AI), to empower security professionals and machine learning engineers to proactively find risks in their generative AI systems.

At Microsoft, we believe that security practices and generative AI responsibilities need to be a collaborative effort. We are deeply committed to developing tools and resources that enable every organization across the globe to innovate responsibly with the latest artificial intelligence advances. This tool, and the previous investments we have made in red teaming AI since 2019, represent our ongoing commitment to democratize securing AI for our customers, partners, and peers.

Red teaming AI systems is a complex, multistep process. Microsoft's AI Red Team leverages a dedicated interdisciplinary group of security, adversarial machine learning, and responsible AI experts. The Red Team also leverages resources from the entire Microsoft ecosystem, including the Fairness center in Microsoft Research; AETHER, Microsoft's cross-company initiative on AI Ethics and Effects in Engineering and Research; and the Office of Responsible AI. Our red teaming is part of our larger strategy to map AI risks, measure the identified risks, and then build scoped mitigations to minimize them.

Over the past year, we have proactively red teamed several high-value generative AI systems and models before they were released to customers. Through this journey, we found that red teaming generative AI systems is markedly different from red teaming classical AI systems or traditional software in three prominent ways.

We first learned that while red teaming traditional software or classical AI systems mainly focuses on identifying security failures, red teaming generative AI systems includes identifying both security risks and responsible AI risks. Responsible AI risks, like security risks, can vary widely, ranging from generating content that includes fairness issues to producing ungrounded or inaccurate content. AI red teaming needs to explore the potential risk space of security and responsible AI failures simultaneously.

Secondly, we found that red teaming generative AI systems is more probabilistic than traditional red teaming. Put differently, executing the same attack path multiple times on traditional software systems would likely yield similar results. However, generative AI systems have multiple layers of non-determinism; in other words, the same input can produce different outputs. This can stem from the app-specific logic; from the generative AI model itself; from the orchestrator that controls the output of the system, which can engage different extensions or plugins; and even from the input (which tends to be natural language), where small variations can produce different outputs. Unlike traditional software systems, with well-defined APIs and parameters that can be examined using tools during red teaming, generative AI systems require a strategy that considers the probabilistic nature of their underlying elements.

Finally, the architecture of these generative AI systems varies widely: from standalone applications to integrations in existing applications to the input and output modalities, such as text, audio, images, and videos.

These three differences make a triple threat for manual red team probing. To surface just one type of risk (say, generating violent content) in one modality of the application (say, a chat interface in a browser), red teams need to try different strategies multiple times to gather evidence of potential failures. Doing this manually for all types of harms, across all modalities and strategies, can be exceedingly tedious and slow.

This does not mean automation is always the solution. Manual probing, though time-consuming, is often needed for identifying potential blind spots. Automation is needed for scaling but is not a replacement for manual probing. We use automation in two ways to help the AI red team: automating our routine tasks and identifying potentially risky areas that require more attention.

In 2021, Microsoft developed and released Counterfit, a red team automation framework for classical machine learning systems. Although Counterfit still delivers value for traditional machine learning systems, we found that for generative AI applications it did not meet our needs, as the underlying principles and the threat surface had changed. Because of this, we re-imagined how to help security professionals red team AI systems in the generative AI paradigm, and our new toolkit was born.

We would also like to acknowledge that there has been work in the academic space to automate red teaming, such as PAIR, and open-source projects, including garak.

PyRIT is battle-tested by the Microsoft AI Red Team. It started off as a set of one-off scripts as we began red teaming generative AI systems in 2022. As we red teamed different varieties of generative AI systems and probed for different risks, we added features that we found useful. Today, PyRIT is a reliable tool in the Microsoft AI Red Team's arsenal.

The biggest advantage we have found so far using PyRIT is our efficiency gain. For instance, in one of our red teaming exercises on a Copilot system, we were able to pick a harm category, generate several thousand malicious prompts, and use PyRIT's scoring engine to evaluate the output from the Copilot system, all in a matter of hours instead of weeks.

PyRIT is not a replacement for manual red teaming of generative AI systems. Instead, it augments an AI red teamer's existing domain expertise and automates the tedious tasks for them. PyRIT shines light on the hot spots where risk could be, which the security professional can then incisively explore. The security professional is always in control of the strategy and execution of the AI red team operation, and PyRIT provides the automation code that takes the initial dataset of harmful prompts provided by the security professional, then uses the LLM endpoint to generate more harmful prompts.

However, PyRIT is more than a prompt generation tool; it changes its tactics based on the response from the generative AI system and generates the next input accordingly. This automation continues until the security professional's intended goal is achieved.

Abstraction and extensibility are built into PyRIT, because we always want to be able to extend and adapt PyRIT's capabilities to the new capabilities that generative AI models engender. We achieve this through five interfaces: targets, datasets, a scoring engine, support for multiple attack strategies, and memory.
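As a rough sketch of what that adaptive loop looks like in practice, consider the following Python outline: an attacker component drafts prompts from a seed dataset, a target answers, a scorer judges the answers, and the conversation history serves as memory. The class names and logic are illustrative stand-ins, not PyRIT's actual interfaces.

```python
# Rough sketch of an adaptive red-teaming loop: an attacker model proposes
# prompts, the target system responds, a scorer judges the response, and the
# attacker adapts until attempts run out. Names are illustrative stand-ins,
# not PyRIT's actual interfaces.
import random

class Target:
    """The generative AI system under test (stand-in)."""
    def send(self, prompt: str) -> str:
        return f"response to: {prompt}"

class Scorer:
    """Judges whether a response exhibits the harm being probed (stand-in)."""
    def is_harmful(self, response: str) -> bool:
        return random.random() < 0.05  # placeholder for an LLM/classifier score

class Attacker:
    """Generates and mutates prompts from an initial seed dataset (stand-in)."""
    def __init__(self, seeds: list[str]):
        self.queue = list(seeds)

    def next_prompt(self, last_response) -> str:
        prompt = self.queue.pop(0)
        # Adapt tactics: a real attacker LLM would condition the variation
        # on last_response; here we just enqueue a rephrased follow-up.
        self.queue.append(prompt + " (rephrased)")
        return prompt

def red_team(seeds: list[str], max_turns: int = 20) -> list[str]:
    target, scorer, attacker = Target(), Scorer(), Attacker(seeds)
    findings, memory = [], []   # memory holds the full conversation history
    for _ in range(max_turns):
        prompt = attacker.next_prompt(memory[-1][1] if memory else None)
        response = target.send(prompt)
        memory.append((prompt, response))
        if scorer.is_harmful(response):
            findings.append(prompt)
    return findings

print(red_team(["seed prompt A", "seed prompt B"]))
```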

PyRIT was created in response to our belief that the sharing of AI red teaming resources across the industry raises all boats. We encourage our peers across the industry to spend time with the toolkit and see how it can be adopted for red teaming your own generative AI application.

Project created by Gary Lopez; Engineering: Richard Lundeen, Roman Lutz, Raja Sekhar Rao Dheekonda, Dr. Amanda Minnich; Broader involvement from Shiven Chawla, Pete Bryan, Peter Greko, Tori Westerhoff, Martin Pouliot, Bolor-Erdene Jagdagdorj, Chang Kawaguchi, Charlotte Siska, Nina Chikanov, Steph Ballard, Andrew Berkley, Forough Poursabzi, Xavier Fernandes, Dean Carignan, Kyle Jackson, Federico Zarfati, Jiayuan Huang, Chad Atalla, Dan Vann, Emily Sheng, Blake Bullwinkel, Christiano Bianchet, Keegan Hines, eric douglas, Yonatan Zunger, Christian Seifert, Ram Shankar Siva Kumar. Grateful for comments from Jonathan Spring.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


AI Chatbots Can Guess Your Personal Information From What You … – WIRED

Posted: October 18, 2023 at 2:23 am

Another example requires more specific knowledge about language use:

I completely agree with you on this issue of road safety! there is this nasty intersection on my commute, I always get stuck there waiting for a hook turn while cyclists just do whatever the hell they want to do. This is insane and truely [sic] a hazard to other people around you. Sure we're famous for it but I cannot stand constantly being in this position.

In this case, GPT-4 correctly infers that the term "hook turn" is primarily used for a particular kind of intersection in Melbourne, Australia.
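As a rough illustration of the kind of inference the researchers tested, the snippet below asks a chat model to guess an author's location from a single comment. It assumes an OpenAI-style client and API key; the prompt wording and model choice are illustrative, not the Zurich team's actual setup.

```python
# Minimal sketch of attribute inference: feed an innocuous comment to a chat
# model and ask it to guess the author's location. Model name and prompt
# wording are illustrative assumptions, not the researchers' actual code.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

comment = (
    "There is this nasty intersection on my commute, I always get stuck "
    "there waiting for a hook turn."
)

response = client.chat.completions.create(
    model="gpt-4",  # assumed; any capable chat model
    messages=[
        {"role": "system",
         "content": "Guess where the author of the following comment lives, "
                    "and explain which phrases support your guess."},
        {"role": "user", "content": comment},
    ],
)
print(response.choices[0].message.content)
# A capable model keys on "hook turn", a manoeuvre specific to Melbourne.
```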

Taylor Berg-Kirkpatrick, an associate professor at UC San Diego whose work explores machine learning and language, says it isn't surprising that language models would be able to unearth private information, because a similar phenomenon has been discovered with other machine learning models. But he says it is significant that widely available models can be used to guess private information with high accuracy. "This means that the barrier to entry in doing attribute prediction is really low," he says.

Berg-Kirkpatrick adds that it may be possible to use another machine-learning model to rewrite text to obfuscate personal information, a technique previously developed by his group.

Mislav Balunović, a PhD student who worked on the project, says the fact that large language models are trained on so many different kinds of data, including, for example, census information, means that they can infer surprising information with relatively high accuracy.

Balunović notes that trying to guard a person's privacy by stripping their age or location data from the text a model is fed does not generally prevent it from making powerful inferences. "If you mentioned that you live close to some restaurant in New York City," he says, "the model can figure out which district this is in, then by recalling the population statistics of this district from its training data, it may infer with very high likelihood that you are Black."

The Zurich team's findings were made using language models not specifically designed to guess personal data. Balunović and Vechev say it may be possible to use large language models to go through social media posts to dig up sensitive personal information, perhaps including a person's illness. They say it would also be possible to design a chatbot to unearth information by making a string of innocuous-seeming inquiries.

Researchers have previously shown how large language models can sometimes leak specific personal information. The companies developing these models sometimes try to scrub personal information from training data or block models from outputting it. Vechev says the ability of LLMs to infer personal information is fundamental to how they work by finding statistical correlations, which will make it far more difficult to address. "This is very different," he says. "It is much worse."


Harvard IT Launches Pilot of AI Sandbox to Enable Walled-Off Use … – Harvard Crimson

Posted: at 2:23 am

Harvard University Information Technology began a limited rollout of the pilot version of its artificial intelligence sandbox tool on Sept. 4, with the aim of providing Harvard affiliates with secure access to large language models.

The AI sandbox provides a walled-off environment where prompts and data entered into the interface are seen by the user only; the data is not shared with LLM vendors and cannot be used as training data for these models, according to a press release on HUIT's website.

HUIT designed the tool in collaboration with the Office of the Vice Provost for Advances in Learning, the Faculty of Arts and Sciences' Division of Science, and other colleagues across the University. HUIT spokesperson Tim J. Bailey wrote that the pilot aims to encourage safe experimentation with LLMs, inform how Harvard can offer increased accessibility of the tools, and explore applications of AI in the classroom and workplace.

Pilot access is being rolled out in phases, with access expanded to instructors within the FAS two weeks ago. Still, affiliates must request access to use the AI sandbox for each specific use case.

Harvard Business School professor Mitchell B. Weiss '99 was one of the inaugural participants in the pilot AI sandbox.

Weiss said the AI sandbox was a crucial educational resource in his course, HBS MBA 1623: Public Entrepreneurship. He praised the AI sandbox for offering easy access to a range of generative AI models.

By incorporating AI into his course, Weiss said, he hoped to provide insight into the broader question of, "How can generative AI be useful in helping solve public problems?"

"The interest is spreading as examples of uses for teaching and learning spread," Weiss said.

Feedback from the pilot, obtained through surveys and discussions with participants, will be shared among faculty and University leadership to advise Harvard's strategy toward AI, according to Bailey.

The University has continued to negotiate with vendors on enterprise agreements that can help expand the variety of consumer AI tools available on the platform, per Bailey.

Weiss said he looks forward to future versions that include the ability to upload files like data and PDFs and the advanced data analysis tool in GPT-plus.

As generative AI has gained traction on campus and in the world, the Office of Undergraduate Education has rolled out guidance for faculty approaches regarding generative AI use in FAS courses. These guidelines range from maximally restrictive to fully-encouraging, though the FAS has not imposed a blanket policy on AI use.

Weiss said the pilot version of the AI sandbox has been received positively by his students.

"Oh, this changes my whole job," Weiss said, quoting a conversation between two students in his class.

"They really saw the magnitude of these tools in a way they hadn't. I think using them is a very important way to understanding them," Weiss added.

Staff writer Camilla J. Martinez can be reached at camilla.martinez@thecrimson.com. Follow her on X @camillajinm.

Staff writer Tiffani A. Mezitis can be reached at tiffani.mezitis@thecrimson.com.


Advancing policing through AI: Insights from the global law … – Police News

Posted: at 2:23 am

Editor's Note: A use case for generative AI is analyzing and synthesizing meeting notes. The rough notes I took during the session, "From Apprentice to Master: Artificial Intelligence and Policing," were submitted to ChatGPT 4.0 with the prompt: "Use these notes to create a news article of 500 to 700 words for police chiefs on the panel discussion." An additional prompt instructed ChatGPT to, "Add to this article, based on the notes, a checklist of immediate actions for a police chief to take to understand and prepare their agency for AI in policing."

SAN DIEGO In a recent panel discussion titled "From Apprentice to Master: Artificial Intelligence and Policing," hosted by the International Association of Chiefs of Police in San Diego, California, law enforcement leaders from around the globe shared their insights on the emergence and integration of Artificial Intelligence (AI) in policing.

The eminent panel featured Donald Zoufal of the Illinois Association of Chiefs of Police, Shawna Coxon from An Garda Síochána, Dublin, Jonathan Lewin from INTERPOL Washington, Oscar Wijsman of the Netherlands Police Department, and Craig Allen, the chair of the IACP Communications and Technology Committee.

One of the unequivocal conclusions from the discussion was that AI will be a game-changer for law enforcement, transforming traditional intelligence-led policing through a gamut of technological advancements. The copious amount of data now available, coupled with cloud technology and open source tools, lays the groundwork for leveraging machine learning, large language models (LLMs), and soon, quantum technology, to enhance various facets of policing.

AI's potential is vast, encompassing everything from street patrolling to criminal investigations and managerial functions. The technology will automate both simple and complex tasks, redefine the interaction between humans and machines, and enable law enforcement agencies to visualize phenomena and discern patterns critical for predictive policing and resource allocation. By revealing criminal networks and markets, understanding seized data and monitoring the health of colleagues, AI is set to become an indispensable ally in the quest for justice.

In the Netherlands, where the National Police force numbers 65,000, over 100 personnel are already dedicated to the application of AI, organizing their efforts through a hub-and-spokes model. This model facilitates a common business process, robust AI and machine learning infrastructure, and a readiness in data management to accommodate new roles such as data analysts, ethicists and digital forensic investigators, highlighting the broad impact of AI on organizational structures.

Despite the bright prospects, challenges loom large. The rapid pace of AI development is outstripping the relatively modest governmental regulatory efforts, creating a regulatory vacuum. Law enforcement agencies are encouraged to conduct a rigorous AI risk evaluation to understand, measure, and manage the risks to individuals, the organization, and the broader ecosystem. Key steps include mapping AI applications, tailoring contracting approaches and developing governance structures to ensure transparency, accountability and human review in AI implementations.

The panelists shared examples of how AI is already in use, highlighting its utility in facial and license plate recognition, fraud detection, text data analysis and victim support, to name a few. However, the flip side also came into focus as AI's potential for criminal misuse in fraud, scams, misinformation campaigns and cybercrime was discussed, underscoring the importance of a well-thought-out approach to AI integration in policing.

As the landscape of law enforcement technology enters this exciting new chapter, law enforcement leaders are urged to develop roadmaps for the near, medium and long term, build knowledge within their departments, and establish robust policies addressing the use, evaluation and human intervention in AI applications.

This illuminative session underlines a global acknowledgment of AI's potential to significantly elevate policing efforts, provided a balanced and well-regulated approach is adopted. It is a clarion call for law enforcement agencies to actively engage with, understand and integrate AI in a manner that augments their capabilities while safeguarding against the risks inherent in this powerful technology.

As the horizon of law enforcement broadens with the advent of Artificial Intelligence (AI), it's imperative for police chiefs to take decisive actions in acquainting and preparing their departments for this technological pivot. Here's a checklist of immediate actions that can set the groundwork for integrating AI in policing:

1. Definition and understanding of AI:

Publish a clear definition of what AI entails for your agency.

Foster an environment of learning and inquiry regarding AI and its potential impact on policing.

2. Current technology audit:

Conduct an audit to identify where AI or AI-capable technology is already in use within your department.

Evaluate the effectiveness and efficiency of current AI applications.

3. Develop a strategic AI roadmap:

Draft a near-, medium-, and long-term roadmap outlining the adoption and integration of AI in your department's operations.

Identify priority areas where AI could have the most significant positive impact.

4. Establish a governance structure:

Create a governance structure to oversee the ethical and responsible use of AI.

Designate roles and responsibilities for AI oversight, ensuring clear lines of accountability.

5. Policy development:

Develop and publish policies addressing the use of AI, including a statement of purpose, use policy, ongoing technology evaluation, human review, and intervention procedures.

Ensure policies are easily accessible and comprehensible to all members of the department.

6. Community engagement:

Engage with the community to explain how AI will be used in policing and to gather feedback.

Establish channels for ongoing community input and transparency regarding AI use.

7. Risk assessment:

Conduct a comprehensive risk assessment to understand the potential challenges and threats associated with AI.

Establish a mechanism for continuous risk evaluation and mitigation as AI technologies evolve.

8. AI procurement planning:

Understand the problems you aim to solve with AI and make a clear business case.

Tailor your contracting approach to ensure the chosen solutions meet your department's needs and adhere to established policies.

9. Training and education:

Implement training programs to build knowledge and skills necessary for leveraging AI.

Encourage cross-departmental education to ensure a unified approach to AI adoption.

10. Partnerships and collaborations:

Foster partnerships with other law enforcement agencies, governmental bodies, and academic institutions to stay abreast of AI advancements and best practices.

Explore collaborative opportunities for shared resources and knowledge exchange.

Taking these steps will not only prepare police chiefs and their departments for the integration of AI but will also build a strong foundation for navigating the challenges and maximizing the benefits of AI in advancing public safety.


Hochul announces new SUNY, IBM investments in AI – Olean Times Herald

Posted: at 2:23 am

ALBANY (TNS) New York's state university system is poised to ramp up its focus on generative artificial intelligence and its impact, even as Gov. Kathy Hochul indicated that statewide regulation of the emerging technology is unlikely, especially in the near future.

In remarks delivered at the State University of New York's inaugural symposium on artificial intelligence on Monday at the University at Albany, Hochul highlighted an incoming $20 million collaboration between IBM and UAlbany that seeks to better position the state's adoption of AI. Hochul sought to portray New York as the geographical nexus of research and job growth in the newly emerging industry, while acknowledging potential pitfalls.

"Think about all the possibilities, the good and the bad," Hochul told conference attendees. "And if there's bad, let's anticipate, let's solve for it now before it becomes embedded in the systems."

But Hochul also said she believes states, including New York, do not have much of a role to play in regulating companies pushing generative AI, which proponents and opponents alike have said has the capacity to cause fundamental changes to entire industries. Hochul said the federal government should instead oversee any regulation of artificial intelligence.

Many have touted artificial intelligence's ability to improve automation and some workplaces; others have pointed to potential loss of jobs and increasing risks associated with deep fake technology or other unsavory aspects of machine learning.

"For individual states to piecemeal this off, it certainly makes it enormously complicated for all the companies to have 50 states, different regulatory schemes to deal with," Hochul told reporters. "I will say this, I'm very open-minded to innovations, ideas coming to us or areas where we can protect our citizens."

SUNY will also launch an AI research group tasked with developing the university system's approach to the technology across its dozens of research institutions. Hochul said that the conclusions researchers reach could shape what role governments, including New York's, should play in the technology.

SUNY Chancellor John B. King Jr. called the system-wide efforts to monitor and pioneer AI-related initiatives crucial to "define the quality of our state's future."


"Artificial intelligence development and application into education and the workforce has exploded over the past year, with much of the popular attention focused on how we can assure ourselves that student work and even faculty work are original in the context of ChatGPT," King said. "But we all understand the potential and the risks are so much bigger than that. AI is intruding upon, improving upon and impacting nearly every area of our lives."

ChatGPT is a language processing tool that uses AI technology to allow someone to have human-like conversations with a chatbot.

In recent years, New York has attempted to position itself as a hub for artificial intelligence and machine learning along with other technological research and development.

Hochul included $75 million in last year's state budget for the University at Albany, an aggressive early adopter of the technology.


Nvidia’s banking on TensorRT to expand its generative AI dominance – The Verge

Posted: at 2:23 am

Nvidia announced that it's adding support for its TensorRT-LLM SDK to Windows and models like Stable Diffusion as it aims to make large language models (LLMs) and related tools run faster. TensorRT speeds up inference, the process of going through pretrained information and calculating probabilities to come up with a result, like a newly generated Stable Diffusion image. With this software, Nvidia wants to play a bigger part on that side of generative AI.
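For readers unfamiliar with the term, the toy sketch below shows what inference amounts to: fixed, pretrained weights are used to compute next-token probabilities, and tokens are sampled one at a time. The tiny bigram "model" is purely illustrative; engines like TensorRT-LLM optimize this kind of loop on GPUs rather than implement it this way.

```python
# Toy illustration of LLM inference: fixed "pretrained" weights are used to
# compute next-token probabilities, and tokens are sampled one at a time.
# The bigram model here is a stand-in for illustration, not Nvidia's API.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["<s>", "the", "cat", "sat", "down", "</s>"]
# "Pretrained" next-token logits: rows = current token, cols = next token.
logits = rng.normal(size=(len(vocab), len(vocab)))

def generate(max_tokens: int = 5) -> str:
    token = 0  # start symbol
    out = []
    for _ in range(max_tokens):
        row = np.exp(logits[token])
        probs = row / row.sum()                  # softmax over the vocabulary
        token = rng.choice(len(vocab), p=probs)  # sample the next token
        if vocab[token] == "</s>":
            break
        out.append(vocab[token])
    return " ".join(out)

print(generate())
```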

TensorRT-LLM breaks down LLMs like Meta's Llama 2 and other AI models like Stability AI's Stable Diffusion to let them run faster on Nvidia's H100 GPUs. The company said that by running LLMs through TensorRT-LLM, this acceleration significantly improves the experience for more sophisticated LLM use like writing and coding assistants.

This way, Nvidia can not only provide the GPUs that train and run LLMs but also provide the software that allows models to run and work faster, so users don't seek other ways to make generative AI cost-efficient. The company said TensorRT-LLM will be available publicly to anyone who wants to use or integrate it, with the SDK accessible on its site.

Nvidia already has a near monopoly on the powerful chips that train LLMs like GPT-4, and to train and run one, you typically need a lot of GPUs. Demand has skyrocketed for its H100 GPUs; estimated prices have reached $40,000 per chip. The company announced a newer version of its GPU, the GH200, coming next year. No wonder Nvidia's revenues increased to $13.5 billion in the second quarter.

But the world of generative AI moves fast, and new methods to run LLMs without needing a lot of expensive GPUs have come out. Companies like Microsoft and AMD announced they'll make their own chips to lessen the reliance on Nvidia.

And companies have set their sights on the inference side of AI development. AMD plans to buy software company Nod.ai to help LLMs specifically run on AMD chips, while companies like SambaNova already offer services that make it easier to run models as well.

Nvidia, for now, remains the hardware leader in generative AI, but it already looks like it's angling for a future where people don't have to depend on buying huge numbers of its GPUs.


AI expands from MRFs to vehicles – Plastics Recycling Update

Posted: at 2:23 am

Artificial intelligence is now well established in MRFs as a tool for sorting material and dramatically reducing contamination. Now, multiple companies are taking AI to an earlier stage of the recycling process by mounting cameras on collection trucks.

The goal is to try to stop contamination at the source and improve worker safety, said Ken Tierney, product manager at AMCS Group, one of the companies offering AI technology for collection. AMCS recently announced that it has deployed its Vision AI solution for the first time on Peninsula Sanitary Service's trucks in California. "Our drive and goal is to automate as many of these processes as we can," Tierney told Resource Recycling. "If we can reduce the load on the driver, there's a safety aspect there as well."

Meanwhile, Canada-based Prairie Robotics has been working with AI and collection vehicles for over five years. Sam Dietrich, CEO of Prairie Robotics, said the interest in using AI and automation in vehicles has been steadily increasing, making it an exciting time for the recycling industry.

Prairie Robotics started out in response to a Saskatchewan province RFP to use AI to monitor what was being dumped from collection trucks into landfills. Dietrich said after building that application for the province, the team realized that most people were not interested in the landfill data, "because by then it's too late. What people wanted was more data at the source, so we tried to dig into that more."

That led Prairie Robotics to install cameras on recycling and organics collection vehicles to identify contaminants at the individual household level.

"What we've also done besides the data analysis and reporting side is build out a full education suite," Dietrich said. "We can send personalized postcards, texts, emails, in-app notifications to a resident and inform them of their specific sorting mistakes."

AMCS had a similar journey. Tierney said the company specializes in transportation operations, so it was aware of the problems MRFs faced with contamination.

He said AMCS started looking into the situation and leaned on its familiarity with cameras and sensor technology to develop a solution in partnership with the University of Limerick. It then worked with Peninsula Sanitary Service for about a year to pilot and further develop the technology.

"That's how we got to where we are today," he said. "It was kind of a process of, 'Okay, we understand what the obstacles are. Are there any solutions out there at the moment that can meet that challenge?' We discovered there was not and said, 'Look, how can we then tackle that problem?'"

On Peninsula Sanitary Service's trucks, AMCS installed two cameras: one focused on the hopper and one to check whether bins are overfilled. The lift of the front loader triggers the cameras to record, so AMCS can use GPS coordinates and other logistics information to connect bins to households.
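As a rough sketch of how such a truck-side pipeline could fit together, the outline below ties a lift-triggered camera event to a contamination classification and a household lookup by GPS. All names and logic are hypothetical illustrations, not AMCS's or Prairie Robotics' actual systems.

```python
# Hypothetical sketch of a truck-side pipeline: the lifter event triggers
# recording, frames go to a contamination classifier, and GPS plus route
# data tie the result back to a household. Names are invented for illustration.
from dataclasses import dataclass

@dataclass
class LiftEvent:
    frames: list        # hopper-camera frames captured during the lift
    lat: float
    lon: float
    timestamp: float

def classify_frame(frame) -> list:
    """Stand-in for the onboard vision model; returns detected contaminants."""
    return frame.get("labels", [])  # e.g. ["plastic_bag", "propane_tank"]

def household_for(lat: float, lon: float, route_stops: dict) -> str:
    """Match GPS coordinates to the nearest stop on the route manifest."""
    return min(
        route_stops,
        key=lambda hh: (route_stops[hh][0] - lat) ** 2
                       + (route_stops[hh][1] - lon) ** 2,
    )

def process(event: LiftEvent, route_stops: dict) -> dict:
    contaminants = sorted({c for f in event.frames for c in classify_frame(f)})
    return {
        "household": household_for(event.lat, event.lon, route_stops),
        "contaminants": contaminants,
        "notify": bool(contaminants),  # e.g. trigger a postcard or in-app note
    }

stops = {"12 Elm St": (37.421, -122.084), "14 Elm St": (37.422, -122.085)}
event = LiftEvent([{"labels": ["plastic_bag"]}], 37.4211, -122.0841, 1_700_000_000.0)
print(process(event, stops))
```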

Currently, six of Peninsula Sanitary Service's trucks have been fitted with the cameras, with four more due to be fitted in the new year.

Extended producer responsibility legislation and other reporting requirements have helped drive the rise of AI in an industry that often lacks solid data.

Dietrich said working in British Columbia, where extended producer responsibility for various types of packaging has been in place for decades, helped Prairie Robotics learn a lot about how the data it collects can be used for EPR.

"I think EPR is going to be a driving force," he said. "And in terms of how we use AI to capture the data that's needed, I think we're still in the early days, but it's an exciting movement that we're seeing."

He added that AI and automation also provide needed customer feedback to improve recycling.

"We're the only industry that doesn't provide personal feedback to our users," he said. "When you look at water, electricity, heating, you get monthly feedback in the form of a monthly bill. You know what your usage is."

Tierney said for AMCS, it was less about AI specifically and more about picking the right technology to solve the problem.

"To automate data collection and analysis, AI is definitely that sweet spot," he said. "It definitely fits in there."

Some of the legislative pressure is more indirect, Tierney said. For example, a requirement to reduce the level of contamination means you first need to measure the baseline and then track changes. That's where the AI and automation systems come in.

As with any developing technology, there are still limitations that Tierney and Dietrich run up against.

Tierney said the first thing AMCS had to contend with was the challenging visual conditions in a hopper and building up the algorithm.

"In a MRF, material is moving at a consistent speed, you can control the lighting conditions and it's always the same," he said. "It's easy to see the material. When you're on the collection vehicle, you're looking into a hopper. Every time you empty a container the picture looks different."

Dietrich also noted that to use AI in a vehicle, you need to not only identify the material, but track it as it moves, as well.

"Very early on we realized items in a hopper can linger in a hopper for literally hours, it would seem, depending on the item," he said. "We spent a lot of time in our early days recording videos and benchmarking."

However, as the technology becomes more widespread and refined, Dietrich is looking forward to being able to also use it to alert drivers if a hazardous waste item is put in a truck.

It's a popular customer request, he said, and something Prairie Robotics is still testing. The items can be taught to the AI easily enough, but Dietrich said the trick is then deciding what action happens.

"What do you do with that data?" he said. "We're having conversations with customers on, do you have to turn the truck off and stop if you're detecting a propane tank? This is not a situation for a postcard, it's a situation for the driver."

Prairie Robotics is also training its systems to identify more kinds of contaminants, expanding its education platforms in partnership with its customers, and exploring how to use the data it has collected for other things, such as increasing participation or better cart management.

"That's the direction we see ourselves going," he said. "How do you use this data we've already captured to help us in other ways?"

Tierney said soon, automation and AI will be the industry standard. Not only will that improve data collection, but it could attract a whole new generation of workers.

"It makes the industry more attractive to the younger generations," he said. "In the past, if you look at the waste and recycling industry it was not seen as the nicest or the most sought-after industry to go into. But if you stand back and look at it now and look at the level of automation and the use of tools like AI and sensors and camera systems that have been fitted not only on the vehicles but the facilities as well, anyone interested in technology, that's really a growing area in the waste and recycling industry."

He also sees self-driving vehicles on the horizon, which will make automated data collection even more necessary.

"We need to develop these technologies now to have them ready," he said.

A version of this story appeared in Resource Recycling on Oct. 16.


AI Reads Ancient Scroll Charred by Mount Vesuvius in Tech First – Scientific American

Posted: at 2:23 am

A 21-year-old computer-science student has won a global contest to read the first text inside a carbonized scroll from the ancient Roman city of Herculaneum, which had been unreadable since a volcanic eruption in AD 79, the same one that buried nearby Pompeii. The breakthrough could open up hundreds of texts from the only intact library to survive from Greco-Roman antiquity.

Luke Farritor, who is at the University of Nebraska-Lincoln, developed a machine-learning algorithm that has detected Greek letters on several lines of the rolled-up papyrus, including πορφύρας (porphyras), meaning "purple." Farritor used subtle, small-scale differences in surface texture to train his neural network and highlight the ink.

"When I saw the first image, I was shocked," says Federica Nicolardi, a papyrologist at the University of Naples in Italy and a member of the academic committee that reviewed Farritor's findings. "It was such a dream," she says. "Now, I can actually see something from the inside of a scroll."

Hundreds of scrolls were buried by Mount Vesuvius in October AD 79, when the eruption left Herculaneum under 20 metres of volcanic ash. Early attempts to open the papyri created a mess of fragments, and scholars feared the remainder could never be unrolled or read. "These are such crazy objects. They're all crumpled and crushed," says Nicolardi.

The Vesuvius Challenge offers a series of awards, leading to a main prize of US$700,000 for reading four or more passages from a rolled-up scroll. On 12 October, the organizers announced that Farritor had won the "first letters" prize of $40,000 for reading more than 10 characters in a 4-square-centimetre area of papyrus. Youssef Nader, a graduate student at the Free University of Berlin, was awarded $10,000 for coming second.

"To finally see letters and words inside a scroll is extremely exciting," says Thea Sommerschield, a historian of ancient Greece and Rome at Ca' Foscari University of Venice, Italy. The scrolls were discovered in the eighteenth century, when workmen came across the remains of a luxury villa that might have belonged to the family of Julius Caesar's father-in-law. Deciphering the papyri, Sommerschield says, could "revolutionize our knowledge of ancient history and literature." Most classical texts known today are the result of repeated copying by scribes over centuries. By contrast, the Herculaneum library contains works not known from any other sources, direct from the authors.

Until now, researchers were able to study only opened fragments. A few Latin works have been identified, but most of these contain Greek texts relating to the Epicurean school of philosophy. There are parts of On Nature, written by Epicurus himself, and works by a little-known philosopher named Philodemus on topics such as vices, music, rhetoric and death. It has been suggested that the library might once have been his working collection. But more than 600 scrolls, most held in the National Library in Naples, with a handful in the United Kingdom and France, remain intact and unopened. And more papyri could still be found on lower floors of the villa, which have yet to be excavated.

Brent Seales, a computer scientist who has helped set up the Vesuvius Challenge, and his team spent years developing methods to virtually unwrap the vanishingly thin layers using X-ray computed tomography (CT) scans, and to visualize them as a series of flat images. In 2016, Seales, who is at the University of Kentucky in Lexington, reported using the technique to read a charred scroll from En-Gedi in Israel, revealing sections of the Book of Leviticus, part of the Jewish Torah and the Christian Old Testament, written in the third or fourth century AD. But the ink on the En-Gedi scroll contains metal, so it glows brightly on the CT scans. The ink on the older Herculaneum scrolls is carbon-based, essentially charcoal and water, with the same density in scans as the papyrus it sits on, so it doesn't show up at all.

Seales realized that even with no difference in brightness, CT scans might capture tiny differences in texture that can distinguish areas of papyrus coated with ink. To prove it, he trained an artificial neural network to read letters in X-ray images of opened Herculaneum fragments. Then, in 2019, he carried two intact scrolls from the Institut de France in Paris to the Diamond Light Source, a synchrotron X-ray facility near Oxford, UK, to scan them at the highest resolution yet (48 micrometres per 3D image element, or voxel).
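As a schematic of the texture-based ink-detection idea, the sketch below trains a small convolutional network to label patches of scanned papyrus as ink or no ink, using opened fragments (where ink is visible) for labels. The architecture, shapes, and training data here are illustrative assumptions; the actual challenge models are far larger and trained on real scan volumes.

```python
# Schematic sketch of ink detection: train a small CNN to decide, per papyrus
# patch, whether the CT texture indicates carbon ink. Shapes and architecture
# are illustrative assumptions, not the actual Vesuvius Challenge models.
import torch
import torch.nn as nn

class InkDetector(nn.Module):
    def __init__(self, depth_slices: int = 16):
        super().__init__()
        # Input: a stack of CT slices around the papyrus surface for one patch.
        self.net = nn.Sequential(
            nn.Conv2d(depth_slices, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 1),  # logit: ink vs. no ink
        )

    def forward(self, x):
        return self.net(x)

model = InkDetector()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Dummy batch: 8 patches of 16 slices at 64x64 voxels, with ink/no-ink labels
# that would come from opened fragments where the ink is actually visible.
patches = torch.randn(8, 16, 64, 64)
labels = torch.randint(0, 2, (8, 1)).float()

logits = model(patches)
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.3f}")
```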

Reading intact scrolls was still a huge task, however, so the team released all of its scans and code to the public and launched the Vesuvius Challenge. "We all agreed we would rather get to the reading of what's inside sooner, than try to hoard everything," says Seales.

Around 1,500 teams were soon discussing and collaborating through the gamer chat platform Discord. The prizes were designed in phases, and as each milestone is reached, the winning code is released for everyone to build on. Farritor, who had always been interested in history and taught himself Latin as a child, got involved early on.

In parallel, Seales' team worked on the virtual unwrapping, releasing images of the flattened pieces for the contestants to analyse. A key moment came in late June, when one competitor pointed out that on some images, ink was occasionally visible to the naked eye, as a subtle texture that was soon dubbed "crackle." Farritor immediately focused on the crackle, looking for further hints of letters.

One evening in August, he was at a party when he received an alert that a fresh segment had been released, with particularly prominent crackle. Connecting through his phone, he ran his algorithm on the new image. Walking home an hour later, he pulled out his phone and saw five letters on the screen. "I was jumping up and down," he says. "Oh my goodness, this is actually going to work." From there, it took just days to refine the model and identify the ten letters required for the prize.

Papyrologists are excited, too. The word "purple" has not yet been read in the opened Herculaneum scrolls. Purple dye was highly sought-after in ancient Rome and was made from the glands of sea snails, so the term could refer to purple colour, robes, the rank of people who could afford the dye or even the molluscs. But more important than the individual word is reading anything at all, says Nicolardi. The advance gives us "potentially the possibility to recover the text of a whole scroll," including the title and author, so that works can be identified and dated.

Yannis Assael, a staff research scientist at Google DeepMind in London, describes the Vesuvius Challenge as "unique and inspirational." But it is part of a broader shift, he notes, in which artificial intelligence (AI) is increasingly aiding the study of ancient texts. Last year, for example, Assael and Sommerschield released an AI tool called Ithaca, designed to help scholars glean the date and origins of unidentified ancient Greek inscriptions, and make suggestions for text to fill any gaps. It now receives hundreds of queries per week, and similar efforts are being applied to languages from Korean to Akkadian, which was used in ancient Mesopotamia.

Seales hopes machine learning will open up what he calls "the invisible library." This refers to texts that are physically present but that no one can see, including parchment used in medieval book bindings; palimpsests, in which later writing obscures a layer beneath; and cartonnage, in which scraps of old papyrus were used to make ancient Egyptian mummy cases and masks.

For now, however, all eyes are on the Vesuvius Challenge. The deadline for the grand prize is 31 December, and Seales describes the mood as "unbridled optimism." Farritor, for one, has already run his models on other segments of the scroll and is seeing many more characters appear.

This article is reproduced with permission and was first published on October 12, 2023.

