At Morgan State, seeking AI that is both smart and fair – Baltimore Sun

Your application for college or a mortgage loan. Whether you're correctly diagnosed in the doctor's office, make it onto the short list for a job interview or get a shot at parole.

That bias can enter into these often life-altering decisions is nothing new. But today, with artificial intelligence assisting everyone from college admission directors to parole boards, a group of researchers at Morgan State University says the potential for racial, gender and other discrimination is amplified by magnitudes.

"You automate the bias, you multiply and expand the bias," said Gabriella Waters, a director at a Morgan State center seeking to prevent just that. "If you're doing something wrong, it's going to do it in a big way."

Waters directs research and operations for the Baltimore university's Center for Equitable Artificial Intelligence and Machine Learning Systems, or CEAMLS for short. Pronounced "seamless," it indeed brings together specialists from across disciplines ranging from engineering to philosophy with the goal of harnessing the power of artificial intelligence while ensuring it doesn't introduce or spread bias.

AI is a catchall phrase for systems that can process large amounts of data quickly and, mimicking human cognitive functions such as detecting patterns, predict outcomes and recommend decisions.

But therein lie both its benefits and pitfalls: as data points are introduced, so, too, can bias enter in. Facial recognition systems were found more likely to misidentify Black and Asian people, for example, and Amazon dumped a recruiting program that favored male over female applicants.

Bias also cropped up in an algorithm used to assess the relative sickness of patients, and thus the level of treatment they should receive, because it was based on the amount of previous spending on health care, meaning Black people, who are more likely to have lower incomes and less access to care to begin with, were erroneously scored as healthier than they actually were.

Don't blame the machines, though. They can only do what they do with what they're given.

"It's human beings that are the source of the data sets being correlated," Waters said. "Not all of this is intentional. It's just human nature."

Data can obscure "the actual truths," she said. You might find that ice cream sales are high in areas where a lot of shark attacks occur, Waters said, but that, of course, doesn't mean one causes the other.
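
Waters' example is easy to reproduce: two series can correlate strongly because a hidden third factor (here, hot weather) drives both. A minimal Python sketch, with every number invented purely for the demonstration:

import random

random.seed(0)

# A hidden confounder (daily temperature) drives both quantities;
# all figures here are invented for illustration only.
temps = [random.uniform(10, 35) for _ in range(365)]
ice_cream_sales = [50 + 8 * t + random.gauss(0, 20) for t in temps]
shark_attacks = [max(0.0, 0.1 * t - 1 + random.gauss(0, 0.5)) for t in temps]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

# Strong positive correlation, yet neither causes the other:
print(round(pearson(ice_cream_sales, shark_attacks), 2))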

The center at Morgan was created in July 2022 to find ways to address problems that already underlie existing AI systems, and create new technologies that avoid introducing bias.

As a historically Black university that has been boosting its research capacity in recent years, Morgan State is poised to put its own stamp on the AI field, said Kofi Nyarko, who is the CEAMLS director and a professor of electrical and computer engineering.

"Morgan has a unique position here," Nyarko said. "Yes, we have the experts in machine learning that we can pull from the sciences."

"But also we have a mandate. We have a mission that seeks to not only advance the science, but make sure that we advance our community such that they are involved in that process and that advancement."

Morgan State's AI research has been fueled by an influx of public and private funding: by its calculations, nearly $18.5 million over the past three years. Many of the grants come from federal agencies, including the Office of Naval Research, which gave the university $9 million, the National Science Foundation and the National Institutes of Health.

Throughout the state, efforts are underway to catch up with the burgeoning field of AI, tapping into its potential while working to guard against any unintended consequences.

The General Assembly and Democratic Gov. Wes Moore's administration have both been delving into AI, seeking to understand how it can be used to improve state government services and ensure that its applications meet values such as equity, security and privacy.

That was part of the agenda of a Nov. 29 meeting of the General Assembly's Joint Committee on Cybersecurity, Information Technology, and Biotechnology, where some of Moore's newly appointed technology officials briefed state senators and delegates on the use of the rapidly advancing technology in state government.

"It's all moving very fast," said Nishant Shah, who in August was named Moore's senior advisor for responsible AI. "We don't know what we don't know."

Shah said he'll be working to develop a set of AI principles and values that will serve as a "North Star" for procuring AI systems and monitoring them for any possible harm. State tech staff are also doing an inventory of AI already in use (very little, according to a survey that drew limited response this summer) and hoping to increase the knowledge and skills of personnel across the government, he said.

At Morgan, Nyarko said he is heartened by the amount of attention in the state and also federally on getting AI right. The White House, for example, issued an executive order in October on the safe and responsible use of the technology.

"There is a lot of momentum now, which is fantastic," Nyarko said. "Are we there yet? No. Just as the technology evolves, the approach will have to evolve with it, but I think the conversations are happening, which is great."

Nyarko, who leads Morgan's Data Engineering and Predictive Analytics (DEPA) Research Lab, is working on ways to monitor the performance of cloud-based systems and whether they alter depending on variables such as a person's race or ethnicity. He's also working on how to objectively measure the very nebulous concept of fairness: could there be a consensus within the industry, for example, on benchmarks that everyone would use to test their systems' performance?

"Think about going to the grocery store and picking up a package with a nutrition label on it," Nyarko said. "It's really clear when you pick it up you know what you're getting."

"What would that look like for the AI model? Pick up a product and flip it over, so to speak, metaphorically, see what its strengths are, what its weaknesses are, in what areas what groups are impacted one way or the other."
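
One concrete form such a label could take is a per-group report card. The sketch below computes one widely used audit number, the gap in positive-outcome rates between demographic groups (often called demographic parity difference); the data is invented, and this illustrates the general idea rather than the specific benchmarks CEAMLS is building.

# Hypothetical audit: compare a model's approval rates across groups.
# Predictions and group labels are invented for illustration.
predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1]
groups = ["A", "A", "A", "B", "B", "B", "A", "B", "A", "B", "A", "B"]

def positive_rate(group):
    outcomes = [p for p, g in zip(predictions, groups) if g == group]
    return sum(outcomes) / len(outcomes)

rate_a, rate_b = positive_rate("A"), positive_rate("B")

# A gap near zero suggests parity on this one metric; a large gap flags
# the model for closer review. No single number proves fairness.
print(f"group A: {rate_a:.2f}  group B: {rate_b:.2f}  gap: {abs(rate_a - rate_b):.2f}")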

The center's staff and students, ranging from undergrads to post-docs, are working on multiple projects: A child's toy car is parked in one room, awaiting further work to make it self-driving. There are autonomous wheelchairs, being tested at Baltimore/Washington International Thurgood Marshall Airport, where hopefully one day they can be ordered like an Uber.

Waters, who directs the Cognitive and Neurodiversity AI Lab at Morgan, is working on applications to help in diagnosing autism and assist those with autism in developing skills. With much autism research based on a small pool, usually boys and particularly white boys, she is working on using AI to observe and track children of other racial and ethnic groups in their family settings, seeking to tease out cultural differences that may mask symptoms of autism.

She is also working on using augmented reality glasses and AI to develop individualized programs for those with autism. The glasses would put an overlay on the real environment, prompting and rewarding the wearer to be more vocal, for example, or using a cartoon character to point to a location they should go to, such as a bathroom.

Ulysses Muñoz/Baltimore Sun

While the center works on projects that could find their way onto the marketplace, it maintains its focus on providing, as its mission statement puts it, "thought leadership in the application of fair and unbiased technology."

One only has to look at previous technologies that took unexpected turns from their original intent, said J. Phillip Honenberger, who joined the center from Morgan's philosophy and religious studies department. He specializes in the intersection of philosophy and science, and sees the center's work as an opportunity to get ahead of whatever unforeseen implications AI may have for our lives.

"Any socially disruptive technology almost never gets sufficient deliberation and reflection," Honenberger said. "They hit the market and start to affect people's lives before people really have a chance to think about what's happening."

Look at the way social media affected the political space, Honenberger said. No one thought, he said, "We're going to build this thing to connect people with their friends and family, and it's going to change the outcome of elections, it's going to lead to polarization and disinformation and all the other negative effects."

Technology tends to have "a reflection and deliberation deficit," Honenberger said.

But, he said, that doesn't mean innovation should be stifled because it might lead to unintended consequences.

"The solution is to build ethical capacity, build reflective and deliberative capacity," he said, "and that's what we're in the business of doing."


Opinion | A.I. Use by Law Enforcement Must Be Strictly Regulated – The New York Times

One of the most hopeful proposals involving police surveillance emerged recently from a surprising quarter: the federal Office of Management and Budget. The office, which oversees the execution of the president's policies, has recommended sorely needed constraints on the use of artificial intelligence by federal agencies, including law enforcement.

The office's work is commendable, but shortcomings in its proposed guidance to agencies could still leave people vulnerable to harm. Foremost among them is a provision that would allow senior officials to seek waivers by arguing that the constraints would hinder law enforcement. Those law enforcement agencies should instead be required to provide verifiable evidence that A.I. tools they or their vendors use will not cause harm, worsen discrimination or violate people's rights.

As scholars of algorithmic tools, policing and constitutional law, we have witnessed the predictable and preventable harms from law enforcement's use of emerging technologies. These include false arrests and police seizures, including a family held at gunpoint, after people were wrongly accused of crimes because of the irresponsible use of A.I.-driven technologies including facial recognition and automated license plate readers.

Consider the cases of Porcha Woodruff, Michael Oliver and Robert Julian-Borchak Williams. All were arrested between 2019 and 2023 after they were misidentified by facial recognition technology. These arrests had indelible consequences: Ms. Woodruff was eight months pregnant when she was falsely accused of carjacking and robbery; Mr. Williams was arrested in front of his wife and two young daughters as he pulled into his driveway from work. Mr. Oliver lost his job as a result.

All are Black. This should not be a surprise. A 2018 study co-written by one of us (Dr. Buolamwini) found that three commercial facial-analysis programs from major technology companies showed both skin-type and gender biases. The darker the skin, the more often the errors arose. Questions of fairness and bias persist about the use of these sorts of technologies.

Errors happen because law enforcement deploys emerging technologies without transparency or community agreement that they should be used at all, with little or no consideration of the consequences, insufficient training and inadequate guardrails. Often the data sets that drive the technologies are infected with errors and racial bias. Typically, the officers or agencies face no consequences for false arrests, increasing the likelihood they will continue.

The Office of Management and Budget guidance, which is now being finalized after a period of public comment, would apply to law enforcement technologies such as facial recognition, license-plate readers, predictive policing tools, gunshot detection, social media monitoring and more. It sets out criteria for A.I. technologies that, without safeguards, could put people's safety or well-being at risk or violate their rights. If these proposed minimum practices are not met, technologies that fall short would be prohibited after next Aug. 1.

Here are highlights of the proposal: Agencies must be transparent and provide a public inventory of cases in which A.I. was used. The cost and benefit of these technologies must be assessed, a consideration that has been altogether absent. Even if the technology provides real benefits, the risks to individuals, especially in marginalized communities, must be identified and reduced. If the risks are too high, the technology may not be used. The impact of A.I.-driven technologies must be tested in the real world, and be continually monitored. Agencies would have to solicit public comment before using the technologies, including from the affected communities.

The proposed requirements are serious ones. They should have been in place before law enforcement began using these emerging technologies. Given the rapid adoption of these tools, without evidence of equity or efficacy and with insufficient attention to preventing mistakes, we fully anticipate some A.I. technologies will not meet the proposed standards and their use will be banned for noncompliance.

The overall thrust of the federal A.I. initiative is to push for rapid use of untested technologies by law enforcement, an approach that too often fails and causes harm. For that reason, the Office of Management and Budget must play a serious oversight role.

Far and away, the most worrisome elements in the proposal are provisions that create the opportunity for loopholes. For example, the chief A.I. officer of each federal agency could waive proposed protections with nothing more than a justification sent to the Office of Management and Budget. Worse yet, the justification need only claim an "unacceptable impediment to critical agency operations," the sort of claim law enforcement regularly makes to avoid regulation.

This waiver provision has the potential to wipe away all that the proposal promises. No waiver should be permitted without clear proof that it is essential, proof that in our experience law enforcement typically cannot muster. No one person should have the power to issue such a waiver. There must be careful review to ensure that waivers are legitimate. Unless the recommendations are enforced strictly, we will see more surveillance, more people forced into unjustified encounters with law enforcement, and more harm to communities of color. Technologies that are clearly shown to be discriminatory should not be used.

There is also a vague exception for "national security," a phrase frequently used to excuse policing from legal protections for civil rights and against discrimination. "National security" requires a sharper definition to prevent the exemption from being invoked without valid cause or oversight.

Finally, nothing in this proposal applies beyond federal government agencies. The F.B.I., the Transportation Security Administration and other federal agencies are aggressively embracing facial recognition and other biometric technologies that can recognize individuals by their unique physical characteristics. But so are state and local agencies, which do not fall under these guidelines. The federal government regularly offers federal funding as a carrot to win compliance from state and local agencies with federal rules. It should do the same here.

We hope the Office of Management and Budget will set a higher standard at the federal level for law enforcement's use of emerging technologies, a standard that state and local governments should also follow. It would be a shame to make the progress envisioned in this proposal and have it undermined by backdoor exceptions.

Joy Buolamwini is the founder of the Algorithmic Justice League, which seeks to raise awareness about the potential harms of artificial intelligence, and the author of "Unmasking AI: My Mission to Protect What Is Human in a World of Machines." Barry Friedman is a professor at New York University's School of Law and the faculty director of its Policing Project. He is the author of "Unwarranted: Policing Without Permission."


UBS boosts AI revenue forecast by 40%, calls industry the ‘tech theme of the decade’ – CNBC

UBS is getting more bullish on the outlook for artificial intelligence and bracing for another prosperous year for the "tech theme of the decade." The firm lifted its revenue forecast for AI by 40%. UBS expects revenues to grow by 15 times, from $28 billion in 2022 to $420 billion by 2027, as companies invest in infrastructure for models and applications. In comparison, it took the smart devices industry more than 10 years for its revenues to grow by 15 times.

"As a result, we believe AI will remain the key theme driving global tech stocks again in 2024 and the rest of the decade," wrote Sundeep Gantori, adding that AI growth could result in consolidation that favors the giants getting bigger and "industry leaders with deep pockets and first-mover advantages."

AI was the trade to invest behind in 2023, boosting chipmaker Nvidia nearly 240% as Wall Street bet on its graphics processing units powering large language models. Other semiconductor stocks, including Advanced Micro Devices, Broadcom and Marvell Technology, also rallied on the theme, with the VanEck Semiconductor ETF (SMH) notching its second-best year on record with a 72% gain.

The world may only be in the early innings of a yearslong AI wave, but UBS views semiconductors and software as the best areas to position in 2024. Both industries should post double-digit profit growth and operating margins exceeding 30%, above the 22% average for the global IT industry. "Semiconductors, while cyclical, are well positioned to benefit from solid near-term demand for AI infrastructure," the firm said. "Meanwhile, software, with broadening AI demand trends from applications and models, is a defensive play, thanks to its strong recurring revenue base."

Within the semiconductor industry, UBS favors logic, memory, capital equipment and foundry names, while companies exposed to office productivity, cloud and models appear best situated in software. CNBC's Michael Bloom contributed reporting.


Intel Hires HPE’s Justin Hotard To Lead Data Center And AI Group – CRN

By becoming the leader of Intel's Data Center and AI Group, former Hewlett Packard Enterprise executive Justin Hotard will take over a business that is fighting competition on multiple fronts, including against AMD in the x86 server CPU market and Nvidia in the AI computing space.

Intel said it has hired Hewlett Packard Enterprise rising star Justin Hotard to lead the company's prized Data Center and AI Group.

The Santa Clara, Calif.-based chipmaker said Wednesday that Hotard will become executive vice president and general manager of the business, effective Feb. 1. He will succeed Sandra Rivera, who moved to lead Intel's Programmable Solutions Group as a new stand-alone business under the company's ownership on Monday.

Hotard was most recently executive vice president and general manager of high-performance computing, AI and labs at HPE, where he was responsible for delivering AI capabilities to customers addressing some of the world's most complex problems through data-intensive workloads, according to the semiconductor giant.

By becoming the leader of Intel's Data Center and AI Group, Hotard will take over a business that is fighting competition on multiple fronts: against AMD in the x86 server CPU market; against Nvidia, AMD and smaller firms in the AI computing space; and against the rise of Arm-based server chips from Ampere Computing, Amazon Web Services and Microsoft Azure.

Just last month, the Data Center and AI Group marked the launch of its fifth-generation Xeon processors, which the company said deliver AI acceleration in every core on top of outperforming AMD's latest EPYC chips around the clock. And the business is also fighting its way to win market share from Nvidia in the AI computing market with not just its Xeon CPUs but also its Gaudi accelerator chips and a differentiated software strategy.

The semiconductor giant is making these moves as part of Intel CEO Pat Gelsinger's grand comeback plan, which seeks to put the company ahead of Asian contract chip manufacturers TSMC and Samsung in advanced chip-making capabilities by 2025 to unlock new momentum.

"Justin is a proven leader with a customer-first mindset and has an impressive track record in driving growth and innovation in the data center and AI," Gelsinger said in a statement.

"Justin is committed to our vision to create world-changing technologies and passionate about the critical role Intel will play in empowering our customers for decades to come," he added.


AI predictions for the new year – POLITICO

Mass General Brigham physicians are envisioning the future of AI in medicine. | Courtesy of Mass General Brigham

Will 2024 be the year that artificial intelligence transforms medicine?

Leaders at one of America's top hospital systems, Mass General Brigham in Boston, might not go that far, but they have high hopes.

Their new year's predictions span departments and specialties, some patient-facing and others for the back office.

Here's what they foresee:

Neurosurgery could see advancements in AI and machine learning, according to Dr. Omar Arnaout, a neurosurgeon at Brigham and Women's Hospital. The tech could better tailor treatment plans to patients, more accurately predict outcomes and add new precision to surgeries.

Radiology's continued integration of AI could revolutionize the accuracy of diagnostics and treatments, said Dr. Manisha Bahl, a physician investigator in Mass General's radiology department. And she sees liquid biopsies taking on more of a role as AI makes it easier to detect biomarkers.

Patient chatbots will likely become more popular, according to Dr. Marc Succi, executive director of Mass General MESH Incubator, a center at the health system that, with Harvard Medical School, looks to create new approaches to health care. That could make triaging more efficient.

Smarter robots could even come to patient care because of AI, according to Randy Trumbower, director of the INSPIRE Lab, affiliated with Mass General Brigham. He and his team are studying semi-autonomous robots that use AI to better care for people with severe spinal cord injuries.

And AI tools themselves could see innovations that make them more appealing for medical use, Dr. Danielle Bitterman, an assistant professor at BWH and a faculty member on the Artificial Intelligence in Medicine program at Mass General Brigham, said. Breakthroughs could make AI systems more efficient and better at quickly incorporating current clinical information for the best patient care across specialties.

Granby, Colo. | Shawn Zeller/POLITICO

This is where we explore the ideas and innovators shaping health care.

Germans are taking more mental health days off work, Die Welt reports. The number hit a record in 2022 and doubled over the previous decade.

Share any thoughts, news, tips and feedback with Carmen Paun at [emailprotected], Daniel Payne at [emailprotected], Ruth Reader at [emailprotected] or Erin Schumaker at [emailprotected].

Send tips securely through SecureDrop, Signal, Telegram or WhatsApp.

Adopting new technology is as much a cultural issue as a technical one, the AMA says. | Anne-Christine Poujoulat/AFP via Getty Images

Health care providers can devise new ways to care for patients with digital tools, but the people building the tech and running hospitals need to be thoughtful about implementation.

All sides of the business must work together to ensure the success and safety of the new tech, including AI-driven tools, according to guidance from the American Medical Association.

Many hurdles standing in the way of digital health models aren't technical but cultural and operational, the doctors' group says.

To advance patient care and leverage technology along the way, the AMA says health care executives should:

Prepare to share more data. With regulators moving to safeguard the exchange of patient data, organizations can prepare to follow the rules even before a partnership forms.

Find common goals early. Once partnerships form, clarifying the purpose, value and concerns early on can improve prospects for successful implementation.

Make sure clinicians are in the loop. Builders of new data systems should keep the needs of doctors and nurses in mind to ensure the updates aid in patient care and don't get in the way.

Keep patients in mind. Patients who can access and use their health data are more engaged in their care.

Schistosomiasis affects at least 250 million people living in places without access to clean, safe drinking water and sanitation. | Marcus Perkins for Merck KGaA

Preschool children infected with schistosomiasis, the second-most widespread tropical disease after malaria, could finally have a treatment.

In mid-December, Europe's drug regulator, the European Medicines Agency, endorsed Merck Europe's Arpraziquantel, the first drug formulated specifically to treat small children who get the disease, caused by a parasitic worm that can remain in the body for many years and cause organ damage.

Some 50 million children, ages 3 months to 6 years and mostly in Africa, could benefit.

The European Medicines Agency's positive scientific opinion will streamline the drug's endorsement by the World Health Organization, which makes it easier for countries where the disease is endemic to register the new formulation for children.

Why it matters: Also known as bilharzia, schistosomiasis affects at least 250 million people living in places without access to clean, safe drinking water and sanitation. It's long been neglected by drugmakers.

The disease disables more than it kills, according to the WHO. In children, schistosomiasis can cause anemia, stunted growth and learning disabilities.

The effects are usually reversible through treatment with praziquantel, a drug developed in the 1970s, which Merck donates through WHO to 45 countries in sub-Saharan Africa.

The company provides up to 250 million tablets of praziquantel a year to treat school-aged children in the region, Johannes Waltz, head of Merck's Schistosomiasis Elimination Program, told Carmen. "Our focus in the treatment is on school-aged children because the effect is the worst and there's hope that there's long-term effect if you treat regularly," he said.

The new formulation will make it easier to treat smaller children. They now receive part of a crushed praziquantel tablet, depending on how much they weigh.

Arpraziquantel is water-soluble. The taste is tolerable for kids, and it withstands hot environments, the European Medicines Agency said.


9 Resources to Make the Most of Generative AI – WIRED

The recent wave of generative artificial intelligence services, from ChatGPT to Midjourney, are designed to be simple to use: The idea is that anyone can produce text or images using natural, non-technical language. There's a low barrier to entry.

That said, there's still a lot to learn about how to get the most out of these tools and about the technology underpinning them, especially if you want to do something truly creative with the help of these tools. Spend some time with the resources we've listed here and you'll quickly become a smarter-than-average AI operator.

From demos of what AI is capable of, to discussions of how it's best implemented, these videos, podcasts, newsletters, and blogs are well worth bookmarking if you're keen to invest in the generative AI revolution happening around us.

Inside My Head

Some of the best resources out there when it comes to generative AI are Substacks, and Inside My Head is a case in point. Run by technologist Linus Ekenstam, it features a host of useful AI-related material, covering tutorials on getting the optimum results from these tools and crafting the smartest prompts.

There's also news on the latest happenings in the world of AI, pointers on different apps that can be of help to you, and promises of much more to come, including an AI training course. Some posts are free to read, while others require a $10/month subscription.

Inside My Head on Substack

Towards AI

Towards AI is a one-stop online shop for all your generative AI needs: it includes news and opinion, tutorials, a busy online community, and more, with artificial intelligence and the latest developments serving as the thread running through everything.

The site covers tools to help you get more out of AI, offers interviews with engineers working in the field, and of course has the obligatory email newsletter you can sign up to. There are also stories on some interesting applications of AI that you might not have thought about before.

Towards AI on Medium

The AI Podcast

The AI Podcast from Nvidia drops episodes every fortnight and covers every aspect of artificial intelligence, including generative AI. It covers the impact of the technology on gaming, science, sports, language, hardware, and more.

Each week there's a special guest or two from a different organization in the field of AI, and it's an engaging and thought-provoking resource for expanding your AI knowledge and figuring out where these various innovations might be going next.


The next arms race: China leverages AI for edge in future wars – The Japan Times

The U.S. has enjoyed superiority in military technology since the end of the Cold War. But this edge is being rapidly eroded by its main rival, China, which seems determined to become a global leader in technologies such as artificial intelligence and machine learning (AI/ML) that could potentially revolutionize warfare.

As Beijing focuses on a defense strategy for what it calls the "new era," the aim is to integrate these innovations into the People's Liberation Army, creating a world-class force that offsets U.S. conventional military supremacy in the Indo-Pacific and tilts the balance of power.

How important AI has become for China's national security and military ambitions was highlighted by President Xi Jinping during the 20th Party Congress last October, where he emphasized Beijing's commitment to AI development and "intelligent warfare," a reference to AI-enabled military systems.



Elon Musk Launches X.AI To Fight ChatGPT Woke AI, Says Twitter Is Breakeven – Forbes

Wargo/Getty Images for TIME

X is for everything in the world of tech billionaire Elon Musk. It's the name of his child with pop star Grimes. It was the name of his startup X.com, which later became PayPal. It's the corporate name of Twitter as disclosed in court documents last week. And it's the name of his new company X.AI, for which he has been recruiting AI engineers from competitors and possibly buying thousands of GPUs.

Here's what is known about X.AI so far:

Musk, who co-founded ChatGPT-maker OpenAI along with Y Combinator CEO Sam Altman and PayPal alums LinkedIn cofounder Reid Hoffman and Palantir cofounder Peter Thiel in 2015, resigned his board seat in 2018, citing potential conflicts of interest as Tesla's CEO in the development of the car company's self-driving features, according to The Verge.

Since the Nov. 30 launch of ChatGPT going viral, Musk has sparred with Altman over censorship of ChatGPT's responses to what OpenAI deems to be inappropriate or harmful prompts. A self-proclaimed advocate for free speech, Musk tweeted, "The danger of training AI to be woke - in other words, lie - is deadly."

Last month, Musk advocated for an industry-wide pause on AI development following OpenAI's release of the more advanced GPT-4 and signed a Future of Life Institute petition, which garnered more than 26,000 signatures.

He has since moved ahead with his own AI plans.

In his April 11 Twitter Spaces, Musk confirmed that Twitter is now at less than one-fifth its pre-acquisition size, down from a workforce of just under 8,000 last October to 1,500 today. He said at the time of acquisition, Twitter was tracking to lose over $3 billion a year. "With just $1 billion in the bank, that's only four months of runway," he explained.

He recently valued the company at $20 billion, less than half of what he paid, and said he regretted needing to sell a lot of Tesla stock to close the deal because he knew he overpaid. Although he acknowledged it's been a rough start, he now feels the company has since turned a corner.

"We're roughly break-even at this point and could be cash-flow positive this quarter if things go well." He also said most advertisers have come back. As for legacy verification badges, he said they are being removed next week, after delaying deletion on April Fools' Day. He's pushing hard for paid verification as he fast-tracks pivoting Twitter into an everything app, with payments.

On Apr. 13, eToro announced its partnership with Twitter by tweeting that users should start seeing real-time prices for stocks and crypto with the option to invest.

Whether Twitter integrates GPT models to drive commerce for AI-generated fashion, which Musk is a fan of, or finds a way to use the technology to defeat the spam bots that have been inundating the platform, Musk told listeners to stay tuned.

He said he has no plans to move Twitter out of San Francisco yet and would like to turn one of the Twitter buildings into a homeless shelter once the building owner lets them. He also said he wouldn't sell Twitter if someone offered him $44 billion now, unless it was someone who could keep the platform an immediate source of truth. Musk said the money doesn't matter to him.

According to the Forbes real-time billionaires list, Musk is the second wealthiest person in the world with a net worth of $187.9 billion, next to LVMH CEO Bernard Arnault and family with a net worth of $241.7 billion. Musk was the richest person in the world before he offered to buy Twitter a year ago.

Other top ten billionaires on the Forbes list include Amazon cofounder Jeff Bezos at $125.6 billion, Oracle cofounder Larry Ellison at $120.3 billion, Berkshire Hathaway's Warren Buffett at $113.8 billion, Microsoft cofounder Bill Gates at $110.2 billion, telecom giant Carlos Slim Helú and family at $95.1 billion, Bloomberg Media cofounder Michael Bloomberg at $94.5 billion, Google cofounder Larry Page at $93.5 billion and L'Oréal heir Françoise Bettencourt Meyers and family at $92.5 billion, as of April 14 5pm ET.

Updated with additional comments from the Apr. 11 Twitter Spaces, Musk tweet on ChatGPT training on Twitter data, Twitters latest valuation and details from the Forbes real-time billionaires list.

Tech and trending reporter with bylines in Bloomberg, Businessweek, Fortune, Fast Company, Insider, TechCrunch and TIME; syndicated in leading publications around the world. Fox 5 DC commentator on consumer trends. Winner CES 2020 Media Trailblazer award. Follow on Twitter @contentnow.


These are the tech jobs most threatened by ChatGPT and A.I. – CNBC

As if there weren't already enough layoff fears in the tech industry, add ChatGPT to the list of things workers are worrying about, reflecting the advancement of this artificial intelligence-based chatbot trickling its way into the workplace.

So far this year, the tech industry already has cut 5% more jobs than it did in all of 2022, according to Challenger, Gray & Christmas.

The rate of layoffs is on track to pass the job loss numbers of 2001, the worst year for tech layoffs due to the dot-com bust.

As layoffs continue to mount, workers are not only scared of being laid off, they're scared of being replaced all together. A recent Goldman Sachs report found 300 million jobs around the world stand to be impacted by AI and automation.

But ChatGPT and AI shouldn't ignite fear among employees because these tools will help people and companies work more efficiently, according to Sultan Saidov, co-founder and president of Beamery, a global human capital management software-as-a-service company, which has its own GPT, or generative pretrained transformer, called TalentGPT.

"It's already being estimated that 300 million jobs are going to be impacted by AI and automation," Saidov said. "The question is: Does that mean that those people will change jobs or lose their jobs? I think, in many cases, it's going to be changed rather than lose."

ChatGPT is one type of GPT tool that uses learning models to generate human-like responses, and Saidov says GPT technology can help workers do more than just have conversations. Especially in the tech industry, specific jobs stand to be impacted more than others.

Saidov points to creatives in the tech industry, like designers, video game creators, photographers, and those who create digital images, as those whose jobs will likely not be completely eradicated. It will help these roles create more and do their jobs quicker, he said.

"If you look back to the industrial revolution, when you suddenly had automation in farming, did it mean fewer people were going to be doing certain jobs in farming?" Saidov said. "Definitely, because you're not going to need as many people in that area, but it just means the same number of people are going to different jobs."

Just like similar trends in history, creative jobs will be in demand after the widespread inclusion of generative AI and other AI tech in the workplace.

"With video game creators, if the number of games made globally doesn't change year over year, you'll probably need fewer game designers," Saidov said. "But if you can create more as a company, then this technology will just increase the number of games you'll be able to get made."

Due to ChatGPT buzz, many software developers and engineers are apprehensive about their job security, causing some to seek new skills and learn how to engineer generative AI and add these skills to their resume.

"It's unfair to say that GPT will completely eliminate jobs, like developers and engineers," says Sameer Penakalapati, chief executive officer at Ceipal, an AI-driven talent acquisition platform.

But even though these jobs will still exist, their tasks and responsibilities could likely be diminished by GPT and generative AI.

There's an important distinction to be made between GPT specifically and generative AI more broadly when it comes to the job market, according to Penakalapati. GPT is a mathematical or statistical model designed to learn patterns and provide outcomes. But other forms of generative AI can go further, reconstructing different outcomes based on patterns and learnings, and almost mirroring a human brain, he said.

As an example, Penakalapati says if you look at software developers, engineers, and testers, GPT can generate code in a matter of seconds, giving software users and customers exactly what they need without the back and forth of relaying needs, adaptations, and fixes to the development team. GPT can do the job of a coder or tester instantly, rather than the days or weeks it may take a human to generate the same thing, he said.

Generative AI can more broadly impact software engineers, and specifically devops (development and operations) engineers, Penakalapati said, from the development of code to deployment, conducting maintenance, and making updates in software development. In this broader set of tasks, generative AI can mimic what an engineer would do through the development cycle.

While development and engineering roles are quickly adapting to these tools in the workplace, Penakalapati said it'll be impossible for the tools to totally replace humans. More likely we'll see a decrease in the number of developers and engineers needed to create a piece of software.

"Whether it's a piece of code you're writing, whether you're testing how users interact with your software, or whether you're designing software and choosing certain colors from a color palette, you'll always need somebody, a human, to help in the process," Penakalapati said.

While GPT and AI will heavily impact more roles than others, the incorporation of these tools will impact every knowledge worker, commonly referred to as anyone who uses or handles information in their job, according to Michael Chui, a partner at the McKinsey Global Institute.

"These technologies enable the ability to create first drafts very quickly, of all kinds of different things, whether it's writing, generating computer code, creating images, video, and music," Chui said. "You can imagine almost any knowledge worker being able to benefit from this technology and certainly the technology provides speed with these types of capabilities."

A recent study by OpenAI, the creator of ChatGPT, found that roughly 80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of learning models in GPT tech, while roughly 19% of workers might see 50% of their tasks impacted.

Chui said workers today can't remember a time when they didn't have tools like Microsoft Excel or Microsoft Word, so, in some ways, we can predict that workers in the future won't be able to imagine a world of work without AI and GPT tools.

"Even technologies that greatly increased productivity, in the past, didn't necessarily lead to having fewer people doing work," Chui said. "Bottom line is the world will always need more software."


How artificial intelligence is matching drugs to patients – BBC

17 April 2023

Image source, Natalie Lisbona

Dr Talia Cohen Solal, left, is using AI to help her and her team find the best antidepressants for patients

Dr Talia Cohen Solal sits down at a microscope to look closely at human brain cells grown in a petri dish.

"The brain is very subtle, complex and beautiful," she says.

A neuroscientist, Dr Cohen Solal is the co-founder and chief executive of Israeli health-tech firm Genetika+.

Established in 2018, the company says its technology can best match antidepressants to patients, to avoid unwanted side effects, and make sure that the prescribed drug works as well as possible.

"We can characterise the right medication for each patient the first time," adds Dr Cohen Solal.

Genetika+ does this by combining the latest in stem cell technology - the growing of specific human cells - with artificial intelligence (AI) software.

From a patient's blood sample its technicians can generate brain cells. These are then exposed to several antidepressants, and recorded for cellular changes called "biomarkers".

This information, taken with a patient's medical history and genetic data, is then processed by an AI system to determine the best drug for a doctor to prescribe and the dosage.
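
Genetika+ has not published the model itself. As a rough sketch of the general shape of such a system, a scoring function might combine cellular biomarker readings with patient features to rank candidate drugs; every feature, weight and drug name below is invented for illustration.

# Hypothetical sketch: rank candidate antidepressants by a weighted
# score over (biomarker response, genetic risk, prior failed drugs).
# Weights and drug names are invented; this is not Genetika+'s model.
WEIGHTS = {
    "drug_a": (0.9, -0.4, -0.2),
    "drug_b": (0.6, -0.1, -0.5),
    "drug_c": (0.7, -0.6, -0.1),
}

def rank_drugs(biomarker_response, genetic_risk, prior_failures):
    scored = []
    for drug, (w1, w2, w3) in WEIGHTS.items():
        score = (w1 * biomarker_response
                 + w2 * genetic_risk
                 + w3 * prior_failures)
        scored.append((score, drug))
    return [drug for _, drug in sorted(scored, reverse=True)]

# A patient whose cultured brain cells responded strongly in the dish:
print(rank_drugs(biomarker_response=0.8, genetic_risk=0.3, prior_failures=1))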

Although the technology is currently still in the development stage, Tel Aviv-based Genetika+ intends to launch commercially next year.

Image source, Getty Images

The global pharmaceutical sector had revenues of $1.4 trillion in 2021

An example of how AI is increasingly being used in the pharmaceutical sector, the company has secured funding from the European Union's European Research Council and European Innovation Council. Genetika+ is also working with pharmaceutical firms to develop new precision drugs.

"We are in the right time to be able to marry the latest computer technology and biological technology advances," says Dr Cohen Solal.

Dr Sailem, a senior lecturer of biomedical AI and data science at King's College London, says that AI has so far helped with everything "from identifying a potential target gene for treating a certain disease, and discovering a new drug, to improving patient treatment by predicting the best treatment strategy, discovering biomarkers for personalised patient treatment, or even prevention of the disease through early detection of signs for its occurrence".

New Tech Economy is a series exploring how technological innovation is set to shape the new emerging economic landscape.

Yet fellow AI expert Calum Chace says that the take-up of AI across the pharmaceutical sector remains "a slow process".

"Pharma companies are huge, and any significant change in the way they do research and development will affect many people in different divisions," says Mr Chace, who is the author of a number of books about AI.

"Getting all these people to agree to a dramatically new way of doing things is hard, partly because senior people got to where they are by doing things the old way.

"They are familiar with that, and they trust it. And they may fear becoming less valuable to the firm if what they know how to do suddenly becomes less valued."

However, Dr Sailem emphasises that the pharmaceutical sector shouldn't be tempted to race ahead with AI, and should employ strict measures before relying on its predictions.

"An AI model can learn the right answer for the wrong reasons, and it is the researchers' and developers' responsibility to ensure that various measures are employed to avoid biases, especially when trained on patients' data," she says.

Hong Kong-based Insilico Medicine is using AI to accelerate drug discovery.

"Our AI platform is capable of identifying existing drugs that can be re-purposed, designing new drugs for known disease targets, or finding brand new targets and designing brand new molecules," says co-founder and chief executive Alex Zhavoronkov.

Image source, Insilico Medicine

Alex Zhavoronkov says that using AI is helping his firm to develop new drugs more quickly than would otherwise be the case

Its most developed drug, a treatment for a lung condition called idiopathic pulmonary fibrosis, is now being clinically trialled.

Mr Zhavoronkov says it typically takes four years for a new drug to get to that stage, but that thanks to AI, Insilico Medicine achieved it "in under 18 months, for a fraction of the cost".

He adds that the firm has another 31 drugs in various stages of development.

Back in Israel, Dr Cohen Solal says AI can help "solve the mystery" of which drugs work.


Adobe Lightroom AI Feature Tackles a Massive Problem With Photos – CNET

With an update Tuesday to its Lightroom software, Adobe has applied AI technology to one of the most persistent problems of digital photography: multicolored speckles of image noise. It's not always perfect, but it works and sometimes can salvage otherwise terrible photos.

Digital photos taken in dim conditions are often plagued with noise, especially when you need a fast shutter speed to avoid blur with moving subjects. But Adobe trained an artificial intelligence model to clean up photos, adding it as a new feature called denoise.

It's a notable example of how AI can breathe new life into older software and services. Microsoft, Google and other companies have the same idea with improvements planned for tools like searching with Bing, writing with Word and drafting emails with Gmail.

I've been trying Adobe's denoise AI feature in a prerelease version of Lightroom and can confirm it works, in some cases impressively. It rescued portraits by smoothing skin while preserving hair detail in photos I took at dawn with my DSLR at a very high ISO 25,600 sensitivity setting.

A shot of my mom lit only by birthday candle light likewise was significantly improved. I also found it useful on photos of birds, wooden carvings in dim European cathedrals and Comet Neowise in the night sky in 2020. It's particularly useful for improving photos that I'll never be able to reproduce, like a shot of my young son reading an ebook in the dark, lit only by the glow of a phone screen.

It's not perfect. Skin can look plasticky and artificially smooth, especially if you crank up the noise removal slider too far. Sometimes it seemed to inject a sort of motion blur detail. Pairs of thin cables stabilizing San Francisco's Sutro Tower were distorted into wispy streamers.

Based on my early tests, though, I think Lightroom's denoise feature is useful enough to make photographers feel more comfortable shooting at high ISO and to give them more latitude in editing, for example brightening shadowy areas of photos. And the feature is built straight into Lightroom.

"Our overall goal right now is to make it really easy for anyone to edit photos like a pro, so that they can really achieve their creative vision," said Rob Christensen, Adobe's product director for Lightroom. "AI is a true enabler for that."

Lightroom's AI-powered denoise feature was able to cut noise while preserving details in this bird's plumage.

Lightroom isn't the first to embrace AI for noise reduction. Topaz DeNoise and the newer Photo AI from Topaz Labs have attracted a following, for example, among bird photographers who routinely struggle with the high noise that often accompanies high shutter speeds. Photo AI also has AI-based sharpening tools that Adobe's Lightroom and Photoshop lack.

Google, an AI and computational photography leader, uses AI to reduce noise when its Pixel phones use Night Sight to take shots in the dark. And DxO's PureRaw and PhotoLab software have used AI denoising technology since 2020.

Artificial intelligence technology today typically refers to systems that are trained to recognize patterns in complex real-world data. For the denoise tool, Adobe created pairs of millions of photos consisting of a low-noise original and a version with artificial noise added. Although Adobe generated the noise artificially, the company based it on real-world noise profiles from actual cameras, Adobe engineer and fellow Eric Chan said in a blog post.

"With enough examples covering all kinds of subject matter, the model eventually learns to denoise real photos in a natural yet detailed manner," Chan said.

The denoise tool has some limitations. It works only on raw images, though JPEG support is in the works, Christensen said. And it doesn't yet support all cameras, including raw shots from Apple iPhones and Samsung Galaxy phones I tested. My Pixel 7 Pro's raw images worked, though.

Another caveat: The denoise tool creates a new DNG image. That's because it creates new pixel-level detail, Christensen said. It's not a reversible change like most of what you can do with Lightroom's nondestructive editing process.

Most photographers testing the denoise tool prefer to use it early in the editing process, Christensen said. That makes sense to me, since editing choices like boosting brightness in shadowy areas can be limited by noise.

If you prefer Lightroom's earlier tools, they're still available in a "manual noise reduction" section below the new denoise button. The denoise tool is available in Lightroom and Lightroom Classic, where it takes advantage of AI acceleration hardware built into newer processors, but not on the mobile versions for phones and tablets.

The new version of Lightroom adds some other tricks:


Impact of AI on higher education panel event May 3 – Boise State University

The emergence of free, powerful and easy-to-use generative artificial intelligence (AI) has caused significant disruption in higher education as institutions and educators ponder the implications of student access to tools such as DALL-E and ChatGPT. Many argue that this is only the beginning of what will be a significant reshaping of higher education, as sweeping as those that have been affected in the past by computers, the Internet and social media.

A panel of Boise State faculty from across disciplines will lead a discussion of how AI will affect the landscape of higher education from noon-1 p.m. on May 3 in the College of Innovation and Design space on the second floor of Albertsons Library. The Center for Teaching and Learning, College of Innovation and Design, and the AI in Education Task Force invite participants to bring their lunch along with questions, concerns, insights and areas of excitement to the conversation.

Visit the Center for Teaching and Learning event calendar to register. Registration will be limited to the first 65 participants.

Panelists:


European parliament prepares tough measures over use of AI – Financial Times



OpenAI's CEO Says the Age of Giant AI Models Is Already Over – WIRED

The stunning capabilities of ChatGPT, the chatbot from startup OpenAI, have triggered a surge of new interest and investment in artificial intelligence. But late last week, OpenAI's CEO warned that the research strategy that birthed the bot is played out. It's unclear exactly where future advances will come from.


OpenAI has delivered a series of impressive advances in AI that works with language in recent years by taking existing machine-learning algorithms and scaling them up to previously unimagined size. GPT-4, the latest of those projects, was likely trained using trillions of words of text and many thousands of powerful computer chips. The process cost over $100 million.


But the company's CEO, Sam Altman, says further progress will not come from making models bigger. "I think we're at the end of the era where it's going to be these, like, giant, giant models," he told an audience at an event held at MIT late last week. "We'll make them better in other ways."

Altman's declaration suggests an unexpected twist in the race to develop and deploy new AI algorithms. Since OpenAI launched ChatGPT in November, Microsoft has used the underlying technology to add a chatbot to its Bing search engine, and Google has launched a rival chatbot called Bard. Many people have rushed to experiment with using the new breed of chatbot to help with work or personal tasks.

Meanwhile, numerous well-funded startups, including Anthropic, AI21, Cohere, and Character.AI, are throwing enormous resources into building ever larger algorithms in an effort to catch up with OpenAI's technology. The initial version of ChatGPT was based on a slightly upgraded version of GPT-3, but users can now also access a version powered by the more capable GPT-4.

Altman's statement suggests that GPT-4 could be the last major advance to emerge from OpenAI's strategy of making the models bigger and feeding them more data. He did not say what kind of research strategies or techniques might take its place. In the paper describing GPT-4, OpenAI says its estimates suggest diminishing returns on scaling up model size. Altman said there are also physical limits to how many data centers the company can build and how quickly it can build them.

Nick Frosst, a cofounder at Cohere who previously worked on AI at Google, says Altman's feeling that going bigger will not work indefinitely rings true. He, too, believes that progress on transformers, the type of machine learning model at the heart of GPT-4 and its rivals, lies beyond scaling. "There are lots of ways of making transformers way, way better and more useful, and lots of them don't involve adding parameters to the model," he says. Frosst says that new AI model designs, or architectures, and further tuning based on human feedback are promising directions that many researchers are already exploring.

Each version of OpenAI's influential family of language algorithms consists of an artificial neural network, software loosely inspired by the way neurons work together, which is trained to predict the words that should follow a given string of text.

The first of these language models, GPT-2, was announced in 2019. In its largest form, it had 1.5 billion parameters, a measure of the number of adjustable connections between its crude artificial neurons.

At the time, that was extremely large compared to previous systems, thanks in part to OpenAI researchers finding that scaling up made the model more coherent. And the company made GPT-2's successor, GPT-3, announced in 2020, still bigger, with a whopping 175 billion parameters. That system's broad abilities to generate poems, emails, and other text helped convince other companies and research institutions to push their own AI models to similar and even greater size.
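For a concrete sense of what a parameter count measures, here is a toy back-of-the-envelope calculation in Python. It is illustrative only, not OpenAI's architecture: the vocabulary and layer sizes below are invented for the example, and real GPT models stack dozens of far larger layers.

    # Toy illustration of counting parameters (adjustable connections)
    # in a single-layer next-word model. All sizes here are invented;
    # real GPT models are vastly larger.
    vocab_size = 50_000   # distinct tokens the model knows
    embed_dim = 768       # width of each token's vector
    hidden_dim = 3_072    # width of the feed-forward layer

    embedding = vocab_size * embed_dim         # token -> vector table
    feed_forward = 2 * embed_dim * hidden_dim  # in and out projections
    output_layer = embed_dim * vocab_size      # vector -> next-token scores

    total = embedding + feed_forward + output_layer
    print(f"{total:,} parameters")             # about 81 million here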

After ChatGPT debuted in November, meme makers and tech pundits speculated that GPT-4, when it arrived, would be a model of vertigo-inducing size and complexity. Yet when OpenAI finally announced the new artificial intelligence model, the company didn't disclose how big it is, perhaps because size is no longer all that matters. At the MIT event, Altman was asked if training GPT-4 cost $100 million; he replied, "It's more than that."

Although OpenAI is keeping GPT-4's size and inner workings secret, it is likely that some of its intelligence already comes from looking beyond just scale. One possibility is that it used a method called reinforcement learning with human feedback, which was used to enhance ChatGPT. It involves having humans judge the quality of the model's answers to steer it toward providing responses more likely to be judged as high quality.
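As a rough illustration of that idea (a minimal sketch, not OpenAI's actual training code), the loop below nudges a toy "model" toward whichever canned answer a stand-in human rater scores highly:

    import random

    # Minimal sketch of learning from human feedback. The "model" is just
    # a score per canned answer; ratings steer it toward approved replies.
    # Purely illustrative, not OpenAI's implementation.
    answers = ["curt reply", "helpful reply", "rambling reply"]
    scores = {a: 0.0 for a in answers}

    def human_rating(answer):
        # Stand-in for a human judge who prefers the helpful reply.
        return 1.0 if answer == "helpful reply" else -1.0

    for _ in range(100):
        choice = random.choice(answers)                # model proposes a response
        scores[choice] += 0.1 * human_rating(choice)   # feedback steers it

    print(max(scores, key=scores.get))                 # -> "helpful reply"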

The remarkable capabilities of GPT-4 have stunned some experts and sparked debate over the potential for AI to transform the economy but also spread disinformation and eliminate jobs. Some AI experts, tech entrepreneurs including Elon Musk, and scientists recently wrote an open letter calling for a six-month pause on the development of anything more powerful than GPT-4.

At MIT last week, Altman confirmed that his company is not currently developing GPT-5. "An earlier version of the letter claimed OpenAI is training GPT-5 right now," he said. "We are not, and won't for some time."

Read the original post:

OpenAI's CEO Says the Age of Giant AI Models Is Already Over - WIRED

Posted in Ai

Reddit Wants to Get Paid for Helping to Teach Big A.I. Systems – The New York Times

Reddit has long been a hot spot for conversation on the internet. About 57 million people visit the site every day to chat about topics as varied as makeup, video games and pointers for power washing driveways.

In recent years, Reddit's array of chats has also been a free teaching aid for companies like Google, OpenAI and Microsoft. Those companies are using Reddit's conversations in the development of giant artificial intelligence systems that many in Silicon Valley think are on their way to becoming the tech industry's next big thing.

Now Reddit wants to be paid for it. The company said on Tuesday that it planned to begin charging companies for access to its application programming interface, or A.P.I., the method through which outside entities can download and process the social network's vast selection of person-to-person conversations.

"The Reddit corpus of data is really valuable," Steve Huffman, founder and chief executive of Reddit, said in an interview. "But we don't need to give all of that value to some of the largest companies in the world for free."

The move is one of the first significant examples of a social network charging for access to the conversations it hosts for the purpose of developing A.I. systems like ChatGPT, OpenAI's popular program. Those new A.I. systems could one day lead to big businesses, but they aren't likely to help companies like Reddit very much. In fact, they could be used to create competitors: automated duplicates of Reddit's conversations.

Reddit is also acting as it prepares for a possible initial public offering on Wall Street this year. The company, which was founded in 2005, makes most of its money through advertising and e-commerce transactions on its platform. Reddit said it was still ironing out the details of what it would charge for A.P.I. access and would announce prices in the coming weeks.
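For a sense of what A.P.I. access looks like in practice, here is a minimal read-only sketch using Reddit's public JSON endpoints; the subreddit, limit, and user-agent string are arbitrary choices for the example, and the authenticated, high-volume access that A.I. companies rely on is what the new pricing targets.

    import requests

    # Read-only sketch using Reddit's public JSON interface. The subreddit
    # and limit are arbitrary; Reddit asks clients to send a User-Agent.
    resp = requests.get(
        "https://www.reddit.com/r/MachineLearning/top.json",
        params={"limit": 3, "t": "day"},
        headers={"User-Agent": "example-script/0.1"},
        timeout=10,
    )
    resp.raise_for_status()
    for post in resp.json()["data"]["children"]:
        print(post["data"]["title"])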

Reddit's conversation forums have become valuable commodities as large language models, or L.L.M.s, have become an essential part of creating new A.I. technology.

A brave new world. A new crop of chatbots powered by artificial intelligence has ignited a scramble to determine whether the technology could upend the economics of the internet, turning today's powerhouses into has-beens and creating the industry's next giants. Here are the bots to know:

ChatGPT. ChatGPT, the artificial intelligence language model from a research lab, OpenAI, has been making headlines since November for its ability to respond to complex questions, write poetry, generate code, plan vacations and translate languages. GPT-4, the latest version introduced in mid-March, can even respond to images (and ace the Uniform Bar Exam).

Bing. Two months after ChatGPT's debut, Microsoft, OpenAI's primary investor and partner, added a similar chatbot, capable of having open-ended text conversations on virtually any topic, to its Bing internet search engine. But it was the bot's occasionally inaccurate, misleading and weird responses that drew much of the attention after its release.

Ernie. The search giant Baidu unveiled China's first major rival to ChatGPT in March. The debut of Ernie, short for Enhanced Representation through Knowledge Integration, turned out to be a flop after a promised live demonstration of the bot was revealed to have been recorded.

L.L.M.s are essentially sophisticated algorithms developed by companies like Google and OpenAI, which is a close partner of Microsoft. To the algorithms, the Reddit conversations are data, and they are among the vast pool of material being fed into the L.L.M.s to develop them.

The underlying algorithm that helped to build Bard, Google's conversational A.I. service, is partly trained on Reddit data. OpenAI's ChatGPT cites Reddit data as one of the sources of information it has been trained on.

Other companies are also beginning to see value in the conversations and images they host. Shutterstock, the image hosting service, also sold image data to OpenAI to help create DALL-E, the A.I. program that creates vivid graphical imagery with only a text-based prompt required.

Last month, Elon Musk, the owner of Twitter, said he was cracking down on the use of Twitter's A.P.I., which thousands of companies and independent developers use to track the millions of conversations across the network. Though he did not cite L.L.M.s as a reason for the change, the new fees could go well into the tens or even hundreds of thousands of dollars.

To keep improving their models, artificial intelligence makers need two significant things: an enormous amount of computing power and an enormous amount of data. Some of the biggest A.I. developers have plenty of computing power but still look outside their own networks for the data needed to improve their algorithms. That has included sources like Wikipedia, millions of digitized books, academic articles and Reddit.

Representatives from Google, OpenAI and Microsoft did not immediately respond to a request for comment.

Reddit has long had a symbiotic relationship with the search engines of companies like Google and Microsoft. The search engines crawl Reddit's web pages in order to index information and make it available for search results. That crawling, or scraping, isn't always welcomed by every site on the internet. But Reddit has benefited by appearing higher in search results.

The dynamic is different with L.L.M.s: they gobble as much data as they can to create new A.I. systems like the chatbots.

Reddit believes its data is particularly valuable because it is continuously updated. That newness and relevance, Mr. Huffman said, is what large language modeling algorithms need to produce the best results.

"More than any other place on the internet, Reddit is a home for authentic conversation," Mr. Huffman said. "There's a lot of stuff on the site that you'd only ever say in therapy, or A.A., or never at all."

Mr. Huffman said Reddit's A.P.I. would still be free to developers who wanted to build applications that helped people use Reddit. They could use the tools to build a bot that automatically tracks whether users' comments adhere to rules for posting, for instance. Researchers who want to study Reddit data for academic or noncommercial purposes will continue to have free access to it.

Reddit also hopes to incorporate more so-called machine learning into how the site itself operates. It could be used, for instance, to identify the use of A.I.-generated text on Reddit, and add a label that notifies users that the comment came from a bot.

The company also promised to improve the software tools that can be used by moderators (the users who volunteer their time to keep the site's forums operating smoothly) and to improve conversations between users. And third-party bots that help moderators monitor the forums will continue to be supported.

But for the A.I. makers, it's time to pay up.

"Crawling Reddit, generating value and not returning any of that value to our users is something we have a problem with," Mr. Huffman said. "It's a good time for us to tighten things up."

"We think that's fair," he added.

Read more:

Reddit Wants to Get Paid for Helping to Teach Big A.I. Systems - The New York Times

Posted in Ai

Philips Future Health Index shows providers plan to invest in AI – Healthcare Finance News

CHICAGO – The Philips Future Health Index 2023 global report, released here at HIMSS23 today, shows healthcare leaders are focused on addressing staffing shortages and stepping up planned AI investments.

The investments are meant to increase critical decision support and operational efficiency, which will also help tackle staffing shortages.

First and foremost, providers are concerned with staffing shortages and are looking to right-size the issue through AI and machine learning that will help them do more with less, according to Shez Partovi, chief innovation and strategy officer and business leader, Enterprise Informatics, at Philips.

The report also shows virtual care continues to be a key area for patient access.

"The second thing we saw was, coming out of the pandemic, there continues to be a big desire to use virtual care delivery for quality access and cost of care," Partovi said. "The third thing, we're stronger together; individuals signaled to us that they see building partnerships with health system partners and with tech partners as a way of addressing the other two items in improving access to care."

WHY THIS MATTERS

Access to care, and not just in the hospital setting, has been among the themes to emerge from the HIMSS23 Global Health Conference & Exhibition.

Kicking off Monday's Executive Summit, HIMSS President and CEO Hal Wolf told a ballroom of C-suite leaders that healthcare is "inside-out," that is, no longer happening inside the four walls of a hospital.

The Philips report also shows the broadening of access points, with 82% of respondents talking about virtual intensive care, Partovi said. Ambulatory sites such as walk-in clinics are also increasing access.

"People are investing in the broadening of access points," Partovi said.

Somewhat surprising, he said, is that the report shows an increased willingness to partner to improve care. Thirty-four percent of respondents said they are in favor of partnerships and collaboration to improve care. This number climbed to 43% for younger respondents.

Other healthcare IT experts at HIMSS23 have also talked of a new willingness to collaborate and share data.

"There's less of proprietary competition and more of willingness to say, 'How can we do this together?'" said John Halamka, president of Mayo Clinic Platform, during Monday's Executive Summit.

Said Partovi: "It signals the direction we're going in healthcare."

THE LARGER TREND

Royal Philips is a Dutch multinational conglomerate and a health technology company.

The eighth annual Future Health Index 2023 report, "Taking healthcare everywhere," is based on proprietary research among nearly 3,000 healthcare leaders and younger healthcare professionals conducted in 14 countries.

It shows providers plan investments in AI over the next three years with the biggest increase in critical decision support (39% in 2023, up from 24% in 2021). This was a top choice among cardiology (50%) and radiology (48%) leaders.

The percentage of healthcare leaders planning to invest in AI for operational efficiency, including automating documentation, scheduling patients and performing routine tasks, remained steady at 37%.

Twitter: @SusanJMorse
Email the writer: SMorse@himss.org

Zenobia Brown will offer more detail in the HIMSS23 session "Views from the Top: Can Technology and Innovation Advance Behavioral Healthcare?" It is scheduled for Tuesday, April 18, from 10:30 to 11:30 a.m. CT in the South Building, Level 1, room S100 B.

See the article here:

Philips Future Health Index shows providers plan to invest in AI - Healthcare Finance News

Posted in Ai

Military Tech Execs Tell Congress an AI Pause Is ‘Close to Impossible’ – Gizmodo

Military tech executives and experts speaking before the Senate Armed Services Committee Wednesday said growing calls for a pause on new artificial intelligence systems were misguided and seemed close to impossible to enact. The experts, who spoke on behalf of two military tech companies as well as storied defense contractor the Rand Corporation, said Chinese AI makers likely wouldn't adhere to a pause and would instead capitalize on a development lull to usurp the United States' current lead in the international AI race. The world is at an AI inflection point, one expert said, and it's time to step on the gas to terrify our adversaries.


"I think it would be very difficult to broker an international agreement to hit pause on AI development that would actually be verifiable," Rand Corporation President and CEO Jason Matheny said during the Senate hearing Wednesday. Shyam Sankar, the CTO of the Peter Thiel-founded analytics firm Palantir, agreed, saying a pause on AI development in the US could pave the way for China to set the international standards around AI use and development. If that happens, Sankar said, he feared China's recent regulatory guidelines prohibiting AI models from serving up content critical of the government could spread to other countries.

"To the extent those standards become the standards for the world is highly problematic," Sankar said. "A democratic AI is crucial."

Those dramatic warnings come just one month after hundreds of leading AI experts sent a widely read open letter calling for AI labs to impose an immediate six-month pause on training any AI systems more powerful than OpenAI's recently released GPT-4. Before that, human rights organizations had spent years advocating for binding treaties or other measures intended to restrict autonomous weapons development. The experts speaking before the Senate Armed Services Committee agreed it was paramount for the US to implement smart regulations guiding AI's development but warned a full-on pause would do more harm than good to the Department of Defense, which has historically struggled to stay ahead of AI innovations.

Sankar, who spoke critically of the military's relatively cautious approach to adopting new technology, told lawmakers it's currently easier for his company to bring advanced AI tools to the insurance giant AIG than to the Army or Air Force. The Palantir CTO contrasted that sluggish adoption with Ukraine's military, which he said learned to procure new software in just days or weeks in order to fight off invading Russian forces. Palantir CEO Alex Karp has previously said his company offered services to the Ukrainian military.

Unsurprisingly, Sankar said he would like to see the DoD spend even more of its colossal $768 billion budget on tech solutions like those offered by Palantir.

"If we want to effectively deter those that threaten US interests, we must spend at least 5% of our budget on capabilities that will terrify our adversaries," Sankar told the lawmakers.

Others, like Shift5 co-founder and CEO Josh Lospinoso, said the military is missing out on opportunities to use data already being created by its armada of ships, tanks, boats, and planes. That data, Lospinoso said, could be used to train powerful new AI systems that could give the US military an edge and bolster its cybersecurity defenses. Instead, most of it currently evaporates into the ether right away.

"These machines are talking, but the DoD is unable to hear them," Lospinoso said. "America's weapons systems are simply not AI ready."

Maintaining the military's competitive edge may also rely on shoring up data generated by private US tech companies. Matheny spoke critically of open-sourced AI companies and warned that the well-intentioned pursuit of free-flowing information could inadvertently wind up aiding military AI systems in other countries. Similarly, other AI tools believed to be benign by US tech firms could be misused by others. Matheny said AI tools above a certain, unspecified threshold probably should not be allowed to be sold to foreign governments and should have some guardrails put in place before they are released to the public.

In some cases, the experts said, the US military should consider engaging in offensive actions to limit a foreign military's ability to develop superior AI systems. While those offensive actions could look like trade restrictions or sanctions on high-tech equipment, Lospinoso and Matheny said the US could also consider going a step further and poisoning an adversary's data. Intentionally manipulating or corrupting the datasets used to train military AI models could, in theory at least, buy the Pentagon more time to build out its own.


See more here:

Military Tech Execs Tell Congress an AI Pause Is 'Close to Impossible' - Gizmodo

Posted in Ai

Dating an AI? Artificial Intelligence dating app founder predicts the future of AI relationships – Fox News

Replika CEO Eugenia Kuyda, the creator of an AI dating app with millions of users around the world, spoke to Fox News Digital about AI companion bots and the future of human and AI relationships.

It is an industry that she said will truly change people's lives.

"I think it's the next big platform. I think it is going to be bigger than any other platform before that. I think it's going to be basically whatever the iPhone is for you right now."

Kuyda said that the technology still needs time to improve, but she predicted that people around the world will have access to chatbots that accompany them on trips and are intimately aware of their lives within 5 to 10 years.



"[When] we started Replicant," Kuyda said, her vision was building a world "where I can walk to a coffee shop and Replika can walk next to me and I can look at her through my glasses or device. That's the point. Ubiquitous," Kuyda said.

Its a "dream product," Kuyda said, that most people, including herself, would benefit from.

AI companion bots will fill in the space where people "watch TV, play video games, lay on a couch, work out" and complain about life, she explained.



Kuyda said that the idea for her company, which allows users to create, name and even personalize their own AI chatbots with different hairstyles and outfits, came after the death of her friend. As she went back through her text messages, the app developer used her skills to build a chatbot that would allow her to connect with her old friend.

In the process, she realized that she had discovered something significant: a potential for connection. The app has become a hit around the world, gaining over 10 million users, according to Replika's website.

"What we saw there, maybe for the first time," Kuyda said, was that "people were really resonated with the app."

"They were sharing their stories. They were being really vulnerable. They were open about their feelings," she continued.

But while people have different reasons for using Replika and creating an AI companion, Kuyda explained, they all have one thing in common: a desire for companionship. That's exactly what Replika is designed for, Kuyda said.

"Replika helped them with certain aspects of their lives, whether it's going through a period of grief or understanding themselves better, or something as trivial as just improving their self-esteem, or maybe going through some hard times of dealing with their PTSD."

But the most significant possibility of AI companionship will encompass all aspects of life, Kuyda predicted.

Kuyda argued that Replika was providing an important service for people who struggle, especially with loneliness.

"I mean, of course it would be wonderful if everyone had perfect lives and amazing relationships and never needed any support in a form of a therapist or an AI chatbot or anyone else. That would be the ideal situation for us, for people," Kuyda said.

"But unfortunately, we're not in this place. I think the situation is that there's a lot of loneliness in the world and it seems to kind of get worse over time. And so there needs to be solutions to that," she said.


But Kuyda emphasized that the social media model of high engagement and constant advertising is not what she intends for Replika. One way of avoiding that model is by "nudging" users on Replika and preventing them from forming unhealthy attachments to chatbots.

That's because after roughly 50 messages, Kuyda explained, the Replika chat partner becomes "tired" and hints to the user that they should take a break from their conversation.


Kuyda concluded with a hopeful message for the future of AI companion bots.

"I think there's a lot of fear because people are scared of the future and you know what the tech brings," she said.

But Kuyda pointed to stories from happy and fulfilled users as proof that there is hope for a future in which AI can help people feel loved.

"People were bonding, people were creating connections, people were falling in love. People were feeling loved and worthy of love. I think overall that it says something really good about the potential of the technology, but also something really good about people."


"To give someone a product that tells them that they can love someone and they are worthy of love I think this is just tapping into a gigantic void, into a space that's just asking to be filled. For so many people, it's just such a basic need, it's such a good thing that this technology can bring," Kuyda said.

Here is the original post:

Dating an AI? Artificial Intelligence dating app founder predicts the future of AI relationships - Fox News

Posted in Ai

Atlassian taps OpenAI to make its collaboration software smarter – CNBC


Atlassian on Wednesday said it will draw on technology from startup OpenAI to add artificial intelligence features to a slew of the collaboration software company's programs.

Several software companies have been mobilizing to capitalize on interest in a category called generative AI, in which machines react to human input with information informed by loads of previous data, ever since OpenAI's ChatGPT bot went viral last year with its ability to give human-like responses to written commands.

OpenAI's GPT-4 large language model, which has been trained on extensive sources of text from the internet, will help Atlassian's Jira Service Management process employees' tech support inquiries in Slack. For example, an employee could type an inquiry about getting approval to view a file, and the chatbot will make that possible, freeing up service agents for more challenging requests.

In Atlassian's Confluence collaboration program, workers will be able to click on terms they don't recognize in documents and find automatically generated explanations and links to relevant documents. They will also be able to type in questions and receive automated answers based on information stored in documents.
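Features like these typically pair a language model with retrieval over a customer's own documents. Atlassian has not published its implementation, so the sketch below shows only the retrieval half in miniature, with invented documents and a naive word-overlap score where a production system would use learned embeddings.

    # Naive retrieval sketch: pick the stored document most relevant to a
    # question by word overlap. The documents are invented for illustration;
    # real systems rank with learned embeddings, not set intersection.
    docs = {
        "vpn-setup": "how to connect to the corporate vpn from home",
        "expense-policy": "submitting travel expenses and approval limits",
        "oncall-guide": "escalation steps for the weekend on-call rotation",
    }

    def best_match(question: str) -> str:
        words = set(question.lower().replace("?", "").split())
        return max(docs, key=lambda d: len(words & set(docs[d].split())))

    print(best_match("How do I connect to the VPN?"))  # -> "vpn-setup"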

Atlassian has been building its own AI models for several years but only started using OpenAI at the beginning of 2023. Combined with Atlassian's trove of data, these models create results that are unique to individual customers.

"We have a graph of work basically," Scott Farquhar, one of Atlassian's two founders and CEOs, told CNBC in an interview earlier this week. "I reckon we have one of the best ones in the world out there. It spans people doing stuff from design to development to test to deployment to project management to collaborating on stuff, too."

Microsoft, which is one of Atlassian's top rivals, is a large financial backer of OpenAI. Consequently, when GPT-4 responds to user input such as a request for information in a Confluence file, the underlying computing work happens in a cloud service run by Microsoft.

But Farquhar dismissed any concern about aiding a rival, explaining that OpenAI won't be training its models on Atlassian's customer data, so Atlassian won't necessarily be making OpenAI better by giving it business.

The new features will be available under the brand Atlassian Intelligence. Customers can join a waiting list and the company will start inviting people from it over the next few months, a spokesperson said. Corporate users will only see the new features if their employers opt in.

Atlassian employees have been able to use the new Atlassian Intelligence features internally, and they have become popular, especially among those leading teams, said Anu Bharadwaj, Atlassian's president. Bharadwaj said she appreciates the Confluence feature that lets her transform the style of content while writing it, and she finds it helpful when Atlassian Intelligence can identify the common thread across multiple products in development at the same time.

Bharadwaj said Atlassian hasn't figured out how much to charge for Atlassian Intelligence. Nor does she know how much money Atlassian will wind up paying OpenAI for GPT-4, because it isn't clear how heavily Atlassian customers will use the new features.

Farquhar said the data that companies already store in Atlassian will help its use of AI stand out.

"If you start at a company that's been using our Confluence or Jira products for 10 years, the day you start, you have access to all the information that's happened over the last 10 years," he said. That data makes for a knowledgeable "virtual teammate," he said.

In March, Microsoft's GitHub code storage subsidiary said that, thanks to a collaboration with OpenAI, it had started testing AI-generated messages to describe changes known as pull requests. GitHub said it would experiment with letting AI identify pull requests that lack software tests and suggest code for appropriate tests. Atlassian sells Bitbucket software where developers also work on pull requests. But Farquhar said Atlassian did not have any announcements about Bitbucket to discuss.

Duolingo, Morgan Stanley and Stripe are among the many companies in addition to Microsoft that have said they're integrating GPT-4.


Read the original post:

Atlassian taps OpenAI to make its collaboration software smarter - CNBC

Posted in Ai

Will AI ever reach human-level intelligence? We asked 5 experts – The Conversation

Artificial intelligence has changed form in recent years.

What started in the public eye as a burgeoning field with promising (yet largely benign) applications has snowballed into a more than US$100 billion industry in which the heavy hitters (Microsoft, Google and OpenAI, to name a few) seem intent on out-competing one another.

The result has been increasingly sophisticated large language models, often released in haste and without adequate testing and oversight.

These models can do much of what a human can, and in many cases do it better. They can beat us at advanced strategy games, generate incredible art, diagnose cancers and compose music.

There's no doubt AI systems appear to be intelligent to some extent. But could they ever be as intelligent as humans?

There's a term for this: artificial general intelligence (AGI). Although it's a broad concept, for simplicity you can think of AGI as the point at which AI acquires human-like generalised cognitive capabilities. In other words, it's the point where AI can tackle any intellectual task a human can.

AGI isn't here yet; current AI models are held back by a lack of certain human traits, such as true creativity and emotional awareness.

We asked five experts if they think AI will ever reach AGI, and five out of five said yes.

But there are subtle differences in how they approach the question. From their responses, more questions emerge. When might we achieve AGI? Will it go on to surpass humans? And what constitutes intelligence, anyway?

Their detailed responses are available in the full article, linked below.


View post:

Will AI ever reach human-level intelligence? We asked 5 experts - The Conversation

Posted in Ai