AI and You: OpenAI’s Sora Previews Text-to-Video Future, First Ivy League AI Degree – CNET

AI developments are happening pretty fast. If you don't stop and look around once in a while, you could miss them.

Fortunately, I'm looking around for you and what I saw this week is that competition between OpenAI, maker of ChatGPT and Dall-E, and Google continues to heat up in a way that's worth paying attention to.

A week after updating its Bard chatbot and changing the name to Gemini, Google's DeepMind AI subsidiary previewed the next version of its generative AI chatbot. DeepMind told CNET's Lisa Lacy that Gemini 1.5 will be rolled out "slowly" to consumers who sign up for a waitlist; for now, it's available only to developers and enterprise customers.

Gemini 1.5 Pro, Lacy reports, is "as capable as" the Gemini 1.0 Ultra model, which Google announced on Feb. 8. The 1.5 Pro model has a win rate -- the share of benchmarks on which it outperforms another model -- of 87% compared with the 1.0 Pro and 55% against the 1.0 Ultra. So the 1.5 Pro is essentially an upgraded version of the best model available now.
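
If you're curious how a figure like that is computed, here's a rough sketch in Python, assuming a win rate simply counts head-to-head benchmark wins. The benchmark names and scores below are placeholders, not Google's published numbers.

```python
# Rough sketch: win rate as the share of benchmarks on which model A
# outscores model B. Scores are invented placeholders.
def win_rate(scores_a: dict, scores_b: dict) -> float:
    shared = scores_a.keys() & scores_b.keys()
    wins = sum(1 for bench in shared if scores_a[bench] > scores_b[bench])
    return wins / len(shared)

model_a = {"benchmark_1": 0.82, "benchmark_2": 0.74, "benchmark_3": 0.69}
model_b = {"benchmark_1": 0.75, "benchmark_2": 0.71, "benchmark_3": 0.73}

print(f"win rate: {win_rate(model_a, model_b):.0%}")  # 67% in this toy example
```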

Gemini 1.5 Pro can ingest video, images, audio and text to answer questions, added Lacy. Oriol Vinyals, vice president of research at Google DeepMind and co-lead of Gemini, described 1.5 as a "research release" and said the model is "very efficient" thanks to a unique architecture that can answer questions by zeroing in on expert sources in that particular subject rather than seeking the answer from all possible sources.

Meanwhile, OpenAI announced a new text-to-video model called Sora that's capturing a lot of attention because of the photorealistic videos it's able to generate. Sora can "create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions." Following up on a promise it made last week, along with Google and Meta, to watermark AI-generated images and video, OpenAI says it's also creating tools to detect videos created with Sora so they can be identified as AI generated.

Google and Meta have also announced their own gen AI text-to-video creators.

Sora, which means "sky" in Japanese, is also being called experimental, with OpenAI limiting access for now to so-called "red teamers," security experts and researchers who will assess the tool's potential harms or risks. That follows through on promises made as part of President Joe Biden's AI executive order last year, asking developers to submit the results of safety checks on their gen AI chatbots before releasing them publicly. OpenAI said it's also looking to get feedback on Sora from some visual artists, designers and filmmakers.

How do the photorealistic videos look? Pretty realistic. I agree with The New York Times, which described the short demo videos -- "of wooly mammoths trotting through a snowy meadow, a monster gazing at a melting candle and a Tokyo street scene seemingly shot by a camera swooping across the city" -- as "eye popping."

MIT Technology Review, which also got a preview of Sora, said the "tech has pushed the envelope of what's possible with text-to-video generation." Meanwhile, The Washington Post noted Sora could exacerbate an already growing problem with video deepfakes, which have been used to "deceive voters" and scam consumers.

One X commentator summarized it this way: "Oh boy here we go what is real anymore." And OpenAI CEO Sam Altman called the news about its video generation model a "remarkable moment."

You can see the four examples of what Sora can produce on OpenAI's intro site, which notes that the tool is "able to generate complex scenes with multiple characters, specific types of motion, and accurate details of the subject and background. The model understands not only what the user has asked for in the prompt, but also how those things exist in the physical world. The model has a deep understanding of language, enabling it to accurately interpret prompts and generate compelling characters that express vibrant emotions."

But Sora has its weaknesses, which is why OpenAI hasn't yet said whether it will actually be incorporated into its chatbots. Sora "may struggle with accurately simulating the physics of a complex scene and may not understand specific instances of cause and effect. For example, a person might take a bite out of a cookie, but afterward, the cookie may not have a bite mark. The model may also confuse spatial details of a prompt, for example, mixing up left and right."

All of this is to remind us that tech is a tool -- and that it's up to us humans to decide how, when, where and why to use that technology. In case you didn't see it, the trailer for the new Minions movie (Despicable Me 4: Minion Intelligence) makes this point cleverly, with its sendup of gen AI deepfakes and Jon Hamm's voiceover of how "artificial intelligence is changing how we see the world, transforming the way we do business."

"With artificial intelligence," Hamm adds over the minions' laughter, "the future is in good hands."

Here are the other doings in AI worth your attention.

Twenty tech companies, including Adobe, Amazon, Anthropic, ElevenLabs, Google, IBM, Meta, Microsoft, OpenAI, Snap, TikTok and X, agreed at a security conference in Munich that they will voluntarily adopt "reasonable precautions" to guard against AI tools being used to mislead or deceive voters ahead of elections.

"The intentional and undisclosed generation and distribution of Deceptive AI Election content can deceive the public in ways that jeopardize the integrity of electoral processes," the text of the accord says, according to NPR. "We affirm that the protection of electoral integrity and public trust is a shared responsibility and a common good that transcends partisan interests and national borders."

But the agreement is "largely symbolic," the Associated Press reported, noting that "reasonable precautions" is a little vague.

"The companies aren't committing to ban or remove deepfakes," the AP said. "Instead, the accord outlines methods they will use to try to detect and label deceptive AI content when it is created or distributed on their platforms. It notes the companies will share best practices with each other and provide 'swift and proportionate responses' when that content starts to spread."

AI has already been used to try to trick voters in the US and abroad. Days before the New Hampshire presidential primary, fraudsters sent an AI robocall that mimicked President Biden's voice, asking voters not to cast ballots in the primary. That prompted the Federal Communications Commission this month to make AI-generated robocalls illegal. The AP said that "Just days before Slovakia's elections in November, AI-generated audio recordings impersonated a candidate discussing plans to raise beer prices and rig the election. Fact-checkers scrambled to identify them as false as they spread across social media."

"Everybody recognizes that no one tech company, no one government, no one civil society organization is able to deal with the advent of this technology and its possible nefarious use on their own," Nick Clegg, president of global affairs for Meta, told the Associated Press in an interview before the summit.

Over 4 billion people are set to vote in key elections this year in more than 40 countries, including the US, The Hill reported.

If you're concerned about how deepfakes may be used to scam you or your family members -- say, someone calls your grandfather, pretends to be you and asks for money -- Bloomberg reporter Rachel Metz has a good idea. She suggests it may be time for us all to create a "family password," a safe word or phrase shared with our family or personal network, that we can ask for to make sure we're talking to the person we think we're talking to.

"Extortion has never been easier," Metz reports. "The kind of fakery that used to take time, money and technical know-how can now be accomplished quickly and cheaply by nearly anyone."

That's where family passwords come in, since they're "simple and free," Metz said. "Pick a word that you and your family (or another trusted group) can easily remember. Then, if one of those people reaches out in a way that seems a bit odd -- say, they're suddenly asking you to deliver 5,000 gold bars to a P.O. Box in Alaska -- first ask them what the password is."

How do you pick a good password? She offers a few suggestions, including using a word you don't say frequently and that's not likely to come up in casual conversations. Also, "avoid making the password the name of a pet, as those are easily guessable."

Hiring experts have told me it's going to take years to build an AI-educated workforce, considering that gen AI tools like ChatGPT weren't released until late 2022. So it makes sense that learning platforms like Coursera, Udemy, Udacity, Khan Academy and many universities are offering online courses and certificates to upskill today's workers. Now the University of Pennsylvania's School of Engineering and Applied Science said it's the first Ivy League school to offer an undergraduate major in AI.

"The rapid rise of generative AI is transforming virtually every aspect of life: health, energy, transportation, robotics, computer vision, commerce, learning and even national security," Penn said in a Feb. 13 press release. "This produces an urgent need for innovative, leading-edge AI engineers who understand the principles of AI and how to apply them in a responsible and ethical way."

The bachelor of science in AI offers coursework in machine learning, computing algorithms, data analytics and advanced robotics and will have students address questions about "how to align AI with our social values and how to build trustworthy AI systems," Penn professor Zachary Ives said.

"We are training students for jobs that don't yet exist in fields that may be completely new or revolutionized by the time they graduate," added Robert Ghrist, associate dean of undergraduate education in Penn Engineering.

FYI, the cost of an undergraduate education at Penn, which typically spans four years, is over $88,000 per year (including housing and food).

For those not heading to college or who haven't signed up for any of those online AI certificates, their AI upskilling may come courtesy of their current employer. The Boston Consulting Group, for its Feb. 9 report, What GenAI's Top Performers Do Differently, surveyed over 150 senior executives across 10 sectors.

Bottom line: Companies are starting to reexamine existing job descriptions and career trajectories, and to identify the gaps in their workforce, as they consider how gen AI will affect their businesses. They've also started offering gen AI training programs. But these efforts don't lessen the need for today's workers to get up to speed on gen AI and how it may change the way they work -- and the work they do.

In related news, software maker SAP looked at Google search data to see which states in the US were most interested in "AI jobs and AI business adoption."

Unsurprisingly, California ranked first in searches for "open AI jobs" and "machine learning jobs." Washington state came in second place, Vermont in third, Massachusetts in fourth and Maryland in fifth.

California, "home to Silicon Valley and renowned as a global tech hub, shows a significant interest in AI and related fields, with 6.3% of California's businesses saying that they currently utilize AI technologies to produce goods and services and a further 8.4% planning to implement AI in the next six months, a figure that is 85% higher than the national average," the study found.

Virginia, New York, Delaware, Colorado and New Jersey, in that order, rounded out the top 10.

Over the past few months, I've highlighted terms you should know if you want to be knowledgeable about what's happening as it relates to gen AI. So I'm going to take a step back this week and provide this vocabulary review for you, with a link to the source of the definition.

It's worth a few minutes of your time to know these seven terms.

Anthropomorphism: The tendency for people to attribute humanlike qualities or characteristics to an AI chatbot. For example, you may assume it's kind or cruel based on its answers, even though it isn't capable of having emotions, or you may believe the AI is sentient because it's very good at mimicking human language.

Artificial general intelligence (AGI): A description of programs that are as capable as -- or even more capable than -- a human. While full general intelligence is still off in the future, models are growing in sophistication. Some have demonstrated skills across multiple domains ranging from chemistry to psychology, with task performance paralleling human benchmarks.

Generative artificial intelligence (gen AI): Technology that creates content -- including text, images, video and computer code -- by identifying patterns in large quantities of training data and then creating original material that has similar characteristics.

Hallucination: Hallucinations are unexpected and incorrect responses from AI programs. A language model might suddenly bring up fruit salad recipes when you were asking about planting fruit trees. It might also make up scholarly citations, lie about data you ask it to analyze or make up facts about events that aren't in its training data. It's not fully understood why this happens, but hallucinations can arise from sparse data, information gaps and misclassification.

Large language model (LLM): A type of AI model that can generate human-like text and is trained on a broad dataset.

Prompt engineering: This is the act of giving AI an instruction so it has the context it needs to achieve your goal. Prompt engineering is best associated with OpenAI's ChatGPT, describing the tasks users feed into the algorithm. (e.g. "Give me five popular baby names.")

Temperature: In simple terms, model temperature is a parameter that controls how random a language model's output is. A higher temperature means the model takes more risks, giving you a diverse mix of words. On the other hand, a lower temperature makes the model play it safe, sticking to more focused and predictable responses.

Model temperature has a big impact on the quality of the text generated in a bunch of [natural language processing] tasks, like text generation, summarization and translation.

The tricky part is finding the perfect model temperature for a specific task. It's kind of like Goldilocks trying to find the perfect bowl of porridge -- not too hot, not too cold, but just right. The optimal temperature depends on things like how complex the task is and how much creativity you're looking for in the output.
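
To make temperature a bit more concrete, here's a small, self-contained Python sketch of temperature-scaled sampling over next-token scores. The vocabulary and scores are invented for illustration; real models work over vocabularies of tens of thousands of tokens.

```python
import math
import random

# Minimal sketch: temperature-scaled softmax sampling over next-token scores.
# The tokens and logits below are made up for illustration only.
def sample(logits: dict, temperature: float) -> str:
    scaled = {tok: score / temperature for tok, score in logits.items()}
    top = max(scaled.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(s - top) for tok, s in scaled.items()}
    total = sum(exps.values())
    tokens = list(exps)
    weights = [exps[tok] / total for tok in tokens]
    return random.choices(tokens, weights=weights)[0]

next_token_logits = {"porridge": 2.0, "soup": 1.5, "gravel": 0.2}
print(sample(next_token_logits, temperature=0.2))  # almost always "porridge"
print(sample(next_token_logits, temperature=1.5))  # more varied, riskier picks
```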

Editors' note: CNET is using an AI engine to help create some stories. For more, see this post.

Read the original post:

AI and You: OpenAI's Sora Previews Text-to-Video Future, First Ivy League AI Degree - CNET

Posted in Ai

Google launches Gemini Business AI, adds $20 to the $6 Workspace bill – Ars Technica

Google went ahead with plans to launch Gemini for Workspace today. The big news is the pricing information, and you can see the Workspace pricing page is new, with every plan offering a "Gemini add-on." Google's old AI-for-Business plan, "Duet AI for Google Workspace," is dead, though it never really launched anyway.

Google has a blog post explaining the changes. Google Workspace starts at $6 per user per month for the "Starter" package, and the AI "Add-on," as Google is calling it, is an extra $20 monthly cost per user (all of these prices require an annual commitment). That is a massive price increase over the normal Workspace bill, but AI processing is expensive. Google says this business package will get you "Help me write in Docs and Gmail, Enhanced Smart Fill in Sheets and image generation in Slides." It also includes the "1.0 Ultra" model for the Gemini chatbot; there's a full feature list here. This $20 plan is subject to a usage limit for Gemini AI features of "1,000 times per month."
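
For a quick sense of what that add-on does to the bill, here's the back-of-the-envelope math on the reported prices (a sketch of the arithmetic only; check Google's pricing page for current figures).

```python
# Back-of-the-envelope math on the reported Workspace prices.
starter_monthly = 6          # Workspace "Starter", per user per month
gemini_addon_monthly = 20    # Gemini add-on, per user per month

monthly_per_user = starter_monthly + gemini_addon_monthly
annual_per_user = monthly_per_user * 12  # prices assume an annual commitment

print(monthly_per_user)  # 26 -- more than four times the base Workspace bill
print(annual_per_user)   # 312
```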

Google's second plan is "Gemini Enterprise," which doesn't come with any usage limits, but it's also only available through a "contact us" link and not a normal checkout procedure. Enterprise is $30 per user per month, and it "includes additional capabilities for AI-powered meetings, where Gemini can translate closed captions in more than 100 language pairs, and soon even take meeting notes."

More here:

Google launches Gemini Business AI, adds $20 to the $6 Workspace bill - Ars Technica

Posted in Ai

Can AI help us forecast extreme weather? – Vox.com

We've learned how to predict weather over the past century by understanding the science that governs Earth's atmosphere and harnessing enough computing power to generate global forecasts. But in just the past three years, AI models from companies like Google, Huawei, and Nvidia that use historical weather data have been releasing forecasts rivaling those created through traditional forecasting methods.

This video explains the promise and challenges of these new models built on artificial intelligence rather than numerical forecasting, particularly the ability to foresee extreme weather.

You can find this video and all of Vox's videos on YouTube.

This video is sponsored by Microsoft Copilot for Microsoft 365. Microsoft has no editorial influence on our videos, but their support makes videos like these possible.


Read more:

Can AI help us forecast extreme weather? - Vox.com

Posted in Ai

Scale AI to set the Pentagon’s path for testing and evaluating large language models – DefenseScoop

The Pentagon's Chief Digital and Artificial Intelligence Office (CDAO) tapped Scale AI to produce a trustworthy means for testing and evaluating large language models that can support and potentially disrupt military planning and decision-making.

According to a statement the San Francisco-based company shared exclusively with DefenseScoop, the outcomes of this new one-year contract will supply the CDAO with a framework to deploy AI safely by measuring model performance, offering real-time feedback for warfighters, and creating specialized public sector evaluation sets to test AI models for military support applications, such as organizing the findings from after action reports.

Large language models and the overarching field of generative AI include emerging technologies that can generate (convincing but not always accurate) text, software code, images and other media, based on prompts from humans.

This rapidly evolving realm holds a lot of promise for the Department of Defense, but also poses unknown and serious potential challenges. Last year, Pentagon leadership launched Task Force Lima within the CDAO's Algorithmic Warfare Directorate to accelerate its components' grasp, assessment and deployment of generative artificial intelligence.

The department has long leaned on test-and-evaluation (T&E) processes to assess and ensure its systems, platforms and technologies perform in a safe and reliable manner before they are fully fielded. But AI safety standards and policies have not yet been universally set, and the complexities and uncertainties associated with large language models make T&E even more complicated when it comes to generative AI.

Broadly, T&E enables experts to determine the baseline performance of a specific model.

For instance, to test and evaluate a computer vision algorithm that differentiates between images of dogs and cats and things that are not dogs or cats, an official might first train it with millions of different pictures of those types of animals as well as objects that aren't dogs or cats. In doing so, the expert will also hold back a diverse subset of data that can then be presented to the algorithm down the line.

They can then assess that evaluation dataset against the test set, or ground truth, and ultimately determine the failure rate: the cases where the model is unable to determine whether something is or is not one of the classes it's trying to identify.
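
As a rough illustration of that holdout approach, here's a minimal Python sketch: a held-out, labeled set is scored against a model's predictions to get a failure rate. The stand-in model, filenames and labels are invented; nothing here reflects the CDAO program itself.

```python
# Minimal sketch of holdout evaluation: compare predictions on a held-out,
# labeled set against ground truth and report the failure rate.
def failure_rate(model, holdout):
    errors = sum(1 for image, label in holdout if model(image) != label)
    return errors / len(holdout)

def toy_model(image_name):
    # Stand-in for a trained computer-vision classifier.
    return "dog" if "dog" in image_name else "other"

holdout_set = [
    ("dog_001.jpg", "dog"),
    ("cat_014.jpg", "cat"),
    ("truck_090.jpg", "other"),
]

print(f"failure rate: {failure_rate(toy_model, holdout_set):.0%}")  # 33%
```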

Experts at Scale AI will adopt a similar approach for T&E with large language models, but because they are generative in nature and the English language can be hard to evaluate, there isn't that same level of ground truth for these complex systems. For example, if prompted to supply five different responses, an LLM might be generally factually accurate in all five, yet contrasting sentence structures could change the meanings of each output.

So part of the company's effort to develop the framework, methods and technology the CDAO can use to test and evaluate large language models will involve creating holdout datasets, in which DOD insiders write prompt-response pairs and adjudicate them through layers of review, ensuring that each response is as good as would be expected from a human in the military.

The entire process will be iterative in nature.

Once datasets that are germane to the DOD for world knowledge, truthfulness, and other topics are made and refined, the experts can then evaluate existing large language models against them.

Eventually, as they build up these holdout datasets, experts will be able to run evaluations and establish model cards: short documents that supply details on the contexts in which various machine learning models are best used and information for measuring their performance.

Officials plan to automate as much of this development as possible, so that as new models come in, there can be some baseline understanding of how they will perform, where they will perform best, and where they will probably start to fail.

Further in the process, the ultimate intent is for models to essentially send signals to the CDAO officials who engage with them if they start to waver from the domains they have been tested against.

"This work will enable the DOD to mature its T&E policies to address generative AI by measuring and assessing quantitative data via benchmarking and assessing qualitative feedback from users. The evaluation metrics will help identify generative AI models that are ready to support military applications with accurate and relevant results using DoD terminology and knowledge bases. The rigorous T&E process aims to enhance the robustness and resilience of AI systems in classified environments, enabling the adoption of LLM technology in secure environments," Scale AI's statement reads.

Beyond the CDAO, the company has also partnered with Meta, Microsoft, the U.S. Army, the Defense Innovation Unit, OpenAI, General Motors, Toyota Research Institute, Nvidia, and others.

"Testing and evaluating generative AI will help the DoD understand the strengths and limitations of the technology, so it can be deployed responsibly. Scale is honored to partner with the DoD on this framework," Alexandr Wang, Scale AI's founder and CEO, said in the statement.

Continue reading here:

Scale AI to set the Pentagon's path for testing and evaluating large language models - DefenseScoop

Posted in Ai

What is AI governance? – Cointelegraph

The landscape and importance of AI governance

AI governance encompasses the rules, principles and standards that ensure AI technologies are developed and used responsibly.

AI governance is a comprehensive term encompassing the definition, principles, guidelines and policies designed to steer the ethical creation and utilization of artificial intelligence (AI) technologies. This governance framework is crucial for addressing a wide array of concerns and challenges associated with AI, such as ethical decision-making, data privacy, bias in algorithms, and the broader impact of AI on society.

The concept of AI governance extends beyond mere technical aspects to include legal, social and ethical dimensions. It serves as a foundational structure for organizations and governments to ensure that AI systems are developed and deployed in beneficial ways that do not cause unintentional harm.

In essence, AI governance forms the backbone of responsible AI development and usage, providing a set of standards and norms that guide various stakeholders, including AI developers, policymakers and end-users. By establishing clear guidelines and ethical principles, AI governance aims to harmonize the rapid advancements in AI technology with the societal and ethical values of human communities.

AI governance adapts to organizational needs without fixed levels, employing frameworks like NIST and OECD for guidance.

AI governance doesn't follow universally standardized levels, as seen in fields like cybersecurity. Instead, it utilizes structured approaches and frameworks from various entities, allowing organizations to tailor these to their specific requirements.

Frameworks such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework, the Organization for Economic Co-operation and Development (OECD) principles on artificial intelligence, and the European Commission's Ethics Guidelines for Trustworthy AI are among the most utilized. They cover many topics, including transparency, accountability, fairness, privacy, security and safety, providing a solid foundation for governance practices.

The extent of governance adoption varies with the organization's size, the complexity of the AI systems it employs, and the regulatory landscape it operates within. Three main approaches to AI governance are:

The most basic form relies on an organizations core values and principles, with some informal processes in place, such as ethical review boards, but lacking a formal governance structure.

A more structured approach than informal governance involves creating specific policies and procedures in response to particular challenges. However, it may not be comprehensive or systematic.

The most comprehensive approach entails the development of an extensive AI governance framework that reflects the organization's values, aligns with legal requirements and includes detailed risk assessment and ethical oversight processes.

Illustrating AI governance through diverse examples like GDPR, the OECD AI principles and corporate ethics boards showcases the multifaceted approach to responsible AI use.

AI governance manifests through various policies, frameworks and practices aimed at ethically deploying AI technologies through organizations and governments. These instances highlight the application of AI governance across different scenarios:

The General Data Protection Regulation (GDPR) is a pivotal example of AI governance in safeguarding personal data and privacy. Although the GDPR isn't solely AI-focused, its regulations significantly impact AI applications, particularly those processing personal data within the European Union, emphasizing the need for transparency and data protection.

The OECD AI principles, endorsed by over 40 countries, underscore the commitment to trustworthy AI. These principles advocate for AI systems to be transparent, fair and accountable, guiding international efforts toward responsible AI development and usage.

Corporate AI Ethics Boards represent an organizational approach to AI governance. Numerous corporations have instituted ethics boards to supervise AI projects, ensuring they conform to ethical norms and societal expectations. For instance, IBM's AI Ethics Council reviews AI offerings to ensure they comply with the company's AI ethics, involving a diverse team from various disciplines to provide comprehensive oversight.

Stakeholder engagement is essential for developing inclusive, effective AI governance frameworks that reflect a broad spectrum of perspectives.

A wide range of stakeholders, including governmental entities, international organizations, business associations and civil society organizations, share responsibility for AI governance. Because different areas and nations have different legal, cultural and political contexts, their oversight structures can also differ significantly.

The complexity of AI governance requires active participation from all sectors of society, including government, industry, academia and civil society. Engaging a diverse range of stakeholders ensures that multiple perspectives are considered when developing AI governance frameworks, leading to more robust and inclusive policies.

This engagement also fosters a sense of shared responsibility for the ethical development and use of AI technologies. By involving stakeholders in the governance process, policymakers can leverage a wide range of expertise and insights, ensuring that AI governance frameworks are well-informed, adaptable and capable of addressing the multifaceted challenges and opportunities presented by AI.

For instance, the exponential growth of data collection and processing raises significant privacy concerns, necessitating stringent governance frameworks to protect individuals' personal information. This involves compliance with global data protection regulations like GDPR and active participation by stakeholders in implementing advanced data security technologies to prevent unauthorized access and data breaches.

The future of AI governance will be shaped by advancements in technology, evolving societal values and the need for international collaboration.

As AI technologies evolve, so will the frameworks governing them. The future of AI governance is likely to see a greater emphasis on sustainable and human-centered AI practices.

Sustainable AI focuses on developing environmentally friendly and economically viable technologies over the long term. Human-centered AI prioritizes systems that enhance human capabilities and well-being, ensuring that AI serves as a tool for augmenting human potential rather than replacing it.

Moreover, the global nature of AI technologies necessitates international collaboration in AI governance. This involves harmonizing regulatory frameworks across borders, fostering global standards for AI ethics, and ensuring that AI technologies can be safely deployed across different cultural and regulatory environments. Global cooperation is key to addressing challenges, such as cross-border data flow and ensuring that AI benefits are shared equitably worldwide.

Read more here:

What is AI governance? - Cointelegraph

Posted in Ai

HOUSE LAUNCHES BIPARTISAN TASK FORCE ON ARTIFICIAL INTELLIGENCE – Congressman Ted Lieu

WASHINGTON -- Speaker Mike Johnson and Democratic Leader Hakeem Jeffries announced the establishment of a bipartisan Task Force on Artificial Intelligence (AI) to explore how Congress can ensure America continues to lead the world in AI innovation while considering guardrails that may be appropriate to safeguard the nation against current and emerging threats.

Speaker Johnson and Leader Jeffries have each appointed twelve members to the Task Force who represent key committees of jurisdiction. The Task Force will be jointly led by Chair Jay Obernolte (CA-23) and Co-Chair Ted Lieu (CA-36), and will seek to produce a comprehensive report that will include guiding principles, forward-looking recommendations and bipartisan policy proposals developed in consultation with committees of jurisdiction.

"Because advancements in artificial intelligence have the potential to rapidly transform our economy and our society, it is important for Congress to work in a bipartisan manner to understand and plan for both the promises and the complexities of this transformative technology," said Speaker Mike Johnson. "I am happy to announce with Leader Jeffries this new Bipartisan Task Force on Artificial Intelligence to ensure America continues leading in this strategic arena."

Led by Rep. Jay Obernolte (R-Ca.) and Rep. Ted Lieu (D-Ca.), the task force will bring together a bipartisan group of Members who have AI expertise and represent the relevant committees of jurisdiction. As we look to the future, Congress must continue to encourage innovation and maintain our country's competitive edge, protect our national security, and carefully consider what guardrails may be needed to ensure the development of safe and trustworthy technology.

"Congress has a responsibility to facilitate the promising breakthroughs that artificial intelligence can bring to fruition and ensure that everyday Americans benefit from these advancements in an equitable manner," said Democratic Leader Hakeem Jeffries. "That is why I am pleased to join Speaker Johnson in announcing the new Bipartisan Task Force on Artificial Intelligence, led by Rep. Ted Lieu and Rep. Jay Obernolte."

The rise of artificial intelligence also presents a unique set of challenges and certain guardrails must be put in place to protect the American people. Congress needs to work in a bipartisan way to ensure that America continues to lead in this emerging space, while also preventing bad actors from exploiting this evolving technology. The Members appointed to this Task Force bring a wide range of experience and expertise across the committees of jurisdiction and I look forward to working with them to tackle these issues in a bipartisan way.

"It is an honor to be entrusted by Speaker Johnson to serve as Chairman of the House Task Force on Artificial Intelligence," said Chair Jay Obernolte (CA-23). "As new innovations in AI continue to emerge, Congress and our partners in federal government must keep up. House Republicans and Democrats will work together to create a comprehensive report detailing the regulatory standards and congressional actions needed to both protect consumers and foster continued investment and innovation in AI."

The United States has led the world in the development of advanced AI, and we must work to ensure that AI realizes its tremendous potential to improve the lives of people across our country. I look forward to working with Co-Chair Ted Lieu and the rest of the Task Force on this critical bipartisan effort.

"Thank you to Leader Jeffries and Speaker Johnson for establishing this bipartisan House Task Force on Artificial Intelligence. AI has the capability of changing our lives as we know it. The question is how to ensure AI benefits society instead of harming us. As a recovering Computer Science major, I know this will not be an easy or quick or one-time task, but I believe Congress has an essential role to play in the future of AI. I have been heartened to see so many Members of Congress of all political persuasions agree," said Co-Chair Ted Lieu (CA-36).

I am honored to join Congressman Jay Obernolte in leading this Task Force on AI, and honored to work with the bipartisan Members on the Task Force. I look forward to engaging with Members of both the Democratic Caucus and Republican Conference, as well as the Senate, to find meaningful, bipartisan solutions with regards to AI.

Membership

Rep. Ted Lieu (CA-36), Co-Chair; Rep. Anna Eshoo (CA-16); Rep. Yvette Clarke (NY-09); Rep. Bill Foster (IL-11); Rep. Suzanne Bonamici (OR-01); Rep. Ami Bera (CA-06); Rep. Don Beyer (VA-08); Rep. Alexandria Ocasio-Cortez (NY-14); Rep. Haley Stevens (MI-11); Rep. Sara Jacobs (CA-51); Rep. Valerie Foushee (NC-04); Rep. Brittany Pettersen (CO-07)

Rep. Jay Obernolte (CA-23), Chair; Rep. Darrell Issa (CA-48); Rep. French Hill (AR-02); Rep. Michael Cloud (TX-27); Rep. Neal Dunn (FL-02); Rep. Ben Cline (VA-06); Rep. Kat Cammack (FL-03); Rep. Scott Franklin (FL-18); Rep. Michelle Steel (CA-45); Rep. Eric Burlison (MO-07); Rep. Laurel Lee (FL-15); Rep. Rich McCormick (GA-06)

###

Go here to see the original:

HOUSE LAUNCHES BIPARTISAN TASK FORCE ON ARTIFICIAL INTELLIGENCE - Congressman Ted Lieu

Posted in Ai

Nvidia’s Q4 Earnings Blow Past Expectations as Company Benefits From AI Boom – Investopedia

Nvidia Corp. (NVDA) posted revenue and earnings for its fiscal fourth quarter that blew past market expectations, as the company continues to benefit from booming demand for equipment and services to support artificial intelligence (AI).

Shares of the company, which had fallen for four consecutive sessions ahead of Wednesday's eagerly anticipated earnings release, gained 9.1% to $735.94 in after-hours trading.

Nvidia said that revenue jumped to $22.10 billion in the quarter ending Jan. 28, compared with $6.05 billion a year earlier. Net income increased to $12.29 billion from $1.41 billion, while diluted earnings per share came in at $4.93, up from 57 cents a year earlier. Each of those numbers handily topped analysts' expectations.

Revenue for Nvidia's closely watched data-center business, which offers cloud and AI services, jumped to $18.40 billion, a five-fold increase from the year-ago period and also well above expectations.

"Accelerated computing and generative AI have hit the tipping point. Demand is surging worldwide across companies, industries and nations," Nvidia CEO Jensen Huang said in a press release, noting that the data center business has "increasingly diverse drivers."

"Vertical industriesled by auto, financial services and healthcareare now at a multibillion-dollar level," Huang added.

Nvidia's gross margin for the fourth quarter was 76%, up from 63.3% in the year-ago period. Nvidia's chief financial officer, Colette Kress, said the improvement was a function of the growth in the data center business, which was primarily driven by Nvidia's Hopper GPU computing platform.

Looking ahead, Nvidia says that fiscal first-quarter revenue is expected to come in at $24 billion, plus or minus 2%, which is above the consensus view from analysts. The company expects gross margin in the current quarter to rise slightly from the fourth-quarter figure.
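
For readers who want the guidance spelled out, here's the simple arithmetic behind the "plus or minus 2%" figure, along with the year-over-year revenue growth implied by the numbers above (a back-of-the-envelope sketch, not Investopedia's analysis).

```python
# Back-of-the-envelope math on the figures reported above.
q4_revenue = 22.10        # billions, quarter ending Jan. 28
q4_revenue_prior = 6.05   # billions, year earlier

growth = (q4_revenue / q4_revenue_prior - 1) * 100
print(f"Q4 revenue growth: {growth:.0f}%")  # roughly 265%

guidance_midpoint = 24.0  # billions, fiscal Q1 guidance
band = 0.02               # plus or minus 2%
low, high = guidance_midpoint * (1 - band), guidance_midpoint * (1 + band)
print(f"guided range: ${low:.2f}B to ${high:.2f}B")  # $23.52B to $24.48B
```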

Optimism around artificial intelligence helped push Nvidia's stock, which has more than tripled in the past year, to an all-time high last week. In the days leading up to the earnings release, analysts had raised their expectations even as investors expressed some concerns that the quarterly report might fall short of expectations.

The strong earnings report not only lifted Nvidia in extended trading but gave a boost to other chipmakers that have been riding the AI wave. Shares of Advanced Micro Devices (AMD), ARM Holdings (ARM), Broadcom (AVGO), Taiwan Semiconductor (TSM) and Super Micro Computer (SMCI) were all moving higher late Wednesday.

UPDATE: This article has been updated after initial publication to add comments from company executives, additional details from the earnings report and updated share prices.

Go here to read the rest:

Nvidia's Q4 Earnings Blow Past Expectations as Company Benefits From AI Boom - Investopedia

Posted in Ai

Satya Nadella says the explicit Taylor Swift AI fakes are ‘alarming and terrible’ – The Verge

Microsoft CEO Satya Nadella has responded to a controversy over sexually explicit AI-made fake images of Taylor Swift. In an interview with NBC Nightly News that will air next Tuesday, Nadella calls the proliferation of nonconsensual simulated nudes "alarming and terrible," telling interviewer Lester Holt that "I think it behooves us to move fast on this."

In a transcript distributed by NBC ahead of the January 30th show, Holt asks Nadella to react to "the internet exploding with fake, and I emphasize fake, sexually explicit images of Taylor Swift." Nadella's response manages to crack open several cans of tech policy worms while saying remarkably little about them -- which isn't surprising when there's no surefire fix in sight.

"I would say two things: One, is again I go back to what I think's our responsibility, which is all of the guardrails that we need to place around the technology so that there's more safe content that's being produced. And there's a lot to be done and a lot being done there. But it is about global, societal -- you know, I'll say, convergence on certain norms. And we can do -- especially when you have law and law enforcement and tech platforms that can come together -- I think we can govern a lot more than we think we give ourselves credit for."

Microsoft might have a connection to the faked Swift pictures. A 404 Media report indicates they came from a Telegram-based nonconsensual porn-making community that recommends using the Microsoft Designer image generator. Designer theoretically refuses to produce images of famous people, but AI generators are easy to bamboozle, and 404 found you could break its rules with small tweaks to prompts. While that doesn't prove Designer was used for the Swift pictures, it's the kind of technical shortcoming Microsoft can tackle.

But AI tools have massively simplified the process of creating fake nudes of real people, causing turmoil for women who have far less power and celebrity than Swift. And controlling their production isn't as simple as making huge companies bolster their guardrails. Even if major Big Tech platforms like Microsoft's are locked down, people can retrain open tools like Stable Diffusion to produce NSFW pictures despite attempts to make that harder. Far fewer users might access these generators, but the Swift incident demonstrates how widely a small community's work can spread.

There are other stopgap options like social networks limiting the reach of nonconsensual imagery or, apparently, Swiftie-imposed vigilante justice against people who spread them. (Does that count as "convergence on certain norms"?) For now, though, Nadella's only clear plan is putting Microsoft's own AI house in order.

Go here to read the rest:

Satya Nadella says the explicit Taylor Swift AI fakes are 'alarming and terrible' - The Verge

Posted in Ai

One month with Microsoft’s AI vision of the future: Copilot Pro – The Verge

Microsoft's Copilot Pro launched last month as a $20 monthly subscription that provides access to AI-powered features inside some Office apps, alongside priority access to the latest OpenAI models and improved image generation.

I've been testing Copilot Pro over the past month to see if it's worth the $20 subscription for my daily needs and just how good or bad the AI image and text generation is across Office apps like Word, Excel, and PowerPoint. Some of the Copilot Pro features are a little disappointing right now, whereas others are truly useful improvements that I'm not sure I want to live without.

Let's dig into everything you get with Copilot Pro right now.

One of the main draws of subscribing to Copilot Pro is an improved version of Designer, Microsoft's image creation tool. Designer uses OpenAI's DALL-E 3 model to generate content, and the paid Copilot Pro version creates widescreen images with far more detail than the free version.

I've been using Designer to experiment with images, and I've found it particularly impressive when you feed it as much detail as possible. Asking Designer for an image of a dachshund sitting by a window staring at a slice of bacon generates some good examples, but you can get Designer to do much more with some additional prompting. Adding in more descriptive language to generate a hyper-real painting with natural lighting, medium shot, and shallow depth of field will greatly improve image results.

As you can see in the two examples below, Designer gets the natural lighting correct, with some depth of field around the bacon. Unfortunately, there are multiple slices of bacon here instead of just one, and they're giant pieces of bacon.

Like most things involving AI, the Designer feature isn't perfect. I generated another separate image of a dog staring at bacon, and a giant piece of bacon was randomly inserted. In fact, I'd say most times only one or two of the four images that are produced are usable. DALL-E 3 still struggles with text, too, particularly if you ask Designer to add labels or signs that have text written on them.

It did a good job with an illustrated image of a UPS delivery man from 1910 in the style of early Japanese cartoons, even adding in the UPS logo, though it's a little wonky. Copilot Pro lets you generate 100 images per day, and it does so much faster than the free version.

Copilot Pro isn't all about image generation, though. This subscription unlocks the AI capabilities inside Office apps. Inside Word, you can use Copilot to generate text, which can be helpful for getting an outline of a document started or refining paragraphs.

If you have numerical data, you can also get Copilot to visualize this data as a graph or table, which is particularly useful for making text-heavy documents a little easier to read. If you highlight text, a little Copilot logo appears to nudge you into selecting it to rewrite that text or visualize it. If you select an entire paragraph, Copilot will try to rewrite it with different options you can cycle through and pick.

Like the image generation, the paragraph rewriting can be a little hit-and-miss, introducing different meanings to sentences by swapping out words. Overall, I didn't find that it improved my writing. For someone who doesn't write regularly, it might be a lot more useful.

Copilot in Outlook has been super useful to me personally. I use it every day to check summaries of emails, which helpfully appear at the top of emails. This might even tempt me to buy Copilot Pro just for this feature because it saves me so much time when I'm planning a project with multiple people.

It's also really helpful when you have a long-running email thread to just get a quick summary of all the key information. You can also use Copilot in Outlook to generate emails or craft replies. Much like Word, there's a rewrite tool here that lets you write a draft email that's then analyzed to produce suggestions for improving the tone or clarity of an email.

Copilot in PowerPoint is equally useful if you're not used to creating presentations. You can ask it to generate slides in a particular style, and you'll get an entire deck back within seconds. Designer is part of this feature, so you can dig into each individual slide and modify the images or text.

As someone who hates creating presentations, this is something I will absolutely use in the future. It certainly beats the PowerPoint templates you can find online. I did run into some PowerPoint slide generation issues, though, particularly where Copilot would sit there saying, "Still working on it," and not finish generating the slides.

Copilot in Excel seems to be the most limited part of the Copilot Pro experience right now. You need your data neatly arranged in a table. Otherwise, Copilot will want to convert it. Once you have data that works with Copilot, you can create visualizations, use data insights to create pivot tables, or even get formula suggestions. Copilot for Excel is still in preview, so I'd expect we'll see even more functionality here over time.

The final example of Copilot inside Office apps is OneNote. Much like Word, you can draft notes or plans here and easily rewrite text. Copilot also offers summaries of your notes, which can be particularly amusing if you attempt to summarize shorthand notes or incomplete notes that only make sense to your brain.

Microsoft is also rolling out a number of GPTs for fitness, travel, and cooking. These are essentially individual assistants inside Copilot that can help you find recipes, plan out a vacation itinerary, or create a personalized workout plan. Copilot Pro subscribers will soon be able to build their own custom GPTs around specific topics, too.

Overall, I think Copilot Pro is a good start for Microsoft's consumer AI efforts, but I'm not sure I'd pay $20 a month just yet. The image generation improvements are solid here and might be worth $20 a month for some.

Email summaries in Outlook tempt me into the subscription, but the text generation features aren't really all that unique in the Office apps. I feel like you can get just as good results using the free version of Copilot or even ChatGPT, but you'll have to do the manual (and less expensive) option of copying and pasting the results into a document.

The consumer Copilot Pro isn't as fully featured as the commercial version just yet, so I'd expect we'll see a lot of improvements over the coming months. Microsoft is showing no sign of slowing down with its AI efforts, and the company is set to detail more of its AI plans at Build in May.

See more here:

One month with Microsoft's AI vision of the future: Copilot Pro - The Verge

Posted in Ai

Samsung’s new phones replace Google AI with Baidu in China – The Verge

The list of AI translation, summarization, and text formatting features on the Chinese version of the Galaxy S24 will be familiar to anyone who kept up with its US-based launch. There's also real-time call translation like we saw earlier this month, and the phones are even getting a version of Google's Circle to Search feature.

"Now featuring Ernie's understanding and generation capabilities, the upgraded Samsung Note Assistant can translate content and also summarize lengthy content into clear, intelligently organized formats at the click of a button," Samsung Electronics China and Baidu said in a statement published by CNBC.

Samsung's hold on China has waned considerably in the last decade. A report this week from IDC didn't even place it in the top five brands for mobile shipments in 2023. In 2013, the company was the biggest smartphone manufacturer in the country, with a market share of around 20 percent, but its share had fallen to just 1 percent by 2018, where it's hovered ever since, Reuters reports, adding that partnerships with local content firms are part of its attempt to rebuild its Chinese business.

The rest is here:

Samsung's new phones replace Google AI with Baidu in China - The Verge

Posted in Ai

Researchers Say the Deepfake Biden Robocall Was Likely Made With Tools From AI Startup ElevenLabs – WIRED

This is not the first time that researchers have suspected ElevenLabs tools were used for political propaganda. Last September, NewsGuard, a company that tracks online misinformation, claimed that TikTok accounts sharing conspiracy theories using AI-generated voices, including a clone of Barack Obama's voice, used ElevenLabs technology. "Over 99 percent of users on our platform are creating interesting, innovative, useful content," ElevenLabs said in an emailed statement to The New York Times at the time, "but we recognize that there are instances of misuse, and we've been continually developing and releasing safeguards to curb them."

If the Pindrop and Berkeley analyses are correct, the deepfake Biden robocall was made with technology from one of the tech industry's most prominent and well-funded AI voice startups. As Farid notes, ElevenLabs is already seen as providing some of the highest-quality synthetic voice offerings on the market.

According to the company's CEO in a recent Bloomberg article, ElevenLabs is valued by investors at more than $1.1 billion. In addition to Andreessen Horowitz, its investors include prominent individuals like Nat Friedman, former CEO of GitHub, and Mustafa Suleyman, cofounder of AI lab DeepMind, now part of Alphabet. Investors also include firms like Sequoia Capital and SV Angel.

With its lavish funding, ElevenLabs is arguably better positioned than other AI startups to pour resources into creating effective safeguards against bad actors -- a task made all the more urgent by the upcoming presidential elections in the United States. "Having the right safeguards is important, because otherwise anyone can create any likeness of any person," Balasubramaniyan says. "As we're approaching an election cycle, it's just going to get crazy."

A Discord server for ElevenLabs enthusiasts features people discussing how they intend to clone Biden's voice, and sharing links to videos and social media posts highlighting deepfaked content featuring Biden or AI-generated dupes of Donald Trump and Barack Obama's voices.

Although ElevenLabs is a market leader in AI voice cloning, in just a few years the technology has become widely available for companies and individuals to experiment with. That has created new business opportunities, such as creating audiobooks more cheaply, but also increases the potential for malicious use of the technology. "We have a real problem," says Sam Gregory, program director at the nonprofit Witness, which helps people use technology to promote human rights. "When you have these very broadly available tools, it's quite hard to police."

While the Pindrop and Berkeley analyses suggest it could be possible to unmask the source of AI-generated robocalls, the incident also underlines how underprepared authorities, the tech industry, and the public are as the 2024 election season ramps up. It is difficult for people without specialist expertise to confirm the provenance of audio clips or check whether they are AI-generated. And more sophisticated analyses might not be completed quickly enough to offset the damage caused by AI-generated propaganda.

"Journalists and election officials and others don't have access to reliable tools to be doing this quickly and rapidly when potentially election-altering audio gets leaked or shared," Gregory says. "If this had been something that was relevant on election day, that would be too late."

Updated 1-27-2024, 3:15 pm EST: This article was updated to clarify the attribution of the statement from ElevenLabs. Updated 1-26-2024, 7:20 pm EST: This article was updated with comment from ElevenLabs.

Follow this link:

Researchers Say the Deepfake Biden Robocall Was Likely Made With Tools From AI Startup ElevenLabs - WIRED

Posted in Ai

2024 health tech budgets to be driven by AI tools, automation – STAT

Hospitals and clinics are expecting a slightly better 2024 compared to last year thanks to a return to mostly in-person care, patients resuming preventive visits and the gradual easing of labor costs and shortages. Still, the evaporation of pandemic-related emergency funding will deal a blow to resource-strained health systems, and leaders say they'll ramp up tech investments, including in artificial intelligence-based tools.

Health care providers' financial performance isn't uniform and varies widely by setting, like rural or urban, as well as size and range of services offered. But they already showed signs of gradual improvement in 2023, leaving more funds available for technology investments, analysts told STAT.

HCA Healthcare, a for-profit health system spanning more than 180 hospitals and 2,000 sites across the country, brought in nearly $48 billion in the first three quarters of 2023, compared to about $45 billion in that same period in 2022. Highmark Health, which operates both a health insurance business and a 14-hospital network in Pennsylvania and New York, took in about $20 billion in revenue in that same period, up 4% from 2022.

Continue reading here:

2024 health tech budgets to be driven by AI tools, automation - STAT

Posted in Ai

AI and satellite data helped uncover the ocean’s ‘dark vessels’ – Popular Science

Researchers can now access artificial intelligence analysis of global satellite imagery archives for an unprecedented look at humanity's impact and relationship to our oceans. Led by Global Fishing Watch, a Google-backed nonprofit focused on monitoring maritime industries, the open source project is detailed in a study published January 3 in Nature. It showcases never-before-mapped industrial effects on aquatic ecosystems thanks to recent advancements in machine learning technology.

The new research shines a light on "dark fleets," a term often referring to the large segment of maritime vessels that do not broadcast their locations. According to Global Fishing Watch's Wednesday announcement, as much as 75 percent of all industrial fishing vessels are hidden from public view.

As The Verge explains, maritime watchdogs have long relied on the Automatic Identification System (AIS) to track vessels' radio activity across the globe -- all the while knowing the tool was far from perfect. AIS requirements differ between countries and vessels, and it's easy to simply turn off a ship's transponder when a crew wants to stay off the grid. Hence the (previously murky) realm of dark fleets.

"On land, we have detailed maps of almost every road and building on the planet. In contrast, growth in our ocean has been largely hidden from public view," David Kroodsma, the nonprofit's director of research and innovation, said in an official statement on January 3. "This study helps eliminate the blindspots and shed light on the breadth and intensity of human activity at sea."

To fill this data void, researchers first collected 2 million gigabytes of global imaging data taken by the European Space Agency's Sentinel-1 satellite constellation between 2017 and 2021. Unlike AIS, the ESA satellite array's sensitive radar technology can detect surface activity or movement regardless of cloud coverage or time of day.

From there, the team combined this information with GPS data to highlight otherwise undetected or overlooked ships. A machine learning program then analyzed the massive information sets to pinpoint previously undocumented fishing vessels.
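
In engineering terms, the core of the approach is a matching problem: each radar-detected vessel position is compared against broadcast positions, and detections with no nearby, recent broadcast get flagged as "dark." The sketch below illustrates only that flagging step, with hypothetical data and made-up distance and time thresholds; it is not Global Fishing Watch's actual pipeline, which relies on machine learning models applied to Sentinel-1 imagery.

```python
# Minimal sketch (hypothetical data and thresholds): flag radar-detected
# vessels that have no nearby AIS broadcast as potential "dark" vessels.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Ping:
    lat: float
    lon: float
    hours: float  # observation time, in hours since some shared epoch

def km_between(a: Ping, b: Ping) -> float:
    """Great-circle (haversine) distance between two points, in kilometers."""
    dlat, dlon = radians(b.lat - a.lat), radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(h))

def find_dark_vessels(radar_detections, ais_broadcasts, max_km=5.0, max_hours=1.0):
    """Return radar detections with no AIS broadcast within max_km and max_hours."""
    dark = []
    for det in radar_detections:
        matched = any(
            km_between(det, ais) <= max_km and abs(det.hours - ais.hours) <= max_hours
            for ais in ais_broadcasts
        )
        if not matched:
            dark.append(det)
    return dark

if __name__ == "__main__":
    radar = [Ping(10.01, 100.02, 12.0), Ping(-3.50, 40.10, 12.2)]
    ais = [Ping(10.00, 100.00, 12.1)]  # only the first detection has a broadcast nearby
    print(len(find_dark_vessels(radar, ais)))  # -> 1
```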

The newest findings upend previous industry assumptions and showcase the troublingly large impact of dark fleets around the world.

"Publicly available data wrongly suggests that Asia and Europe have similar amounts of fishing within their borders, but our mapping reveals that Asia dominates: for every 10 fishing vessels we found on the water, seven were in Asia while only one was in Europe," Jennifer Raynor, a study co-author and University of Wisconsin-Madison assistant professor of natural resource economics, said in the announcement. "By revealing dark vessels, we have created the most comprehensive public picture of global industrial fishing available."

It's not all troubling revisions, however. According to the team's findings, the number of green offshore energy projects more than doubled over the five-year timespan analyzed. As of 2021, wind turbines officially outnumbered the world's oil platforms, with China taking the lead by increasing its number of wind farms by 900 percent.

"Previously, this type of satellite monitoring was only available to those who could pay for it. Now it is freely available to all nations," Kroodsma said in Wednesday's announcement, declaring the study marks "the beginning of a new era in ocean management and transparency."

Excerpt from:

AI and satellite data helped uncover the ocean's 'dark vessels' - Popular Science

Posted in Ai

AI is here and everywhere: 3 AI researchers look to the challenges ahead in 2024 – The Conversation Indonesia

2023 was an inflection point in the evolution of artificial intelligence and its role in society. The year saw the emergence of generative AI, which moved the technology from the shadows to center stage in the public imagination. It also saw boardroom drama in an AI startup dominate the news cycle for several days. And it saw the Biden administration issue an executive order and the European Union pass a law aimed at regulating AI, moves perhaps best described as attempting to bridle a horse that's already galloping along.

We've assembled a panel of AI scholars to look ahead to 2024 and describe the issues AI developers, regulators and everyday people are likely to face, and to give their hopes and recommendations.

Casey Fiesler, Associate Professor of Information Science, University of Colorado Boulder

2023 was the year of AI hype. Regardless of whether the narrative was that AI was going to save the world or destroy it, it often felt as if visions of what AI might be someday overwhelmed the current reality. And though I think that anticipating future harms is a critical component of overcoming ethical debt in tech, getting too swept up in the hype risks creating a vision of AI that seems more like magic than a technology that can still be shaped by explicit choices. But taking control requires a better understanding of that technology.

One of the major AI debates of 2023 was around the role of ChatGPT and similar chatbots in education. This time last year, most relevant headlines focused on how students might use it to cheat and how educators were scrambling to keep them from doing so in ways that often do more harm than good.

However, as the year went on, there was a recognition that a failure to teach students about AI might put them at a disadvantage, and many schools rescinded their bans. I don't think we should be revamping education to put AI at the center of everything, but if students don't learn about how AI works, they won't understand its limitations, and therefore how it is useful and appropriate to use and how it's not. This isn't just true for students. The more people understand how AI works, the more empowered they are to use it and to critique it.

So my prediction, or perhaps my hope, for 2024 is that there will be a huge push to learn. In 1966, Joseph Weizenbaum, the creator of the ELIZA chatbot, wrote that machines are often "sufficient to dazzle even the most experienced observer," but that once their inner workings are explained "in language sufficiently plain to induce understanding, its magic crumbles away." The challenge with generative artificial intelligence is that, in contrast to ELIZA's very basic pattern matching and substitution methodology, it is much more difficult to find language sufficiently plain to make the AI magic crumble away.

I think it's possible to make this happen. I hope that universities that are rushing to hire more technical AI experts put just as much effort into hiring AI ethicists. I hope that media outlets help cut through the hype. I hope that everyone reflects on their own uses of this technology and its consequences. And I hope that tech companies listen to informed critiques in considering what choices continue to shape the future.

Kentaro Toyama, Professor of Community Information, University of Michigan

In 1970, Marvin Minsky, the AI pioneer and neural network skeptic, told Life magazine, "In from three to eight years we will have a machine with the general intelligence of an average human being." With the singularity, the moment artificial intelligence matches and begins to exceed human intelligence, not quite here yet, it's safe to say that Minsky was off by at least a factor of 10. It's perilous to make predictions about AI.

Still, making predictions for a year out doesn't seem quite as risky. What can be expected of AI in 2024? First, the race is on! Progress in AI had been steady since the days of Minsky's prime, but the public release of ChatGPT in 2022 kicked off an all-out competition for profit, glory and global supremacy. Expect more powerful AI, in addition to a flood of new AI applications.

The big technical question is how soon and how thoroughly AI engineers can address the current Achilles' heel of deep learning: what might be called generalized hard reasoning, things like deductive logic. Will quick tweaks to existing neural-net algorithms be sufficient, or will it require a fundamentally different approach, as neuroscientist Gary Marcus suggests? Armies of AI scientists are working on this problem, so I expect some headway in 2024.

Meanwhile, new AI applications are likely to result in new problems, too. You might soon start hearing about AI chatbots and assistants talking to each other, having entire conversations on your behalf but behind your back. Some of it will go haywire, comically, tragically or both. Deepfakes, AI-generated images and videos that are difficult to detect, are likely to run rampant despite nascent regulation, causing more sleazy harm to individuals and democracies everywhere. And there are likely to be new classes of AI calamities that wouldn't have been possible even five years ago.

Speaking of problems, the very people sounding the loudest alarms about AI, like Elon Musk and Sam Altman, can't seem to stop themselves from building ever more powerful AI. I expect them to keep doing more of the same. They're like arsonists calling in the blaze they stoked themselves, begging the authorities to restrain them. And along those lines, what I most hope for in 2024, though it seems slow in coming, is stronger AI regulation, at national and international levels.

Anjana Susarla, Professor of Information Systems, Michigan State University

In the year since the unveiling of ChatGPT, the development of generative AI models is continuing at a dizzying pace. In contrast to ChatGPT a year back, which took in textual prompts as inputs and produced textual output, the new class of generative AI models are trained to be multi-modal, meaning the data used to train them comes not only from textual sources such as Wikipedia and Reddit, but also from videos on YouTube, songs on Spotify, and other audio and visual information. With the new generation of multi-modal large language models (LLMs) powering these applications, you can use text inputs to generate not only images and text but also audio and video.

Companies are racing to develop LLMs that can be deployed on a variety of hardware and in a variety of applications, including running an LLM on your smartphone. The emergence of these lightweight LLMs and open source LLMs could usher in a world of autonomous AI agents, a world that society is not necessarily prepared for.

These advanced AI capabilities offer immense transformative power in applications ranging from business to precision medicine. My chief concern is that such advanced capabilities will pose new challenges for distinguishing between human-generated content and AI-generated content, as well as pose new types of algorithmic harms.

The deluge of synthetic content produced by generative AI could unleash a world where malicious people and institutions can manufacture synthetic identities and orchestrate large-scale misinformation. A flood of AI-generated content primed to exploit algorithmic filters and recommendation engines could soon overpower critical functions such as information verification, information literacy and serendipity provided by search engines, social media platforms and digital services.

The Federal Trade Commission has warned about fraud, deception, infringements on privacy and other unfair practices enabled by the ease of AI-assisted content creation. While digital platforms such as YouTube have instituted policy guidelines for disclosure of AI-generated content, there's a need for greater scrutiny of algorithmic harms from agencies like the FTC and lawmakers working on privacy protections such as the American Data Privacy & Protection Act.

A new bipartisan bill introduced in Congress aims to codify algorithmic literacy as a key part of digital literacy. With AI increasingly intertwined with everything people do, it is clear that the time has come to focus not on algorithms as pieces of technology but to consider the contexts the algorithms operate in: people, processes and society.

Read the original post:

AI is here and everywhere: 3 AI researchers look to the challenges ahead in 2024 - The Conversation Indonesia

Posted in Ai

What software developers using ChatGPT can tell us about how it’s changing work – Quartz

In her first job since graduating from college, Eknoor Kaur works at a company where using AI chatbots is not unusual. At first the software engineer at Pathlight, which makes automation tools, was skeptical. But after a colleague mentioned that ChatGPT helped him work better and faster, she eased into the idea, and today she doesn't spend a workday without it.

Kaur keeps ChatGPT open on her desktop, typically posing the bot four or five questions a day. She doesn't use the tool to write code because she's worried about the hallucinatory, or made-up, answers that AI chatbots can provide. Instead, Kaur uses the system like a search engine, asking it programming-related questions she doesn't want to burden coworkers with.
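
For readers who want to picture that workflow, here is a minimal sketch of posing a programming question to a chat model from a script, using the OpenAI Python SDK. The model name and question are placeholders, and the snippet assumes an OPENAI_API_KEY environment variable is set; it is a generic illustration, not a description of Pathlight's tooling.

```python
# Minimal sketch: ask a chat model a programming question from a script.
# Assumes the `openai` package (v1+) is installed and OPENAI_API_KEY is set;
# the model name below is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "In Python, what's the idiomatic way to merge two dictionaries?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a concise programming assistant."},
        {"role": "user", "content": question},
    ],
)

print(response.choices[0].message.content)
```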

Not surprisingly, some of the earliest adopters of generative AI at work are software developers. Alongside ChatGPT maker OpenAI, companies like Microsoft and Salesforce have rolled out AI copilots, or digital assistants, for writing code. And while a slew of employers, including Apple, Bank of America, and Goldman Sachs, have blocked or limited ChatGPT on the job, things are different at plenty of tech companies, and startups in particular. Tech workers use a range of AI chatbots. Amazon developers, for instance, have their own version of ChatGPT called CodeWhisperer.

"ChatGPT is currently in the giant room-size machine phase," said David Baggett, founder of cybersecurity firm Inky. He likens our current chatbots to the computers of the 1950s: early-stage and used for a narrow range of tasks.

But engineers are leading the charge in using those chatbots at work, even in their limited capacity. Developers Quartz spoke with use ChatGPT to generate code for software, saving themselves anywhere from minutes to hours a day on writing, or to find information faster than using traditional online search methods.

Instead of starting on Google or Stack Overflow, a popular Q&A site for developers, which could take several pages or clicks to land on the right piece of code, developers can ask ChatGPT or another chatbot and get what they need with one prompt. "I do a lot less Googling," said Amin Ahmad, CEO of search software company Vectara and a former researcher at Google.

Developers can also prompt chatbots to write code for them, and adjust from there. "Everything hasn't worked on the first try," said Cody De Arkland, head of technical marketing at tech management platform LaunchDarkly. De Arkland said he uses ChatGPT as one of his final steps to see if there's a better way to optimize his code, like writing it more efficiently. He uses a few AI chatbots, including GitHub Copilot, which is paid for by his employer.

Generative AI doesn't always work for Baggett, either. In his experience, ChatGPT sometimes spits out an answer that doesn't work at all.

At LaunchDarkly, De Arkland recalls a teammate who estimated that coding a complex pricing calculator would take roughly two months, but after using ChatGPT, wrote the code in just a week and a half. The obvious case for coding with chatbots is speed: projects are finished faster, and engineers say they're shifting freed-up time into building better features.

"We're not going to end in a place where there's not enough work to go around," De Arkland said. "There's always going to be projects and new things that have to be built to fill up that space."

But there's a limit to what software engineers will share with AI chatbots. For example, none of the developers Quartz spoke with said they would paste entire blocks of code into ChatGPT or other chatbots, wary that the AI tool could compromise data privacy, or have trouble understanding large volumes of text. For some, it wasn't clear if their employer had guardrails to prevent people from entering personal data into a chatbot.

In general, developers say that ChatGPT takes away boring baseline work. Catherine Yeo, an engineer at coding software maker Warp, has used her company's AI chatbot for nine months. Even today, she always marvels when it does return an answer and solves her problems.

Vectara's Ahmad notes that a chatbot allows him to find new solutions to a problem he wouldn't have initially considered when writing code. But as a developer working on AI technology, he, like many non-technical workers, worries his job could be automated away.

See the original post:

What software developers using ChatGPT can tell us about how it's changing work - Quartz

Posted in Ai

At Morgan State, seeking AI that is both smart and fair – Baltimore Sun

Your application for college or a mortgage loan. Whether you're correctly diagnosed in the doctor's office, make it onto the short list for a job interview or get a shot at parole.

That bias can enter into these often life-altering decisions is nothing new. But today, with artificial intelligence assisting everyone from college admission directors to parole boards, a group of researchers at Morgan State University says the potential for racial, gender and other discrimination is amplified by magnitudes.

"You automate the bias, you multiply and expand the bias," said Gabriella Waters, a director at a Morgan State center seeking to prevent just that. "If you're doing something wrong, it's going to do it in a big way."

Waters directs research and operations for the Baltimore university's Center for Equitable Artificial Intelligence and Machine Learning Systems, or CEAMLS for short. Pronounced "seamless," it indeed brings together specialists from across disciplines ranging from engineering to philosophy with the goal of harnessing the power of artificial intelligence while ensuring it doesn't introduce or spread bias.

AI is a catchall phrase for systems that can process large amounts of data quickly and, mimicking human cognitive functions such as detecting patterns, predict outcomes and recommend decisions.

But therein lies both its benefits and pitfalls: as data points are introduced, so, too, can bias enter in. Facial recognition systems were found more likely to misidentify Black and Asian people, for example, and Amazon dumped a recruiting program that favored male over female applicants.

Bias also cropped up in an algorithm used to assess the relative sickness of patients, and thus the level of treatment they should receive, because it was based on the amount of previous spending on health care, meaning Black people, who are more likely to have lower incomes and less access to care to begin with, were erroneously scored as healthier than they actually were.

Don't blame the machines, though. They can only do what they do with what they're given.

"It's human beings that are the source of the data sets being correlated," Waters said. "Not all of this is intentional. It's just human nature."

Data can obscure the actual truths, she said. You might find that ice cream sales are high in areas where a lot of shark attacks occur, Waters said, but that, of course, doesn't mean one causes the other.
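
Waters' shark-and-ice-cream example is the classic confounding problem: a hidden third variable drives both quantities, so they correlate even though neither causes the other. The toy sketch below, with entirely made-up numbers, shows how readily such a correlation shows up in the kind of data an AI system might be trained on.

```python
# Illustrative sketch of a spurious correlation: a hidden confounder (beach
# attendance) drives both ice cream sales and shark encounters, so the two
# correlate strongly even though neither causes the other. Numbers are made up.
import numpy as np

rng = np.random.default_rng(0)

beach_visitors = rng.uniform(100, 10_000, size=200)            # hidden confounder
ice_cream_sales = 0.5 * beach_visitors + rng.normal(0, 200, 200)
shark_encounters = 0.001 * beach_visitors + rng.normal(0, 1, 200)

r = np.corrcoef(ice_cream_sales, shark_encounters)[0, 1]
print(f"correlation between ice cream sales and shark encounters: r = {r:.2f}")
# A model trained naively on this data would "learn" the association; it takes
# a human to recognize that beach attendance, not ice cream, explains it.
```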

The center at Morgan was created in July 2022 to find ways to address problems that already underlie existing AI systems, and create new technologies that avoid introducing bias.

As a historically Black university that has been boosting its research capacity in recent years, Morgan State is poised to put its own stamp on the AI field, said Kofi Nyarko, who is the CEAMLS director and a professor of electrical and computer engineering.

"Morgan has a unique position here," Nyarko said. "Yes, we have the experts in machine learning that we can pull from the sciences."

"But also we have a mandate. We have a mission that seeks to not only advance the science, but make sure that we advance our community such that they are involved in that process and that advancement."

Morgan State's AI research has been fueled by an influx of public and private funding, by its calculations nearly $18.5 million over the past three years. Many of the grants come from federal agencies, including the Office of Naval Research, which gave the university $9 million, the National Science Foundation and the National Institutes of Health.

Throughout the state, efforts are underway to catch up with the burgeoning field of AI, tapping into its potential while working to guard against any unintended consequences.

The General Assembly and Democratic Gov. Wes Moore's administration have both been delving into AI, seeking to understand how it can be used to improve state government services and ensure that its applications meet values such as equity, security and privacy.

That was part of the agenda of a Nov. 29 meeting of the General Assembly's Joint Committee on Cybersecurity, Information Technology, and Biotechnology, where some of Moore's newly appointed technology officials briefed state senators and delegates on the use of the rapidly advancing technology in state government.

"It's all moving very fast," said Nishant Shah, who in August was named Moore's senior advisor for responsible AI. "We don't know what we don't know."

Shah said he'll be working to develop a set of AI principles and values that will serve as a North Star for procuring AI systems and monitoring them for any possible harm. State tech staff are also doing an inventory of AI already in use (very little, according to a survey that drew limited response this summer) and hoping to increase the knowledge and skills of personnel across the government, he said.

At Morgan, Nyarko said he is heartened by the amount of attention in the state and also federally on getting AI right. The White House, for example, issued an executive order in October on the safe and responsible use of the technology.

"There is a lot of momentum now, which is fantastic," Nyarko said. "Are we there yet? No. Just as the technology evolves, the approach will have to evolve with it, but I think the conversations are happening, which is great."

Nyarko, who leads Morgan's Data Engineering and Predictive Analytics (DEPA) Research Lab, is working on ways to monitor the performance of cloud-based systems and whether they alter depending on variables such as a person's race or ethnicity. He's also working on how to objectively measure the very nebulous concept of fairness: could there be a consensus within the industry, for example, on benchmarks that everyone would use to test their systems' performance?

"Think about going to the grocery store and picking up a package with a nutrition label on it," Nyarko said. "It's really clear when you pick it up; you know what you're getting."

"What would that look like for the AI model? Pick up a product and flip it over, so to speak, metaphorically, see what its strengths are, what its weaknesses are, in what areas, what groups are impacted one way or the other."
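
One simple ingredient such a label could report is the gap in a model's error rates across demographic groups. The sketch below is a minimal, hypothetical illustration of that idea, comparing false positive rates for two groups on toy data; it is not CEAMLS's methodology or any industry-agreed benchmark, which is exactly the consensus Nyarko describes as still missing.

```python
# Minimal sketch (toy data): compare a classifier's false positive rate across
# two groups, one candidate ingredient for an AI "nutrition label."
from collections import defaultdict

def false_positive_rates(records):
    """records: iterable of (group, true_label, predicted_label) with 0/1 labels."""
    fp = defaultdict(int)   # false positives per group
    neg = defaultdict(int)  # actual negatives per group
    for group, truth, pred in records:
        if truth == 0:
            neg[group] += 1
            if pred == 1:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg if neg[g] > 0}

if __name__ == "__main__":
    data = [
        # (group, true_label, model_prediction) -- toy numbers only
        ("A", 0, 0), ("A", 0, 0), ("A", 0, 1), ("A", 1, 1),
        ("B", 0, 1), ("B", 0, 1), ("B", 0, 0), ("B", 1, 1),
    ]
    rates = false_positive_rates(data)
    print(rates)                                    # roughly {'A': 0.33, 'B': 0.67}
    print("FPR gap:", abs(rates["A"] - rates["B"]))
```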

The center's staff and students, ranging from undergrads to post-docs, are working on multiple projects: A child's toy car is parked in one room, awaiting further work to make it self-driving. There are autonomous wheelchairs, being tested at Baltimore/Washington International Thurgood Marshall Airport, where hopefully one day they can be ordered like an Uber.

Waters, who directs the Cognitive and Neurodiversity AI Lab at Morgan, is working on applications to help in diagnosing autism and assist those with autism in developing skills. With much autism research based on a small pool, usually boys and particularly white boys, she is working on using AI to observe and track children of other racial and ethnic groups in their family settings, seeking to tease out cultural differences that may mask symptoms of autism.

She is also working on using augmented reality glasses and AI to develop individualized programs for those with autism. The glasses would put an overlay on the real environment, prompting and rewarding the wearer to be more vocal, for example, or using a cartoon character to point to a location they should go to, such as a bathroom.

While the center works on projects that could find their way onto the marketplace, it maintains its focus on providing, as its mission statement puts it, "thought leadership in the application of fair and unbiased technology."

One only has to look at previous technologies that took unexpected turns from their original intent, said J. Phillip Honenberger, who joined the center from Morgan's philosophy and religious studies department. He specializes in the intersection of philosophy and science, and sees the center's work as an opportunity to get ahead of whatever unforeseen implications AI may have for our lives.

"Any socially disruptive technology almost never gets sufficient deliberation and reflection," Honenberger said. "They hit the market and start to affect people's lives before people really have a chance to think about what's happening."

Look at the way social media affected the political space, Honenberger said. No one thought, he said, "We're going to build this thing to connect people with their friends and family, and it's going to change the outcome of elections, it's going to lead to polarization and disinformation and all the other negative effects."

"Technology tends to have a reflection and deliberation deficit," Honenberger said.

But, he said, that doesn't mean innovation should be stifled because it might lead to unintended consequences.

"The solution is to build ethical capacity, build reflective and deliberative capacity," he said, "and that's what we're in the business of doing."

See more here:

At Morgan State, seeking AI that is both smart and fair - Baltimore Sun

Posted in Ai

Opinion | A.I. Use by Law Enforcement Must Be Strictly Regulated – The New York Times

One of the most hopeful proposals involving police surveillance emerged recently from a surprising quarter: the federal Office of Management and Budget. The office, which oversees the execution of the president's policies, has recommended sorely needed constraints on the use of artificial intelligence by federal agencies, including law enforcement.

The office's work is commendable, but shortcomings in its proposed guidance to agencies could still leave people vulnerable to harm. Foremost among them is a provision that would allow senior officials to seek waivers by arguing that the constraints would hinder law enforcement. Those law enforcement agencies should instead be required to provide verifiable evidence that A.I. tools they or their vendors use will not cause harm, worsen discrimination or violate people's rights.

As scholars of algorithmic tools, policing and constitutional law, we have witnessed the predictable and preventable harms from law enforcement's use of emerging technologies. These include false arrests and police seizures, including a family held at gunpoint, after people were wrongly accused of crimes because of the irresponsible use of A.I.-driven technologies including facial recognition and automated license plate readers.

Consider the cases of Porcha Woodruff, Michael Oliver and Robert Julian-Borchak Williams. All were arrested between 2019 and 2023 after they were misidentified by facial recognition technology. These arrests had indelible consequences: Ms. Woodruff was eight months pregnant when she was falsely accused of carjacking and robbery; Mr. Williams was arrested in front of his wife and two young daughters as he pulled into his driveway from work. Mr. Oliver lost his job as a result.

All are Black. This should not be a surprise. A 2018 study co-written by one of us (Dr. Buolamwini) found that three commercial facial-analysis programs from major technology companies showed both skin-type and gender biases. The darker the skin, the more often the errors arose. Questions of fairness and bias persist about the use of these sorts of technologies.

Errors happen because law enforcement deploys emerging technologies without transparency or community agreement that they should be used at all, with little or no consideration of the consequences, insufficient training and inadequate guardrails. Often the data sets that drive the technologies are infected with errors and racial bias. Typically, the officers or agencies face no consequences for false arrests, increasing the likelihood they will continue.

The Office of Management and Budget guidance, which is now being finalized after a period of public comment, would apply to law enforcement technologies such as facial recognition, license-plate readers, predictive policing tools, gunshot detection, social media monitoring and more. It sets out criteria for A.I. technologies that, without safeguards, could put peoples safety or well-being at risk or violate their rights. If these proposed minimum practices are not met, technologies that fall short would be prohibited after next Aug. 1.

Here are highlights of the proposal: Agencies must be transparent and provide a public inventory of cases in which A.I. was used. The cost and benefit of these technologies must be assessed, a consideration that has been altogether absent. Even if the technology provides real benefits, the risks to individuals, especially in marginalized communities, must be identified and reduced. If the risks are too high, the technology may not be used. The impact of A.I.-driven technologies must be tested in the real world, and be continually monitored. Agencies would have to solicit public comment before using the technologies, including from the affected communities.

The proposed requirements are serious ones. They should have been in place before law enforcement began using these emerging technologies. Given the rapid adoption of these tools, without evidence of equity or efficacy and with insufficient attention to preventing mistakes, we fully anticipate some A.I. technologies will not meet the proposed standards and their use will be banned for noncompliance.

The overall thrust of the federal A.I. initiative is to push for rapid use of untested technologies by law enforcement, an approach that too often fails and causes harm. For that reason, the Office of Management and Budget must play a serious oversight role.

Far and away, the most worrisome elements in the proposal are provisions that create the opportunity for loopholes. For example, the chief A.I. officer of each federal agency could waive proposed protections with nothing more than a justification sent to the Office of Management and Budget. Worse yet, the justification need only claim an "unacceptable impediment to critical agency operations," the sort of claim law enforcement regularly makes to avoid regulation.

This waiver provision has the potential to wipe away all that the proposal promises. No waiver should be permitted without clear proof that it is essential, proof that in our experience law enforcement typically cannot muster. No one person should have the power to issue such a waiver. There must be careful review to ensure that waivers are legitimate. Unless the recommendations are enforced strictly, we will see more surveillance, more people forced into unjustified encounters with law enforcement, and more harm to communities of color. Technologies that are clearly shown to be discriminatory should not be used.

There is also a vague exception for "national security," a phrase frequently used to excuse policing from legal protections for civil rights and against discrimination. National security requires a sharper definition to prevent the exemption from being invoked without valid cause or oversight.

Finally, nothing in this proposal applies beyond federal government agencies. The F.B.I., the Transportation Security Administration and other federal agencies are aggressively embracing facial recognition and other biometric technologies that can recognize individuals by their unique physical characteristics. But so are state and local agencies, which do not fall under these guidelines. The federal government regularly offers federal funding as a carrot to win compliance from state and local agencies with federal rules. It should do the same here.

We hope the Office of Management and Budget will set a higher standard at the federal level for law enforcement's use of emerging technologies, a standard that state and local governments should also follow. It would be a shame to make the progress envisioned in this proposal and have it undermined by backdoor exceptions.

Joy Buolamwini is the founder of the Algorithmic Justice League, which seeks to raise awareness about the potential harms of artificial intelligence, and the author of "Unmasking AI: My Mission to Protect What Is Human in a World of Machines." Barry Friedman is a professor at New York University's School of Law and the faculty director of its Policing Project. He is the author of "Unwarranted: Policing Without Permission."

Read more from the original source:

Opinion | A.I. Use by Law Enforcement Must Be Strictly Regulated - The New York Times

Posted in Ai

UBS boosts AI revenue forecast by 40%, calls industry the ‘tech theme of the decade’ – CNBC

UBS is getting more bullish on the outlook for artificial intelligence and bracing for another prosperous year for the "tech theme of the decade." The firm lifted its revenue forecast for AI by 40%. UBS expects revenues to grow by 15 times, from $28 billion in 2022 to $420 billion by 2027, as companies invest in infrastructure for models and applications. In comparison, it took the smart devices industry more than 10 years for its revenues to grow by 15 times.

"As a result, we believe AI will remain the key theme driving global tech stocks again in 2024 and the rest of the decade," wrote Sundeep Gantori, adding that AI growth could result in consolidation that favors the giants getting bigger and "industry leaders with deep pockets and first-mover advantages."

AI was the trade to invest behind in 2023, boosting chipmaker Nvidia nearly 240% as Wall Street bet on its graphics processing units powering large language models. Other semiconductor stocks, including Advanced Micro Devices, Broadcom and Marvell Technology, also rallied on the theme, with the VanEck Semiconductor ETF (SMH) notching its second-best year on record with a 72% gain.

The world may only be in the early innings of a yearslong AI wave, but UBS views semiconductors and software as the best areas to position in 2024. Both industries should post double-digit profit growth and operating margins exceeding 30%, above the 22% average for the global IT industry. "Semiconductors, while cyclical, are well positioned to benefit from solid near-term demand for AI infrastructure," the firm said. "Meanwhile, software, with broadening AI demand trends from applications and models, is a defensive play, thanks to its strong recurring revenue base." Within the semiconductor industry, UBS favors logic, memory, capital equipment and foundry names, while companies exposed to office productivity, cloud and models appear best situated in software. CNBC's Michael Bloom contributed reporting.

See more here:

UBS boosts AI revenue forecast by 40%, calls industry the 'tech theme of the decade' - CNBC

Posted in Ai

Intel Hires HPE’s Justin Hotard To Lead Data Center And AI Group – CRN

By becoming the leader of Intel's Data Center and AI Group, former Hewlett Packard Enterprise executive Justin Hotard will take over a business that is fighting competition on multiple fronts, including against AMD in the x86 server CPU market and Nvidia in the AI computing space.

Intel said it has hired Hewlett Packard Enterprise rising star Justin Hotard to lead the company's prized Data Center and AI Group.

The Santa Clara, Calif.-based chipmaker said Wednesday that Hotard will become executive vice president and general manager of the business, effective Feb. 1. He will succeed Sandra Rivera, who moved to lead Intel's Programmable Solutions Group as a new stand-alone business under the company's ownership on Monday.

Hotard was most recently executive vice president and general manager of high-performance computing, AI and labs at HPE, where he was responsible for delivering AI capabilities to customers addressing some of the world's most complex problems through data-intensive workloads, according to the semiconductor giant.

By becoming the leader of Intel's Data Center and AI Group, Hotard will take over a business that is fighting competition on multiple fronts: against AMD in the x86 server CPU market, against Nvidia, AMD and smaller firms in the AI computing space, and against the rise of Arm-based server chips from Ampere Computing, Amazon Web Services and Microsoft Azure.

Just last month, the Data Center and AI Group marked the launch of its fifth-generation Xeon processors, which the company said deliver AI acceleration in every core on top of outperforming AMD's latest EPYC chips around the clock. And the business is also fighting its way to win market share from Nvidia in the AI computing market with not just its Xeon CPUs but also its Gaudi accelerator chips and a differentiated software strategy.

The semiconductor giant is making these moves as part of Intel CEO Pat Gelsinger's grand comeback plan, which seeks to put the company ahead of Asian contract chip manufacturers TSMC and Samsung in advanced chip-making capabilities by 2025 to unlock new momentum.

"Justin is a proven leader with a customer-first mindset and has an impressive track record in driving growth and innovation in the data center and AI," Gelsinger said in a statement.

"Justin is committed to our vision to create world-changing technologies and passionate about the critical role Intel will play in empowering our customers for decades to come," he added.

Go here to read the rest:

Intel Hires HPE's Justin Hotard To Lead Data Center And AI Group - CRN

Posted in Ai

AI predictions for the new year – POLITICO – POLITICO

Mass General Brigham physicians are envisioning the future of AI in medicine. | Courtesy of Mass General Brigham

Will 2024 be the year that artificial intelligence transforms medicine?

Leaders at one of America's top hospital systems, Mass General Brigham in Boston, might not go that far, but they have high hopes.

Their new year's predictions span departments and specialties, some patient-facing and others for the back office.

Here's what they foresee:

Neurosurgery could see advancements in AI and machine learning, according to Dr. Omar Arnaout, a neurosurgeon at Brigham and Women's Hospital. The tech could better tailor treatment plans to patients, more accurately predict outcomes and add new precision to surgeries.

Radiology's continued integration of AI could revolutionize the accuracy of diagnostics and treatments, said Dr. Manisha Bahl, a physician investigator in Mass General's radiology department. And she sees liquid biopsies taking on more of a role as AI makes it easier to detect biomarkers.

Patient chatbots will likely become more popular, according to Dr. Marc Succi, executive director of Mass General MESH Incubator, a center at the health system that, with Harvard Medical School, looks to create new approaches to health care. That could make triaging more efficient.

Smarter robots could even come to patient care because of AI, according to Randy Trumbower, director of the INSPIRE Lab, affiliated with Mass General Brigham. He and his team are studying semi-autonomous robots that use AI to better care for people with severe spinal cord injuries.

And AI tools themselves could see innovations that make them more appealing for medical use, Dr. Danielle Bitterman, an assistant professor at BWH and a faculty member on the Artificial Intelligence in Medicine program at Mass General Brigham, said. Breakthroughs could make AI systems more efficient and better at quickly incorporating current clinical information for the best patient care across specialties.

Germans are taking more mental health days off work, Die Welt reports. The number hit a record in 2022 and doubled over the previous decade.

Adopting new technology is as much a cultural issue as a technical one, the AMA says. | Anne-Christine Poujoulat/AFP via Getty Images

Health care providers can devise new ways to care for patients with digital tools, but the people building the tech and running hospitals need to be thoughtful about implementation.

All sides of the business must work together to ensure the success and safety of the new tech, including AI-driven tools, according to guidance from the American Medical Association.

Many hurdles standing in the way of digital health models aren't technical but cultural and operational, the doctors' group says.

To advance patient care and leverage technology along the way, the AMA says health care executives should:

Prepare to share more data. With regulators moving to safeguard the exchange of patient data, organizations can prepare to follow the rules even before a partnership forms.

Find common goals early. Once partnerships form, clarifying the purpose, value and concerns early on can improve prospects for successful implementation.

Make sure clinicians are in the loop. Builders of new data systems should keep the needs of doctors and nurses in mind to ensure the updates aid in patient care and don't get in the way.

Keep patients in mind. Patients who can access and use their health data are more engaged in their care.

Schistosomiasis affects at least 250 million people living in places without access to clean, safe drinking water and sanitation. | Marcus Perkins for Merck KGaA

Preschool children infected with schistosomiasis, the second-most widespread tropical disease after malaria, could finally have a treatment.

In mid-December, Europe's drug regulator, the European Medicines Agency, endorsed Merck Europe's Arpraziquantel, the first drug formulated specifically to treat small children who get the disease, caused by a parasitic worm that can remain in the body for many years and cause organ damage.

Some 50 million children, ages 3 months to 6 years and mostly in Africa, could benefit.

The European Medicines Agency's positive scientific opinion will streamline the drug's endorsement by the World Health Organization, which makes it easier for countries where the disease is endemic to register the new formulation for children.

Why it matters: Also known as bilharzia, schistosomiasis affects at least 250 million people living in places without access to clean, safe drinking water and sanitation. It's long been neglected by drugmakers.

The disease disables more than it kills, according to the WHO. In children, schistosomiasis can cause anemia, stunted growth and learning disabilities.

The effects are usually reversible through treatment with praziquantel, a drug developed in the 1970s, which Merck donates through WHO to 45 countries in sub-Saharan Africa.

The company provides up to 250 million tablets of praziquantel a year to treat school-aged children in the region, Johannes Waltz, head of Merck's Schistosomiasis Elimination Program, told Carmen. "Our focus in the treatment is on school-aged children because the effect is the worst and there's hope that there's long-term effect if you treat regularly," he said.

The new formulation will make it easier to treat smaller children. They now receive part of a crushed praziquantel tablet, depending on how much they weigh.

Arpraziquantel is water-soluble. The taste is tolerable for kids, and it withstands hot environments, the European Medicines Agency said.

Original post:

AI predictions for the new year - POLITICO - POLITICO

Posted in Ai