
Category Archives: Ai

Chinese national arrested and charged with stealing AI trade secrets from Google – NPR

Posted: March 8, 2024 at 6:26 am

A former Google engineer was charged with stealing AI technology while secretly working with two China-based companies. (Carl Court/Getty Images)

A Chinese national who allegedly stole more than 500 files from Google with confidential information on the company's AI technology has been arrested and charged with stealing trade secrets, according to the Justice Department.

The defendant, former Google employee Linwei Ding, was arrested Wednesday morning in Newark, Calif. The 38-year-old faces four counts of theft of trade secrets. Prosecutors say that while Ding was working for Google and stealing the building blocks of its AI technology, he was also secretly employed by two China-based tech companies.

"The Justice Department will not tolerate the theft of artificial intelligence and other advanced technologies that could put our national security at risk," Attorney General Merrick Garland said in a statement. "We will fiercely protect sensitive technologies developed in America from falling into the hands of those who should not have them."

The case is the latest example of what American officials say is a relentless campaign by China to try to steal U.S. trade secrets, technology and intellectual property. Officials say China aims to use those stolen secrets to supplant the U.S. as the world's leading power.

"Today's charges are the latest illustration of the lengths affiliates of companies based in the People's Republic of China are willing to go to steal American innovation," said FBI Director Christopher Wray. "The theft of innovative technology and trade secrets from American companies can cost jobs and have devastating economic and national security consequences."

The U.S. is the global leader in AI, an emerging technology that could reshape many facets of modern life.

AI also could become an indispensable tool to help law enforcement protect public safety. But Justice Department officials also have warned of the potential dangers that AI poses to national security if it falls into the hands of criminals or hostile nation states.

The department has also formed a unit to protect advanced American technology such as AI from being pilfered by foreign adversaries.

In Ding's case, the indictment says the trade secrets he allegedly stole are related to "the hardware infrastructure and software platform that allow Google's supercomputing data centers to train large AI models through machine learning."

Google spokesperson Jose Castaneda said the company has "strict safeguards to prevent theft of our confidential commercial information and trade secrets."

"After an investigation, we found that this employee stole numerous documents, and we quickly referred the case to law enforcement," Castaneda said. "We are grateful to the FBI for helping protect our information and will continue cooperating with them closely."

The indictment says Ding was hired at Google as a software engineer in 2019. His work focused on the development of software related to machine learning and AI applications, according to prosecutors.

In May of 2022, Ding allegedly began uploading confidential information, more than 500 unique files in all, from Google's network into a personal Google Cloud account.

Prosecutors say Ding tried to hide what he was doing by copying the stolen files first into the Apple Notes application on his laptop, converting them into PDF files and uploading those into his personal Cloud account.

Less than a month later, court papers say, Ding received emails from the head of a Chinese technology company, Beijing Rongshu Lianzhi Technology, with an offer to be the company's chief technology officer.

Ding allegedly traveled to China to help raise money for the company, which worked on AI, and was announced as the company's CTO. A year later, Ding also allegedly founded his own technology company, Zhisuan, that also focused on AI and machine learning.

Prosecutors say Ding never informed Google of his ties to either Chinese company, and continued to be employed by Google.

Then in December 2023, court papers say, Google detected Ding trying to upload more files from the company's network to his personal account while he was in China. Ding allegedly told the company's investigator that he'd uploaded the files as evidence of his work for Google.

A week after being interviewed by the investigator, Ding allegedly booked a one-way ticket to Beijing. He then sent his resignation letter to Google. Shortly after that, the company learned of Ding's role with Zhisuan. Google then suspended his access to the company's networks.

Shortly after that, the FBI began its investigation.


Revolutionize Your Business with AWS Generative AI Competency Partners | Amazon Web Services – AWS Blog

Posted: at 6:26 am

By Chris Dally, Business Designation Owner, AWS; Victor Rojo, Technical Designation Lead, AWS; Chris Butler, Sr. Product Manager, Launch, AWS; and Justin Freeman, Sr. Partner Development Specialist, Catalyst, AWS

In today's rapidly evolving technology landscape, generative artificial intelligence (AI) is leading the charge in innovation, revolutionizing the way organizations work. According to a McKinsey report, generative AI could account for over 75% of total yearly AI value, with high expectations for major or disruptive change in industries. Additionally, the report states generative AI technologies have the potential to automate work activities that absorb 60-70% of employees' time.

With the ability to automate tasks, enhance productivity, and enable hyper-personalized customer experiences, businesses are seeking specialized expertise to build a successful generative AI strategy.

To support this need, we're excited to announce the AWS Generative AI Competency, an AWS Specialization that helps Amazon Web Services (AWS) customers more quickly adopt generative AI solutions and strategically position themselves for the future. AWS Generative AI Competency Partners provide a full range of services, tools, and infrastructure, with tailored solutions in areas like security, applications, and integrations to give customers flexibility and choice across models and technologies.

"Partners play an important role in supporting AWS customers leveraging our comprehensive suite of generative AI services. We are excited to recognize and highlight partners with proven customer success with generative AI on AWS through the AWS Generative AI Competency, making it easier for our customers to find and identify the right partners to support their unique needs." ~ Swami Sivasubramanian, Vice President of Database, Analytics and ML, AWS

According to Canalys, AWS is the first to launch a Generative AI competency for partners. By validating the partners business and technical expertise in this way, AWS customers are able to invest with greater confidence in generative AI solutions from these partners. This new competency is a critical entry point into the Generative AI partner opportunity, which Canalys estimates will grow to US$158 billion by 2028.

"Generative AI has truly ushered in a new era of innovation and transformative value across both business and technology. A recent Canalys study found that 87% of customers rank partner specializations as a top three selection criteria. With the AWS Generative AI Competency launch, we're helping customers take advantage of the capabilities that our technically validated Generative AI Partners have to offer." ~ Ruba Borno, Vice President of AWS Worldwide Channels and Alliances

Leveraging AI technologies such as Amazon Bedrock, Amazon SageMaker JumpStart, AWS Trainium, AWS Inferentia, and accelerated computing instances on Amazon Elastic Compute Cloud (Amazon EC2), AWS Generative AI Competency Partners have deep expertise building and deploying groundbreaking applications across industries, including healthcare and life sciences, media and entertainment, public sector, and financial services.
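
As a concrete illustration of what building on one of these services looks like, below is a minimal sketch of invoking a foundation model through Amazon Bedrock's runtime API with boto3. The model ID, region, and prompt are placeholder assumptions, and the sketch presumes your AWS account has been granted access to that model in that region.

```python
import json

import boto3

# The "bedrock-runtime" client handles inference; the separate
# "bedrock" client is for control-plane operations.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Placeholder model choice; any Bedrock model ID your account
# has access to can be substituted here.
model_id = "anthropic.claude-3-sonnet-20240229-v1:0"

body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [
        {"role": "user",
         "content": [{"type": "text",
                      "text": "List three generative AI use cases for retail."}]},
    ],
})

response = client.invoke_model(
    modelId=model_id,
    body=body,
    contentType="application/json",
    accept="application/json",
)

# The response body is a streaming blob; read and decode the JSON payload.
result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```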

We invite you to explore the following AWS Generative AI Competency Launch Partner offerings recommended by AWS.

These AWS Partners have deep expertise working with businesses to help them adopt and strategize generative AI, build and test generative AI applications, train and customize foundation models, operate, support, and maintain generative AI applications and models, protect generative AI workloads, and define responsible AI principles and frameworks.

These AWS Partners utilize foundation models (FMs) and related technologies to automate domain-specific functions, enhancing customer differentiation across all business lines and operations. Partners fall into three categories: Generative AI applications, Foundation Models and FM-based Application Development, and Infrastructure and Data.

AWS Generative AI Competency Partners make it easier for customers to innovate with enterprise-grade security and privacy, foundation models, generative AI-powered applications, a data-first approach, and a high-performance, low-cost infrastructure.

Explore the AWS Generative AI Partners page to learn more.

AWS Partners with Generative AI offerings can learn more about becoming an AWS Competency Partner.

AWS Specialization Partners gain access to strategic and confidential content, including product roadmaps, feature release previews, and demos, as part of the AWS PartnerEquip event series. To attend live events in your region or tune in virtually, register for an upcoming session. In addition to AWS Specialization Program benefits, AWS Generative AI Competency Partners receive unique benefits such as bi-annual strategy sessions to aid joint sales motions. To learn more, review the AWS Specialization Program Benefits Guide in AWS Partner Central (login required).

AWS Partners looking to get their Generative AI offering validated through the AWS Competency Program must be validated or differentiated members of the Software or Services Path prior to applying.

To apply, please review the Program Guide and access the application in AWS Partner Central.


Micron Hits Record High With Analysts Calling It an ‘Under-Appreciated AI Beneficiary’ – Investopedia

Posted: at 6:26 am


Micron Technology (MU) shares rose to a record high Thursday as analysts from Goldman Sachs and Stifel raised their price targets on the stock, citing the company's position amid the artificial intelligence (AI) boom.

Shares of Micron closed 3.6% higher at $98.98 Thursday, contributing to a more than 20% increase since the start of 2024.

Goldman Sachs analysts raised their price target for Micron to $112 from $103 with a "buy" rating, saying that the company is an "under-appreciated AI beneficiary."

"We believe Micron is well-positioned to benefit from the proliferation of AI across data centers (i.e. the core) and the edge (e.g. PCs, smartphones) as demand for more compute drives an increase in content," they said.

The analysts noted that the stock's year-to-date gains were more muted compared to those of some of its peers in the compute and networking space, nodding to Nvidia (NVDA) and Arm (ARM). Nvidia shares have nearly doubled while Arm shares have more than doubled in value since the start of 2024.

Stifel analysts indicated that the firm believes consensus estimates are "wrong and too low," adding that it anticipates Micron "breaking out to higher highs, perhaps aided by the most compelling growth-valuation ratio amongst larger cap 'AI' relevant stocks."

The analysts upgraded the stock to a "buy" rating from "hold" and raised their price target to $120 from $80.

Stifel said that Micron's position amid the AI boom drove the stock upgrade. Generative AI (GenAI) needs high bandwidth memory (HBM), "and Micron now has a seat at the table," Stifel analysts wrote.

Micron announced in February that it began mass production of an HBM chip for Nvidia's AI graphics processing units (GPUs), bolstering its position in the AI ecosystem.


The Adams administration quietly hired its first AI czar. Who is he? – City & State New York

Posted: at 6:26 am

New York City has quietly filled the role of director of artificial intelligence and machine learning, City & State has learned. In mid-January, Jiahao Chen, a former director of AI research at JPMorgan Chase and the founder of independent consulting company Responsible AI LLC, took on the role, which has been described by the city's Office of Technology and Innovation as spearheading the city's comprehensive AI strategy.

Despite Mayor Eric Adams' administration publicizing the position last January, Chen's hiring nearly a year later came without any fanfare or even an announcement. The first mention of Chen as director of AI came in a press release sent out by the Office of Technology and Innovation on Thursday morning, announcing next steps in the city's AI Action Plan. "OTI Director of AI and Machine Learning Jiahao Chen will manage implementation of the Action Plan," the press release noted.

New York City previously had an AI director under former Mayor Bill de Blasio's administration. Neal Parikh served as the city's director of AI under the office of former Chief Technology Officer John Paul Farmer, which released a citywide AI strategy in 2021. Under de Blasio, the city also had an algorithms management and policy officer to guide the city in the development, responsible use and assessment of algorithmic tools, which can include AI and machine learning. The old CTO's office and the work of the algorithms officer were consolidated along with the city's other technology-related offices into the new Office of Technology and Innovation at the outset of the Adams administration.

The Adams administration has referred to its own director of AI and machine learning as a new role, however, and has suggested that the position will be more empowered, in part because it is under the larger, centralized Office of Technology and Innovation. According to the job posting last January, which noted a $75,000 to $140,000 pay range, the director will be responsible for helping agencies use AI and machine learning tools responsibly, consulting with agencies on questions about AI use and governance, and serving as a subject matter expert on citywide policy and planning, among other things. How the role will actually work in practice remains to be seen.

The Adams administration's AI action plan was published in October, and is a 37-point road map aimed at helping the city responsibly harness the power of AI for good. On Thursday, the Office of Technology and Innovation announced the first update on the action plan, naming members of an advisory network that will consult on the city's work. That list includes former City Council Member Marjorie Velázquez, who is now vice president of policy at Tech:NYC. The office also released a set of AI principles and definitions, and guidance on generative AI.

OTI spokesperson Ray Legendre said that an offer for the position of director of AI was extended to Chen before the city's hiring freeze began last October. The office did not explicitly address why Chen's hiring wasn't announced when he started the role. "Over the past two months, Jiahao has been a key part of our ongoing efforts to implement the AI Action Plan," Legendre wrote in an email. "Our focus at OTI over the past few months has been on making progress on the Action Plan which is what we announced today."

According to the website for Responsible AI LLC, Chen's independent consulting company, Chen's resume includes stints in academia as well as the private sector, including as a senior manager of data science at Capital One, and as director of AI research at JPMorgan Chase.

After City & State inquired about Chen's role, Chen confirmed it on X, writing, "I can finally talk about my new job!"


Artificial intelligence and illusions of understanding in scientific research – Nature.com

Posted: at 6:25 am


Analysis | House AI task force leaders take long view on regulating the tools – The Washington Post

Posted: at 6:25 am

Happy Thursday! Maybe this year we'll get lucky and the State of the Union will feature a discussion of intermediary liability or duty of care standards. Send predictions and observations to: cristiano.lima@washpost.com.

House leaders took a key step toward hatching a game plan on artificial intelligence last month by launching a new bipartisan task force, which will issue recommendations for how Congress could boost AI innovation while keeping the tools in check.

But the lawmakers leading the effort told The Technology 202 in a joint interview that implementing a full response will probably be a lengthy undertaking as they consider the technology's vast impact across elections, national security, the economy and more.

Rep. Jay Obernolte (R-Calif.), who was tapped by House leaders to chair the group, pointed to Europe's efforts to agree on a comprehensive AI law as a cautionary tale.

"If you look at the attempts in Europe to create an omnibus bill for the regulation of AI, you'll see some of the fallacies in that," said Obernolte, one of the few lawmakers with computer science bona fides. "They've had to rewrite that bill several times as the face of AI has changed."

"We don't envision a 5,000-page bill that deals with 57 topics and then we're done with AI," said Rep. Ted Lieu (D-Calif.), the task force's co-chair. "It's going to be a multiyear process, and there'll be a variety of different bills that try to tackle different aspects of AI."

The task force is set to release a report by the end of the year, but that doesn't preclude more immediate legislative action on discrete issues, Obernolte and Lieu said.

Like Senate Majority Leader Charles E. Schumer (D-N.Y.), Obernolte pointed to the risks that AI-generated content poses to elections as one area with potential for fast action.

"There should be broad bipartisan agreement that no one should be allowed to impersonate a candidate with AI, so we're going to be looking at what we can do to tighten up the regulations to try and prevent that," he said.

Lieu seconded the sentiment and floated the idea of criminal and civil enhancements to make fines or jail time steeper if certain crimes are perpetrated using AI.

"One way to provide more deterrence is to say, look, if you use AI to impersonate a voice that defrauds someone, [that] would enhance the punishment that you may get," he said.

Obernolte said he's hopeful that Congress will prioritize taking up the Create AI Act, which aims to fully stand up the National Artificial Intelligence Research Resource (NAIRR). The White House in January launched a pilot version of the center, which is set to run for two years.

In the Senate, Schumer has come under fire from some of his colleagues for keeping his series of AI insight forums closed to the public. (In response, he has noted that the chamber has held many public committee hearings on AI over the years.)

In the House, Obernolte and Lieu said they are planning to have both public and private sessions to dig into the many facets of AI.

"We want to have open meetings in a traditional hearing format to make sure that we're being transparent with the public," Obernolte said. "But we're also going to have some closed meetings because it's very important to me that everyone feels comfortable asking questions that could come off as ignorant."

While Schumer's bipartisan AI working group has yet to unveil any proposals or legislative text, he predicted in June that there would be action from the Senate in months, not years.

House leaders, meanwhile, did not launch the task force until nearly a year after Schumer unveiled his plans, prompting concern from some members that the chamber was absent from the debate.

Obernolte and Lieu pushed back on those suggestions.

"We're going to chip away at this over the next several years, and we can do that because there are short-term harms, medium-term harms and long-term harms that need to be mitigated," Obernolte said. "I don't think that that's inconsistent with what the Senate is doing at all."

Their offices have had informal contacts over the last year with the leaders of Schumer's working group, he added, but said they are very aware that "we want to work with them and I think they're very open to working with us." Lieu agreed: "We're just getting started."



Don’t Give Your Business Data to AI Companies – Dark Reading

Posted: at 6:25 am

COMMENTARY

Artificial intelligence (AI) is challenging our preexisting ideas of what's possible with technology. AI's transformative potential could upend a variety of tasks and business scenarios by applying computer vision and large vision models (LVMs) to usher in a new age of efficiency and innovation.

Yet, as businesses embrace the promises of AI, they encounter a common peril: Every AI company seems to have an insatiable appetite for the world's precious data. These companies are eager to train their proprietary AI models using any available images and videos, employing tactics that sometimes involve inconveniencing users, like CAPTCHAs making you identify traffic lights. Unfortunately, this clandestine approach has become the standard playbook for many AI providers, enticing customers to unwittingly surrender their data and intellectual contributions, only to be monetized by these companies.

This isn't an isolated incident confined to a single bad apple in the industry. Even well-known companies such as Dropbox and GitHub have faced accusations. And while Zoom has since shifted its stance on data privacy, such exceptions merely underscore the norm within the industry.

Handing over your business data to AI companies comes with inherent risks. Why should you help train models that may ultimately benefit your competitors? Moreover, in instances where the application of AI could contribute to societal well-being, such as identifying wildfires or enhancing public safety, why should such data be confined to the exclusive benefit of a few tech giants? The potential benefits of freely sharing and collaboratively improving such data should be harnessed by communities worldwide, not sequestered within the vaults of a select few tech corporations.

To address these concerns, transparency is the key. AI companies should be obligated to clearly outline how they intend to use your data and for what specific purposes. This transparency will empower businesses to make informed decisions about the fate of their data and guard against exploitative practices.

In addition, businesses should maintain control over how their data is used. Granting AI companies unrestricted access risks unintended consequences and compromises privacy. Companies must be able to assert their authority in dictating the terms under which their data is used, ensuring alignment with their values and objectives.

Permission should be nonnegotiable. AI companies must seek explicit consent from businesses before utilizing their data. This not only upholds ethical standards but also establishes a foundation of trust between companies and AI providers.

Lastly, businesses aren't just data donors; they are contributors to the development and refinement of AI models. They deserve compensation for the use of their data. A fair and equitable system should be in place, acknowledging the value businesses bring to the further development of AI models.

The responsibility lies with businesses to safeguard their data and interests. A collective demand for transparency, control, permission, and fair compensation can pave the way for an era in which AI benefits businesses and society at large, fostering collaboration and innovation while safeguarding against the pitfalls of unchecked data exploitation.

Don't surrender your business data blindly; demand a future where AI works for you, not the other way around.


NIST, the lab at the center of Biden's AI safety push, is decaying – The Washington Post

Posted: at 6:25 am

At the National Institute of Standards and Technology, the government lab overseeing the most anticipated technology on the planet, black mold has forced some workers out of their offices. Researchers sleep in their labs to protect their work during frequent blackouts. Some employees have to carry hard drives to other buildings; flaky internet won't allow for the sending of large files.

And a leaky roof forces others to break out plastic sheeting.

"If we knew rain was coming, we'd tarp up the microscope," said James Fekete, who served as chief of NIST's applied chemicals and materials division until 2018. "It leaked enough that we were prepared."

NIST is at the heart of President Biden's ambitious plans to oversee a new generation of artificial intelligence models; through an executive order, the agency is tasked with developing tests for security flaws and other harms. But budget constraints have left the 123-year-old lab with a skeletal staff on key tech teams and most facilities on its main Gaithersburg, Md., and Boulder, Colo., campuses below acceptable building standards.

Interviews with more than a dozen current and former NIST employees, Biden administration officials, congressional aides and tech company executives, along with reports commissioned by the government, detail a massive resources gap between NIST and the tech firms it is tasked with evaluating, a discrepancy some say risks undermining the White House's ambitious plans to set guardrails for the burgeoning technology. Many of the people spoke to The Washington Post on the condition of anonymity because they were not authorized to speak to the media.

Even as NIST races to set up the new U.S. AI Safety Institute, the crisis at the degrading lab is becoming more acute. On Sunday, lawmakers released a new spending plan that would cut NIST's overall budget by more than 10 percent, to $1.46 billion. While lawmakers propose to invest $10 million in the new AI institute, that's a fraction of the tens of billions of dollars tech giants like Google and Microsoft are pouring into the race to develop artificial intelligence. It pales in comparison to Britain, which has invested more than $125 million into its AI safety efforts.

The cuts to the agency are "a self-inflicted wound in the global tech race," said Divyansh Kaushik, the associate director for emerging technologies and national security at the Federation of American Scientists.

Some in the AI community worry that underfunding NIST makes it vulnerable to industry influence. Tech companies are chipping in for the expensive computing infrastructure that will allow the institute to examine AI models. Amazon announced that it would donate $5 million in computing credits. Microsoft, a key investor in OpenAI, will provide engineering teams along with computing resources. (Amazon founder Jeff Bezos owns The Post.)

Tech executives, including OpenAI CEO Sam Altman, are regularly in communication with officials at the Commerce Department about the agency's AI work. OpenAI has lobbied NIST on artificial intelligence issues, according to federal disclosures. NIST asked TechNet, an industry trade group whose members include OpenAI, Google and other major tech companies, if its member companies can advise the AI Safety Institute.

NIST is also seeking feedback from academics and civil society groups on its AI work. The agency has a long history of working with a variety of stakeholders to gather input on technologies, Commerce Department spokesman Charlie Andrews said.

AI staff, unlike their more ergonomically challenged colleagues, will be working in well-equipped offices in the Gaithersburg campus, the Commerce Department's D.C. office and the NIST National Cybersecurity Center of Excellence in Rockville, Md., Andrews said.

White House spokeswoman Robyn Patterson said the appointment of Elizabeth Kelly to the helm of the new AI Safety Institute underscores the White House's commitment to getting this work done right and on time. Kelly previously served as special assistant to the president for economic policy.

"The Biden-Harris administration has so far met every single milestone outlined by the president's landmark executive order," Patterson said. "We are confident in our ability to continue to effectively and expeditiously meet the milestones and directives set forth by President Biden to protect Americans from the potential risks of AI systems while catalyzing innovation in AI and beyond."

NIST's financial struggles highlight the limitations of the administration's plan to regulate AI exclusively through the executive branch. Without an act of Congress, there is no new funding for initiatives like the AI Safety Institute, and the programs could be easily overturned by the next president. And as the presidential elections approach, the prospects of Congress moving on AI in 2024 are growing dim.

During his State of the Union address on Thursday, Biden called on Congress to "harness the promise of AI and protect us from its peril."

Congressional aides and former NIST employees say the agency has not been able to break through as a funding priority even as lawmakers increasingly tout its role in addressing technological developments, including AI, chips and quantum computing.

After this article was published, Senate Majority Leader Charles E. Schumer (D-N.Y.) on Thursday touted the $10 million investment in the institute in the proposed budget, saying he fought for this funding to make sure that the development of AI prioritizes both innovation and safety.

A review of NIST's safety practices in August found that the budgetary issues endanger employees, alleging that the agency has an incomplete and superficial approach to safety.

"Chronic underfunding of the NIST facilities and maintenance budget has created unsafe work conditions and further fueled the impression among researchers that safety is not a priority," said the NIST safety commission report, which was commissioned following the 2022 death of an engineering technician at the agency's fire research lab.

NIST is one of the federal government's oldest science agencies with one of the smallest budgets. Initially called the National Bureau of Standards, it began at the dawn of the 20th century, as Congress realized the need to develop more standardized measurements amid the expansion of electricity, the steam engine and railways.

The need for such an agency was underscored three years after its founding, when fires ravaged Baltimore. Firefighters from Washington, Philadelphia and even New York rushed to help put out the flames, but without standard couplings, their hoses couldn't connect to the Baltimore hydrants. The firefighters watched as the flames overtook more than 70 city blocks in 30 hours.

NIST developed a standard fitting, unifying more than 600 different types of hose couplings deployed across the country at the time.

Ever since, the agency has played a critical role in using research and science to help the country learn from catastrophes and prevent new ones. Its work expanded after World War II: It developed an early version of the digital computer, crucial Space Race instruments and atomic clocks, which underpin GPS. In the 1950s and 1960s, the agency moved to new campuses in Boulder and Gaithersburg after its early headquarters in Washington fell into disrepair.

Now, scientists at NIST joke that they work at the most advanced labs in the world, in the 1960s. Former employees describe cutting-edge scientific equipment surrounded by decades-old buildings that make it impossible to control the temperature or humidity needed to conduct critical experiments.

"You see dust everywhere because the windows don't seal," former acting NIST director Kent Rochford said. "You see a bucket catching drips from a leak in the roof. You see Home Depot dehumidifiers or portable AC units all over the place."

The flooding was so bad that Rochford said he once requested money for scuba gear. That request was denied, but he did receive funding for an emergency kit that included squeegees to clean up water.

Pests and wildlife have at times infiltrated its campuses, including an incident where a garter snake entered a Boulder building.

More than 60 percent of NIST facilities do not meet federal standards for acceptable building conditions, according to a February 2023 report commissioned by Congress from the National Academies of Sciences, Engineering and Medicine. The poor conditions impact employee output. Workarounds and do-it-yourself repairs reduce the productivity of research staff by up to 40 percent, according to the committees interviews with employees during a laboratory visit.

Years after Rochford's 2018 departure, NIST employees are still deploying similar MacGyver-style workarounds. Each year between October and March, low humidity in one lab creates a static charge, making it impossible to operate an instrument ensuring companies meet environmental standards for greenhouse gases.

Problems with the HVAC and specialized lights have made the agency unable to meet demand for reference materials, which manufacturers use to check whether their measurements are accurate in products like baby formula.

Facility problems have also delayed critical work on biometrics, including evaluations of facial recognition systems used by the FBI and other law enforcement agencies. The data center in the 1966 building that houses that work receives inadequate cooling, and employees there spend about 30 percent of their time trying to mitigate problems with the lab, according to the academies' reports. Scheduled outages are required to maintain the data centers that hold technology work, knocking all biometric evaluations offline for a month each year.

Fekete, the scientist who recalled covering the microscope, said his team's device never completely stopped working due to rainwater.

But other NIST employees haven't been so lucky. Leaks and floods destroyed an electron microscope worth $2.5 million used for semiconductor research, and permanently damaged an advanced scale called a Kibble balance. The tool was out of commission for nearly five years.

Despite these constraints, NIST has built a reputation as a natural interrogator of swiftly advancing AI systems.

In 2019, the agency released a landmark study confirming facial recognition systems misidentify people of color more often than White people, casting scrutiny on the technologys popularity among law enforcement. Due to personnel constraints, only a handful of people worked on that project.

Four years later, NIST released early guidelines around AI, cementing its reputation as a government leader on the technology. To develop the framework, the agency connected with leaders in industry, civil society and other groups, earning a strong reputation among numerous parties as lawmakers began to grapple with the swiftly evolving technology.

The work made NIST a natural home for the Biden administration's AI red-teaming efforts and the AI Safety Institute, which were formalized in the November executive order. Vice President Harris touted the institute at the U.K. AI Safety Summit in November. More than 200 civil society organizations, academics and companies, including OpenAI and Google, have signed on to participate in a consortium within the institute.

OpenAI spokeswoman Kayla Wood said in a statement that the company supports NIST's work, and that the company plans to continue to work with the lab to "support the development of effective AI oversight measures."

Under the executive order, NIST has a laundry list of initiatives that it needs to complete by this summer, including publishing guidelines for how to red-team AI models and launching an initiative to guide evaluating AI capabilities. In a December speech at the machine learning conference NeurIPS, the agency's chief AI adviser, Elham Tabassi, said this would be an almost impossible deadline.

"It is a hard problem," said Tabassi, who was recently named the chief technology officer of the AI Safety Institute. "We don't know quite how to evaluate AI."

"The NIST staff has worked tirelessly to complete the work it is assigned by the AI executive order," said Andrews, the Commerce spokesperson.

"While the administration has been clear that additional resources will be required to fully address all of the issues posed by AI in the long term, NIST has been effectively carrying out its responsibilities under the [executive order] and is prepared to continue to lead on AI-related research and other work," he said.

Commerce Secretary Gina Raimondo asked Congress to allocate $10 million for the AI Safety Institute during an event at the Atlantic Council in January. The Biden administration also requested more funding for NIST facilities, including $262 million for safety, maintenance and repairs. Congressional appropriators responded by cutting NIST's facilities budget.

The administration's ask falls far below the recommendations of the national academies' study, which urged Congress to provide $300 million to $400 million in additional annual funding over 12 years to overcome a backlog of facilities damage. The report also calls for $120 million to $150 million per year for the same period to stabilize the effects of further deterioration and obsolescence.

Ross B. Corotis, who chaired the academies' committee that produced the facilities report, said Congress needs to ensure that NIST is funded because it is the go-to lab when any new technology emerges, whether that's chips or AI.

"Unless you're going to build a whole new laboratory for some particular issue, you're going to turn first to NIST," Corotis said. "And NIST needs to be ready for that."

Eva Dou and Nitasha Tiku contributed to this report.


Mapping Disease Trajectories from Birth to Death with AI – Neuroscience News

Posted: at 6:25 am

Summary: Researchers mapped disease trajectories from birth to death, analyzing over 44 million hospital stays in Austria to uncover patterns of multimorbidity across different age groups.

Their groundbreaking study identified 1,260 distinct disease trajectories, revealing critical moments where early and personalized prevention could alter a patient's health outcome significantly. For instance, young men with sleep disorders showed two different paths, indicating varying risks for developing metabolic or movement disorders later in life.

These insights provide a powerful tool for healthcare professionals to implement targeted interventions, potentially easing the growing healthcare burden due to an aging population and improving individuals' quality of life.


Source: CSH

The world population is aging at an increasing pace. According to the World Health Organization (WHO), in 2023, one in six people were over 60 years old. By 2050, the number of people over 60 is expected to double to 2.1 billion.

"As age increases, the risk of multiple, often chronic diseases occurring simultaneously, known as multimorbidity, significantly rises," explains Elma Dervic from the Complexity Science Hub (CSH). "Given the demographic shift we are facing, this poses several challenges."

"On one hand, multimorbidity diminishes the quality of life for those affected. On the other hand, this demographic shift creates a massive additional burden for healthcare and social systems."

Identifying typical disease trajectories

"We wanted to find out which typical disease trajectories occur in multimorbid patients from birth to death and which critical moments in their lives significantly shape the further course. This provides clues for very early and personalized prevention strategies," explains Dervic.

Together with researchers from the Medical University of Vienna, Dervic analyzed all hospital stays in Austria between 2003 and 2014, totaling around 44 million. To make sense of this vast amount of data, the team constructed multilayered networks: each ten-year age group forms a layer, and each diagnosis is represented by a node within its layer.

Using this method, the researchers were able to identify correlations between different diseases among different age groups for example, how frequently obesity, hypertension, and diabetes occur together in 20-29-year-olds and which diseases have a higher risk of occurring after them in the 30s, 40s or 50s.
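
As a toy illustration of that construction, the sketch below builds such a multilayer network with networkx. The patient records, diagnosis codes, and layer function are invented placeholders, not the study's actual pipeline.

```python
from collections import defaultdict
from itertools import combinations

import networkx as nx

# Invented toy records: patient -> list of (age at admission, diagnosis).
records = {
    "p1": [(24, "E66 obesity"), (27, "I10 hypertension"), (38, "E11 diabetes")],
    "p2": [(22, "E66 obesity"), (25, "E11 diabetes"), (45, "I10 hypertension")],
    "p3": [(51, "G47 sleep disorder"), (58, "G20 parkinsonism")],
}

def layer(age):
    """Ten-year age group, e.g. 24 -> '20-29'; one group per network layer."""
    lo = (age // 10) * 10
    return f"{lo}-{lo + 9}"

pair_counts = defaultdict(int)
for diagnoses in records.values():
    # One node per (layer, diagnosis) the patient ever received; sorting
    # makes combinations() emit pairs from earlier layers to later ones.
    nodes = sorted({(layer(age), dx) for age, dx in diagnoses})
    for a, b in combinations(nodes, 2):
        pair_counts[(a, b)] += 1

G = nx.DiGraph()
for (a, b), n_patients in pair_counts.items():
    # The study keeps only statistically significant links (p < 0.001,
    # relative risk > 1.5); with three toy patients we keep everything.
    G.add_edge(a, b, weight=n_patients, inter_layer=(a[0] != b[0]))

print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")
# e.g. an inter-layer edge: ('20-29', 'E66 obesity') -> ('30-39', 'E11 diabetes')
```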

The team identified 1,260 different disease trajectories (618 in women and 642 in men) over a 70-year period. "On average, one of these disease trajectories includes nine different diagnoses, highlighting how common multimorbidity actually is," emphasizes Dervic.

Critical moments

In particular, 70 trajectories have been identified where patients exhibited similar diagnoses in their younger years, but later evolved into significantly different clinical profiles.

"If these trajectories, despite similar starting conditions, significantly differ later in life in terms of severity and the corresponding required hospitalizations, this is a critical moment that plays an important role in prevention," says Dervic.

Men with sleep disorders

The model, for instance, shows two typical trajectory paths for men between 20 and 29 years old who suffer from sleep disorders. In trajectory A, metabolic diseases such as diabetes mellitus, obesity, and lipid disorders appear years later. In trajectory B, movement disorders occur, among other conditions.

This suggests that organic sleep disorders could be an early marker for the risk of developing neurodegenerative diseases such as Parkinson's disease.

"If someone suffers from sleep disorders at a young age, that can be a critical event prompting doctors' attention," explains Dervic.

The results of the study show that patients who follow trajectory B spend nine days less in hospital in their 20s but 29 days longer in hospital in their 30s and also suffer from more additional diagnoses. As sleep disorders become more prevalent, the distinction in the course of their illnesses not only matters for those affected but also for the healthcare system.

Women with high blood pressure

Similarly, when adolescent girls between the ages of ten and nineteen have high blood pressure, their trajectory varies as well. While some develop additional metabolic diseases, others experience chronic kidney disease in their twenties, leading to increased mortality at a young age.

This is of particular clinical importance as childhood hypertension is on the rise worldwide and is closely linked to the increasing prevalence of childhood obesity.

There are specific trajectories that deserve special attention and should be monitored closely, according to the authors of the study.

"With these insights derived from real-life data, doctors can monitor various diseases more intensively and implement targeted, personalized preventive measures decades before serious problems arise," explains Dervic.

"By doing so, they are not only reducing the burden on healthcare systems, but also improving patients' quality of life."

Author: Eliza Muto. Source: CSH. Contact: Eliza Muto, CSH. Image: The image is credited to Neuroscience News.

Original Research: Open access. "Unraveling cradle-to-grave disease trajectories from multilayer comorbidity networks" by Elma Dervic et al. npj Digital Medicine

Abstract

Unraveling cradle-to-grave disease trajectories from multilayer comorbidity networks

We aim to comprehensively identify typical life-spanning trajectories and critical events that impact patients' hospital utilization and mortality. We use a unique dataset containing 44 million records of almost all inpatient stays from 2003 to 2014 in Austria to investigate disease trajectories.

We develop a new, multilayer disease network approach to quantitatively analyze how co-occurrences of two or more diagnoses form and evolve over the life course of patients. Nodes represent diagnoses in age groups of ten years; each age group makes up a layer of the comorbidity multilayer network.

Intra-layer links encode a significant correlation between diagnoses (p<0.001, relative risk>1.5), while inter-layer links encode correlations between diagnoses across different age groups. We use an unsupervised clustering algorithm for detecting typical disease trajectories as overlapping clusters in the multilayer comorbidity network.

We identify critical events in a patient's career as points where initially overlapping trajectories start to diverge towards different states. We identified 1260 distinct disease trajectories (618 for females, 642 for males) that on average contain 9 (IQR 26) different diagnoses and cover up to 70 years (mean 23 years).

We found 70 pairs of diverging trajectories that share some diagnoses at younger ages but develop into markedly different groups of diagnoses at older ages. The disease trajectory framework can help us to identify critical events as specific combinations of risk factors that put patients at high risk for different diagnoses decades later.

Our findings enable a data-driven integration of personalized life-course perspectives into clinical decision-making.
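
To make the abstract's link criterion concrete, here is a small worked example of the p<0.001 / relative-risk>1.5 filter applied to one diagnosis pair, with invented counts. scipy's Fisher exact test stands in for whatever significance test the authors actually used, so this illustrates the filtering logic rather than reproducing their pipeline.

```python
from scipy.stats import fisher_exact

# Made-up counts for one age layer: of N patients, how many carry
# diagnosis A, diagnosis B, or both.
N = 100_000
both = 300          # patients with A and B
a_only = 1_700      # A without B
b_only = 3_700      # B without A
neither = N - both - a_only - b_only

# Relative risk: P(B | A) / P(B | not A).
p_b_given_a = both / (both + a_only)
p_b_given_not_a = b_only / (b_only + neither)
rr = p_b_given_a / p_b_given_not_a

# Significance via Fisher's exact test on the 2x2 contingency table.
_, p_value = fisher_exact([[both, a_only], [b_only, neither]])

keep_link = (p_value < 0.001) and (rr > 1.5)
print(f"RR={rr:.2f}, p={p_value:.2e}, link kept: {keep_link}")
```

With these counts, P(B|A) is 0.15 against a baseline of roughly 0.04, giving a relative risk near 4 and a vanishingly small p-value, so the link would be kept.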


SAP enhances Datasphere and SAC for AI-driven transformation – CIO

Posted: at 6:25 am

SAP announced today a host of new AI copilot and AI governance features for SAP Datasphere and SAP Analytics Cloud (SAC). Jurgen Mueller, SAP CTO and executive board member, called the innovations, which include an expanded partnership with data governance specialist Collibra, "a quantum leap" in the company's ability to help customers drive intelligent business transformation through data.

"SAP is executing on a roadmap that brings an important semantic layer to enterprise data, and creates the critical foundation for implementing AI-based use cases," said analyst Robert Parker, SVP of industry, software, and services research at IDC.

SAP unveiled Datasphere a year ago as a comprehensive data service, built on SAP Business Technology Platform (BTP), to provide a unified experience for data integration, data cataloging, semantic modeling, data warehousing, data federation, and data virtualization. At SAP Datasphere's core is the concept of the business data fabric, a data management architecture delivering an integrated, semantically rich data layer over the existing data landscape, and providing seamless and scalable access to data without duplication while retaining business context and logic.

With today's announcements, SAP is building on that vision. The company is expanding its partnership with Collibra to integrate Collibra's AI Governance platform with SAP data assets to facilitate data governance for non-SAP data assets in customer environments.

"We have cataloging inside Datasphere: It allows you to catalog, manage metadata, all the SAP data assets we're seeing," said JG Chirapurath, chief marketing and solutions officer for SAP. "We are also seeing customers bringing in other data assets from other apps or data sources. In this model, it doesn't make sense for us to say our catalog has to understand all of these corpuses or data. Collibra does a fantastic job of understanding it."

The expanded partnership gives customers the ability to use Collibra as a catalog of catalogs, with Datasphere's catalog also managed by the Collibra platform.

