
Category Archives: Artificial Intelligence

How AI-generated deepfakes threaten the 2024 election – Journalist’s Resource

Posted: February 20, 2024 at 6:55 pm


Last month, a robocall impersonating U.S. President Joe Biden went out to New Hampshire voters, advising them not to vote in the state's presidential primary election. The voice, generated by artificial intelligence, sounded quite real.

"Save your vote for the November election," the voice stated, falsely asserting that a vote in the primary would prevent voters from being able to participate in the November general election.

The robocall incident reflects a growing concern that generative AI will make it cheaper and easier to spread misinformation and run disinformation campaigns. The Federal Communications Commission last week issued a ruling to make AI-generated voices in robocalls illegal.

Deepfakes already have affected other elections around the globe. In recent elections in Slovakia, for example, AI-generated audio recordings circulated on Facebook, impersonating a liberal candidate discussing plans to raise alcohol prices and rig the election. During the February 2023 Nigerian elections, an AI-manipulated audio clip falsely implicated a presidential candidate in plans to manipulate ballots. With elections this year in over 50 countries involving half the globe's population, there are fears deepfakes could seriously undermine their integrity.

Media outlets including the BBC and the New York Times sounded the alarm on deepfakes as far back as 2018. However, in past elections, including the 2022 U.S. midterms, the technology did not produce believable fakes and was not accessible enough, in terms of both affordability and ease of use, to be weaponized for political disinformation. Instead, those looking to manipulate media narratives relied on simpler and cheaper ways to spread disinformation, including mislabeling or misrepresenting authentic videos, text-based disinformation campaigns, or just plain old lying on air.

As Henry Ajder, a researcher on AI and synthetic media, writes in a 2022 Atlantic piece, "It's far more effective to use a cruder form of media manipulation, which can be done quickly and by less sophisticated actors, than to release an expensive, hard-to-create deepfake, which actually isn't going to be as good a quality as you had hoped."

As deepfakes continually improve in sophistication and accessibility, they will increasingly contribute to the deluge of informational detritus. They're already convincing. Last month, The New York Times published an online test inviting readers to look at 10 images and try to identify which were real and which were generated by AI, demonstrating first-hand the difficulty of differentiating between real and AI-generated images. "This was supported by multiple academic studies, which found that faces of white people created by AI systems were perceived as more realistic than genuine photographs," New York Times reporter Stuart A. Thompson explained.

The audio clip of the fake robocall that targeted New Hampshire voters is difficult to distinguish from Biden's real voice.

The jury is still out on how generative AI will impact this year's elections. In a December blog post on GatesNotes, Microsoft co-founder Bill Gates estimates we are still 18-24 months away from significant levels of AI use by the general population in high-income countries. In a December post on her website Anchor Change, Katie Harbath, former head of elections policy at Facebook, predicts that although AI will be used in elections, it will not be at the scale yet that everyone imagines.

It may, therefore, not be deepfakes themselves, but the narrative around them that undermines election integrity. AI and deepfakes will be firmly in the public consciousness as we go to the polls this year, with their increased prevalence supercharged by outsized media coverage on the topic. In her blog post, Harbath adds that it's the narrative of what havoc AI could have that will have the bigger impact.

Those engaging in media manipulation can exploit the public perception that deepfakes are everywhere to undermine trust in information. These actors advance false claims and discredit true ones by exploiting the "liar's dividend."

The liar's dividend, a term coined by legal scholars Robert Chesney and Danielle Keats Citron in a 2018 California Law Review article, suggests that as the public becomes more aware of the idea that video and audio can be convincingly faked, some will try to escape accountability for their actions by denouncing authentic audio and video as deepfakes.

Fundamentally, it captures the spirit of political strategist Steve Bannon's strategy to "flood the zone with shit," as he stated in a 2018 meeting with journalist Michael Lewis.

As journalist Sean Illing comments in a 2020 Vox article, this tactic is part of a broader strategy to "create widespread cynicism about the truth and the institutions charged with unearthing it, and, in doing so, erode the very foundation of liberal democracy."

There are already notable examples of the liars dividend in political contexts. In recent elections in Turkey, a video tape surfaced showing compromising images of a candidate. In response, the candidate claimed the video was a deepfake when it was, in fact, real.

In April 2023, an Indian politician claimed that audio recordings of him criticizing members of his party were AI-generated. But a forensic analysis suggested at least one of the recordings was authentic.

Kaylyn Jackson Schiff, Daniel Schiff, and Natalia Bueno, researchers who study the impacts of AI on politics, carry out experiments to understand the effects of the liar's dividend on audiences. In an article forthcoming in the American Political Science Review, they note that in refuting authentic media as fake, bad actors will blame either their political opposition or an uncertain information environment.

Their findings suggest that the liar's dividend becomes more powerful as people become more familiar with deepfakes. In turn, media consumers will be primed to dismiss legitimate campaign messaging. It is therefore imperative that the public be confident in its ability to differentiate between real and manipulated media.

Journalists have a crucial role to play in responsible reporting on AI. Widespread news coverage of the Biden robocall and recent Taylor Swift deepfakes demonstrates that distorted media can be debunked, thanks to the resources of governments, technology professionals, journalists, and, in the case of Swift, an army of superfans.

This reporting should be balanced with a healthy dose of skepticism about the impact of AI on this year's elections. Self-interested technology vendors will be prone to overstate its impact. AI may be a stalking horse for broader dis- and misinformation campaigns exploiting worsening integrity problems on social media platforms.

Lawmakers across states have introduced legislation to combat election-related AI-generated dis- and misinformation. These bills would require disclosure of the use of AI for election-related content in Alaska, Florida, Colorado, Hawaii, South Dakota, Massachusetts, Oklahoma, Nebraska, Indiana, Idaho and Wyoming. Most of the bills would require that information to be disclosed within specific time frames before elections. A bill in Nebraska would ban all deepfakes within 60 days of an election.

However, the introduction of these bills does not necessarily mean they will become law. Furthermore, their enforceability could be challenged on free-speech grounds by positioning AI-generated content as satire. Moreover, penalties would only be imposed after the fact and could be evaded by foreign entities.

Social media companies hold the most influence in limiting the spread of false content, being able to detect and remove it from their platforms. However, the policies of major platforms, including Facebook, YouTube, and TikTok, state that they will only remove manipulated content in cases of egregious harm or if it aims to mislead people about voting processes. This is in line with a general relaxation of moderation standards, including repeals of 17 policies related to hate speech, harassment and misinformation at those three companies in the last year.

Their primary response to AI-generated content will be to label it as such. For Facebook, YouTube and TikTok, this will apply to all AI-generated content, whereas for X (formerly Twitter), these labels will apply only to content identified as misleading media, as noted in recent policy updates.

This puts the onus on users to recognize these labels, which are not yet rolled out and will take time to adjust to. Furthermore, AI-generated content may evade the detection of already overstretched moderation teams and go unremoved and unlabeled, creating a false sense of security for users. Moreover, with the exception of X's policy, these labels do not specify whether a piece of content is harmful, only that it is AI-generated.

A deepfake made purely for comedic purposes would be labeled, but a manually altered video spreading disinformation might not. Recent recommendations from the oversight board of Meta, the company formerly known as Facebook, advise that instead of focusing on how a distorted image, video or audio clip was created, the company's policy should focus on the harm manipulated posts can cause.

The continued emergence of deepfakes is worrying, but they represent a new weapon in the arsenal of disinformation tactics deployed by bad actors rather than a new frontier. The strategies to mitigate the damage they cause are the same as before: developing and enforcing responsible platform design and moderation, underpinned by legal mandates where feasible, coupled with journalists and civil society holding the platforms accountable. These strategies are now more important than ever.

Go here to see the original:

How AI-generated deepfakes threaten the 2024 election - Journalist's Resource

Posted in Artificial Intelligence | Comments Off on How AI-generated deepfakes threaten the 2024 election – Journalist’s Resource

Clarivate Launches Enhanced Search Powered by Generative … – Clarivate

Posted: August 10, 2023 at 7:25 pm

Latest iteration of patent-pending platform enables life sciences and pharmaceutical companies to access insights from billions of proprietary data points

London, U.K. August 9, 2023. Clarivate Plc (NYSE:CLVT), a global leader in connecting people and organizations to intelligence they can trust to transform their world, today launched its new enhanced search platform leveraging generative artificial intelligence (GenAI). GenAI has the potential to yield efficiencies across the entire Life Sciences & Healthcare value chain. The new Clarivate offering enables drug discovery, preclinical, clinical, regulatory affairs and portfolio strategy teams to interact with multiple complex datasets using natural language to obtain immediate and in-depth insights.

Rapid, accurate insights are challenged by a typical paradigm of disparate, siloed data sources. Many standard databases and companies have focused use cases, and the ability to track scientific innovation from start to finish is complex, costly, and inefficient. The new Clarivate enhanced search platform addresses these obstacles by pairing billions of proprietary data points and over 100 years of deep industry and domain expertise with GenAI capabilities. By integrating vast content sets and analytics from solutions, including Cortellis Competitive Intelligence, Disease Landscape & Forecast and Drug Timelines and Success Rates (DTSR) into the new interactive platform, users can access harmonized data featuring precise, concise and immediate answers to the life science industry's most urgent questions.

Researchers can access and interrogate epidemiologic, scientific, clinical, commercial and research data within one platform to overcome barriers to evidence-based decisions and complex analyses. Using advanced GenAI and data science techniques that algorithmically process high-value curated content, users can identify companies developing breakthrough therapies, anticipate medical advancements and understand market dynamics, all essential for bringing new therapies to market. Additional features and functionalities include, among others:

The beta version of the enhanced search platform has been launched with select customers to optimize the platform for use by broader audiences by discovering new use cases, exploring UI / UX capabilities, obtaining and incorporating feedback and previewing new features and functionality. Commercialization is anticipated later this year, with plans to extend the knowledge base by integrating additional datasets from solutions, including: Cortellis Clinical Trials Intelligence, Cortellis Deals Intelligence, OFF-X Safety Intelligence, Cortellis Drug Discovery Intelligence, Cortellis Regulatory Intelligence and others. Clarivate will continue to evolve the platform with technical enhancements, heightened search capabilities, data mapping and AI model updates in the near term.

Henry Levy, President, Life Sciences and Healthcare, Clarivate, said: "There is a growing need for data to support complex analyses and evidence-based decisions in the life sciences. As an early adopter of AI technology, Clarivate utilizes billions of proprietary best-in-class data assets to enable researchers to optimize treatment development from early-stage drug discovery through commercialization. The new Clarivate GenAI enhanced search platform utilizes human expertise, billions of proprietary best-in-class expertly curated and interconnected data assets, and advanced AI models to enhance decision-making, advance research, and boost clinical and commercial success across the entire drug, device and medical technology lifecycle."

As a provider of best-in-class data integration/deidentified patient solutions and a premier end-to-end research intelligence solution, Clarivate is committed to comprehensively supporting customers across the entire drug, device or diagnostic product lifecycle to help them advance human health. Our continuing investment in artificial intelligence (AI) and machine learning (ML) supports the industrys ever-growing need to engage patients, physicians and payers in new ways, navigate barriers to access and adherence, and address patient unmet needs.

To learn more about the Clarivate enhanced search platform, contact: LSHGenAI@clarivate.com.

# # #

About Clarivate

Clarivate is a leading global information services provider. We connect people and organizations to intelligence they can trust to transform their perspective, their work and our world. Our subscription and technology-based solutions are coupled with deep domain expertise and cover the areas of Academia & Government, Life Sciences & Healthcare and Intellectual Property. For more information, please visit http://www.clarivate.com.

Media contact:

Catherine Daniel Director External Communications, Life Sciences & Healthcare newsroom@clarivate.com

See the original post:

Clarivate Launches Enhanced Search Powered by Generative ... - Clarivate


University of North Florida Launches Artificial Intelligence & Machine … – Fagen wasanni

Posted: at 7:25 pm

The University of North Florida (UNF) is offering a six-month bootcamp to teach students the skills needed to master Artificial Intelligence (AI) and Machine Learning or DevOps. With both skill sets in high demand, these bootcamps provide a great opportunity for those interested in learning about this emerging technology.

Partnering with Fullstack Academy, UNF has designed these bootcamp programs to be completed online over a span of 26 weeks. Students will learn the concepts and theoretical information about AI and machine learning, and then have the opportunity to apply those concepts through hands-on training.

The job market for AI and machine learning professionals in the United States is projected to grow by 22% by 2030, according to the U.S. Bureau of Labor Statistics. Additionally, the AI industry has the potential to contribute $15.7 trillion to the global economy by 2035, as reported by PwC. With such promising growth and opportunities, these bootcamps offer a pathway to a high-paying skillset.

In Jacksonville alone, there are currently 190 job openings for Artificial Intelligence Engineer positions, many of which offer remote or hybrid work options, with entry-level positions paying up to $178,000 annually.

The AI and Machine Learning Bootcamp will start on September 11, and the DevOps program will start on August 28. The application deadlines are September 5 and August 22, respectively.

One of the unique aspects of these bootcamp programs is the availability of career success coaches who will assist students with developing their resume, creating LinkedIn profiles, and attending networking events with potential employers. Upon completion of the programs, students will receive a UNF digital credential that can be shared with employers to showcase their certified skills.

The cost of the bootcamp programs is $13,000, but scholarships, loans, and payment plans are available for those in need of financial assistance.

Original post:

University of North Florida Launches Artificial Intelligence & Machine ... - Fagen wasanni


Why Hawaii Should Take The Lead On Regulating Artificial … – Honolulu Civil Beat

Posted: at 7:25 pm

A new state office of AI Safety and Regulation could take a risk-based approach to regulating various AI products.

Not a day passes without a major news headline on the great strides being made on artificial intelligence and warnings from industry insiders, academics and activists about the potentially very serious risks from AI.

A 2023 survey of AI experts found that 36% fear that AI development may result in a nuclear-level catastrophe. Almost 28,000 people have signed on to an open letter written by the Future of Life Institute, including Steve Wozniak, Elon Musk, the CEOs of several AI companies and many other prominent technologists, asking for a six-month pause or a moratorium on new advanced AI development.

As a public policy lawyer and also a researcher in consciousness (I have a part-time position at UC Santa Barbara's META Lab), I share these strong concerns about the rapid development of AI, and I am a co-signer of the Future of Life open letter.

Why are we all so concerned? In short: AI development is going way too fast and it's not being regulated.

The key issue is the profoundly rapid improvement in the new crop of advanced chatbots, or what are technically called large language models such as ChatGPT, Bard, Claude 2, and many others coming down the pike.

The pace of improvement in these AIs is truly impressive. This rapid acceleration promises to soon result in artificial general intelligence, which is defined as AI that is as good as or better than humans at almost anything a human can do.

When AGI arrives, possibly in the near future but possibly in a decade or more, AI will be able to improve itself with no human intervention. It will do this in the same way that, for example, Google's AlphaZero AI learned in 2017 how to play chess better than even the very best human or other AI chess players in just nine hours from when it was first turned on. It achieved this feat by playing itself millions of times over.

In testing, GPT-4 performed better than 90% of human test takers on the Uniform Bar Exam, a standardized test used to certify lawyers for practice in many states. That figure was up from just 10% for the previous GPT-3.5 version, which was trained on a smaller data set. Its testers found similar improvements in dozens of other standardized tests.

Most of these tests are tests of reasoning, not of regurgitated knowledge. Reasoning is perhaps the hallmark of general intelligence, so even today's AIs are showing significant signs of general intelligence.

This pace of change is why AI researcher Geoffrey Hinton, formerly with Google for a number of years, told the New York Times: "Look at how it was five years ago and how it is now. Take the difference and propagate it forwards. That's scary."

In a mid-May Senate hearing on the potential of AI, Sam Altman, the head of OpenAI, called regulation "crucial." But Congress has done almost nothing on AI since then, and the White House recently issued a letter applauding a purely voluntary approach adopted by major AI development companies like Google and OpenAI.

A voluntary approach on regulating AI safety is like asking oil companies to voluntarily ensure their products keep us safe from climate change.

With the AI explosion underway now, and with artificial general intelligence perhaps very close, we may have just one chance to get it right in terms of regulating AI to ensure it's safe.

I'm working with Hawaii state legislators to create a new Office of AI Safety and Regulation because the threat is so immediate that it requires significant and rapid action. Congress is working on AI safety issues, but it seems Congress is simply incapable of acting rapidly enough given the scale of this threat.

The new office would follow the precautionary principle in placing the burden on AI developers to demonstrate that their products are safe before they are allowed to be used in Hawaii. The current approach by regulators is to allow AI companies to simply release their products to the public, where they're being adopted at record speed, with literally no proof of safety.

We cant afford to wait for Congress to act.

The new Hawaii Office of AI Safety and Regulation would then take a risk-based approach to regulating various AI products. This means that the office staff, with public input, would assess the potential dangers of each AI product type and would impose regulations based on the potential risk. Less risky products would be subject to lighter regulation, while riskier AI products would face more burdensome regulation.

My hope is that this approach will help to keep Hawaii safe from the more extreme dangers posed by AI, which another recent open letter, signed by hundreds of AI industry leaders and academics, warned should be considered as dangerous as nuclear war or pandemics.

Hawaii can and should lead the way on a state-level approach to regulating these dangers. We cant afford to wait for Congress to act and it is all but certain that anything Congress adopts will be far too little and too late.


Read more:

Why Hawaii Should Take The Lead On Regulating Artificial ... - Honolulu Civil Beat


WCTC To Offer New Certificates In Artificial Intelligence And Data … – Patch

Posted: at 7:25 pm

PEWAUKEE, Wis. (Thursday, Aug. 10, 2023) Starting in the fall semester, Waukesha County Technical College will add three new information technology certificates, two in artificial intelligence and one in data analytics, and build upon the robust IT offerings available within the School of Business.

These include:

"These certificates really complement our existing Data and Analytics Specialist program," said Alli Jerger, associate dean of Information Technology. "Students in the Data and Analytics program, or those who may be pursuing certificates within that area, will find that they can rapidly move toward an AI certificate."

The College began its initial research for AI programming more than two years ago, Jerger said. With input from business and industry representatives who would need employees with these skillsets in the near future, WCTC began developing the AI certificates in fall of 2022 with the goal of launching them in August 2023.

Because AI is an emerging field, business and industry leaders have also been working to determine how their companies can leverage AI and ensure employment opportunities for graduates, Jerger said.

"Employers told us that they will be looking for people who can help identify the right data to feed into AI tools, and who can help that data and the results that come from AI tell a story," she said. These new certificates are just the beginning of what WCTC plans to offer for AI programming, Jerger said. The College is creating a full Associate of Applied Science degree in AI, which will be launched in fall 2024 (pending approval); credits from the AI certificates have been designed to transfer into that degree program.

Read more from the original source:

WCTC To Offer New Certificates In Artificial Intelligence And Data ... - Patch


Artificial Intelligence related patent filings increased in the … – Pharmaceutical Technology

Posted: at 7:25 pm

Notably, the number of artificial intelligence-related patent applications in the pharmaceutical industry was 70 in Q2 2023, versus 46 in the prior quarter.

Analysis of patenting activity by companies shows that Koninklijke Philips filed the most artificial intelligence patents within the pharmaceutical industry in Q2 2023. The company filed 13 artificial intelligence-related patents in the quarter. It was followed by Japanese Foundation for Cancer Research with 2 artificial intelligence patent filings, Syqe Medical (2 filings), and Hangzhou DAC Biotech (2 filings) in Q2 2023.

The largest share of artificial intelligence related patent filings in the pharmaceutical industry in Q2 2023 was in the US with 50%, followed by China (23%) and Japan (3%). The share represented by the US was 15 percentage points lower than the 65% share it accounted for in Q1 2023.

To further understand GlobalData's analysis on Artificial Intelligence (AI) in Drug Discovery Market - Thematic Research buy the report here.

This content was updated on 2 August 2023

Get industry leading news, data and analysis delivered to your inbox

GlobalData, the leading provider of industry intelligence, provided the underlying data, research, and analysis used to produce this article.

GlobalData's Patent Analytics tracks patent filings and grants from official offices around the world. Textual analysis and official patent classifications are used to group patents into key thematic areas and link them to specific companies across the world's largest industries.

See original here:

Artificial Intelligence related patent filings increased in the ... - Pharmaceutical Technology


The Great 8-bit Debate of Artificial Intelligence – HPCwire

Posted: at 7:25 pm

Editor's Note: Users often ask, "What separates HPC from AI? They both do a lot of number crunching." While this observation is true, one big difference is the precision required for a valid answer. HPC often requires the highest possible precision (i.e., 64-bit double-precision floating point), while many AI applications actually work with 8-bit integers or floating point numbers. Using less precision often allows faster CPU/GPU mathematics and a good-enough result for many AI applications. The following article explains the trend toward lower-precision computing in AI.

A grand competition of numerical representation is shaping up as some companies promote floating point data types in deep learning, while others champion integer data types.

Artificial intelligence (AI) is proliferating into every corner of our lives. The demand for products and services powered by AI algorithms has skyrocketed alongside the popularity of large language models (LLMs) like ChatGPT, and image generation models like Stable Diffusion. With this increase in popularity, however, comes an increase in scrutiny over the computational and environmental costs of AI, and particularly the subfield of deep learning.

The primary factors influencing the costs of deep learning are the size and structure of the deep learning model, the processor it is running on, and the numerical representation of the data. State-of-the-art models have been growing in size for years now, with the compute requirements doubling every 6-10 months [1] for the last decade. Processor compute power has increased as well, but not nearly fast enough to keep up with the growing costs of the latest AI models. This has led researchers to delve deeper into numerical representation in attempts to reduce the cost of AI. Choosing the right numerical representation, or data type, has incredible implications on the power consumption, accuracy, and throughput of a given model. There is, however, no singular answer to which data type is best for AI. Data type requirements vary between the two distinct phases of deep learning: the initial training phase and the subsequent inference phase.

When it comes to increasing AI efficiency, the method of first resort is quantization of the data type. Quantization reduces the number of bits required to represent the weights of a network. Reducing the number of bits not only makes the model smaller, but reduces the total computation time, and thus reduces the power required to do the computations. This is an essential technique for those pursuing efficient AI.
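The arithmetic behind quantization is simple enough to sketch. The following Python snippet is an illustrative example only (the function names are hypothetical, and real toolchains are more sophisticated): it maps FP32 weights onto a symmetric INT8 grid and back, showing both the 4x memory saving and the bounded rounding error.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor quantization: map the largest weight
    magnitude to 127 and round everything onto the INT8 grid."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate FP32 values from the INT8 codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=1000).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

print("bytes: %d -> %d" % (w.nbytes, q.nbytes))        # 4000 -> 1000
print("worst-case error:", np.max(np.abs(w - w_hat)))  # at most scale / 2
```

Each weight now occupies one byte instead of four, and the reconstruction error is bounded by half the quantization step. This also hints at why tensors with large outliers quantize poorly: a single outlier stretches the scale, and with it the rounding error, for every other value.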

AI models are typically trained using single precision 32-bit floating point (FP32) data types. It was found, however, that all 32 bits aren't always needed to maintain accuracy. Attempts at training models using half precision 16-bit floating point (FP16) data types showed early success, and the race to find the minimum number of bits that maintains accuracy was on. Google came out with their 16-bit brain float (BF16), and models being primed for inference were often quantized to 8-bit floating point (FP8) and integer (INT8) data types. There are two primary approaches to quantizing a neural network: Post-Training Quantization (PTQ) and Quantization-Aware Training (QAT). Both methods aim to reduce the numerical precision of the model to improve computational efficiency, memory footprint, and energy consumption, but they differ in how and when the quantization is applied, and the resulting accuracy.

Post-Training Quantization (PTQ) occurs after training a model with higher-precision representations (e.g., FP32 or FP16). It converts the model's weights and activations to lower-precision formats (e.g., FP8 or INT8). Although simple to implement, PTQ can result in significant accuracy loss, particularly in low-precision formats, as the model isn't trained to handle quantization errors. Quantization-Aware Training (QAT) incorporates quantization during training, allowing the model to adapt to reduced numerical precision. Forward and backward passes simulate quantized operations, computing gradients concerning quantized weights and activations. Although QAT generally yields better model accuracy than PTQ, it requires training process modifications and can be more complex to implement.
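The "simulated quantized operations" in QAT are commonly implemented as fake quantization: the forward pass rounds values onto the low-precision grid but keeps them in floating point, so the network trains in the presence of quantization error. A minimal sketch of that idea (illustrative only; real frameworks also learn the scale and route gradients through the rounding with a straight-through estimator):

```python
import numpy as np

def fake_quantize(x, num_bits=8):
    """Round x onto a symmetric (2^num_bits - 1)-level grid, but return
    floats: downstream layers see the quantization error during training."""
    qmax = 2 ** (num_bits - 1) - 1          # e.g. 127 for 8 bits
    scale = np.max(np.abs(x)) / qmax        # per-tensor scale
    return np.round(x / scale) * scale      # snap to grid, stay in float

x = np.linspace(-1.0, 1.0, 9)
print(fake_quantize(x, num_bits=3))  # only a handful of distinct levels survive
```

Because the output is still floating point, this drop-in replacement can wrap any layer's weights or activations during training, which is exactly what lets the model adapt before it is converted for low-precision inference.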

The AI industry has begun coalescing around two preferred candidates for quantized data types: INT8 and FP8. Every hardware vendor seems to have taken a side. In mid-2022, a paper by Graphcore and AMD[2] floated the idea of an IEEE-standard FP8 data type. A subsequent joint paper with a similar proposal from Intel, Nvidia, and Arm[3] followed shortly. Other AI hardware vendors like Qualcomm[4, 5] and Untether AI[6] also wrote papers promoting FP8 and reviewing its merits versus INT8. But the debate is far from settled. While there is no singular answer for which data type is best for AI in general, there are superior and inferior data types when it comes to various AI processors and model architectures with specific performance and accuracy requirements.

Floating point and integer data types are two ways to represent and store numerical values in computer memory. There are a few key differences between the two formats that translate to advantages and disadvantages for various neural networks in training and inference.

The differences all stem from their representation. Floating point data types represent real numbers, which include both integers and fractions. These numbers are stored in a form of scientific notation, with a significand (mantissa) scaled by a base raised to an exponent.

Integer data types, on the other hand, represent whole numbers (without fractions). These representations result in a very large difference in precision and dynamic range. Floating point numbers have a wider dynamic range than their integer counterparts, while integer numbers have a smaller range and can only represent whole numbers at a fixed level of precision.
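NumPy's type-info queries make the gap concrete; here FP16 stands in for the floating-point side of the comparison:

```python
import numpy as np

fp16 = np.finfo(np.float16)   # half-precision float
int8 = np.iinfo(np.int8)      # 8-bit signed integer

# FP16 spans magnitudes from ~6.1e-5 (smallest normal, with subnormals
# below that) up to 65504, with finer spacing near zero.
fp16_max, fp16_smallest_normal = float(fp16.max), float(fp16.tiny)

# INT8 is just the 256 evenly spaced integers -128..127.
int8_min, int8_max = int(int8.min), int(int8.max)
```

The float format buys its roughly nine-decade dynamic range by letting the spacing between representable values grow with magnitude; the integer format keeps a fixed step of 1 everywhere but covers only a tiny range.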

In deep learning, the numerical representation requirements differ between the training and inference phases due to the unique computational demands and priorities of each stage. During training, the primary focus is on updating the model's parameters through iterative optimization, which typically necessitates a higher dynamic range to ensure accurate propagation of gradients and convergence of the learning process. Consequently, floating-point representations such as FP32, FP16, and lately even FP8 are employed during training to maintain sufficient dynamic range. The inference phase, on the other hand, is concerned with efficient evaluation of the trained model on new input data, where the priority shifts toward minimizing computational complexity, memory footprint, and energy consumption. In this context, lower-precision representations such as 8-bit integer (INT8) become an option alongside FP8. The ultimate decision depends on the specific model and the underlying hardware.

The best data type for inference will vary depending on the application and the target hardware. Real-time and mobile inference services tend to use the smaller 8-bit data types to reduce memory footprint, compute time, and energy consumption while maintaining enough accuracy.

FP8 is growing increasingly popular, as every major hardware vendor and cloud service provider has addressed its use in deep learning. There are three primary flavors of FP8, defined by the split between exponent and mantissa bits. More exponent bits increase the dynamic range of a data type, so FP8 E3M4, consisting of 1 sign bit, 3 exponent bits, and 4 mantissa bits, has the smallest dynamic range of the bunch. This representation sacrifices range for precision: the extra bits reserved for the mantissa increase accuracy. FP8 E4M3 has one more exponent bit, and thus a greater range. FP8 E5M2 has the highest dynamic range of the trio, making it the preferred target for training, which requires the greatest dynamic range. Having a collection of FP8 representations allows a tradeoff between dynamic range and precision, as some inference applications benefit from the increased accuracy offered by an extra mantissa bit.
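The range/precision tradeoff can be computed directly from the bit counts. This sketch assumes strict IEEE-style conventions, where the all-ones exponent is reserved for infinities and NaNs; note that the widely deployed E4M3 variant proposed by Nvidia, Intel, and Arm[3] reclaims most of that top exponent and reaches 448 instead:

```python
def max_normal(exp_bits: int, man_bits: int) -> float:
    """Largest finite value of an IEEE-style binary float in which the
    all-ones exponent encoding is reserved for infinities and NaNs."""
    emax = 2 ** (exp_bits - 1) - 1            # exponent of the top normal binade
    return (2 - 2 ** -man_bits) * 2 ** emax   # all-ones mantissa in that binade

e3m4 = max_normal(3, 4)   # 15.5    -- smallest range, most precision
e4m3 = max_normal(4, 3)   # 240.0   -- middle ground
e5m2 = max_normal(5, 2)   # 57344.0 -- widest range, least precision
```

Each added exponent bit roughly squares the representable range, which is why E5M2's range dwarfs E3M4's despite both spending the same 8 bits in total.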

INT8, by contrast, effectively spends 1 bit on sign and all 7 remaining bits on the mantissa, with none on an exponent. It sacrifices nearly all of its dynamic range for precision. Whether this translates into better accuracy than FP8 depends on the AI model in question, and whether it translates into better power efficiency depends on the underlying hardware. Research from Untether AI[6] shows that FP8 outperforms INT8 in accuracy and, on their hardware, in performance and efficiency as well. Qualcomm's research[5], by contrast, found that the accuracy gains of FP8 are not worth the loss of efficiency compared to INT8 on their hardware. Ultimately, the choice of data type when quantizing for inference often comes down to what the hardware best supports, as well as to the model itself.

References

[1] Compute Trends Across Three Eras of Machine Learning, https://arxiv.org/pdf/2202.05924.pdf

[2] 8-bit Numerical Formats for Deep Neural Networks, https://arxiv.org/abs/2206.02915

[3] FP8 Formats for Deep Learning, https://arxiv.org/abs/2209.05433

[4] FP8 Quantization: The Power of the Exponent, https://arxiv.org/pdf/2208.09225.pdf

[5] FP8 versus INT8 for Efficient Deep Learning Inference, https://arxiv.org/abs/2303.17951

[6] FP8: Efficient AI Inference Using Custom 8-bit Floating Point Data Types, https://www.untether.ai/content-request-form-fp8-whitepaper

About the Author

Waleed Atallah is a Product Manager responsible for silicon, boards, and systems at Untether AI. Currently, he is rolling out Untether AI's second-generation silicon product, the speedAI family of devices. He was previously a Product Manager at Intel, where he was responsible for high-end FPGAs with high-bandwidth memory. His interests span all things compute efficiency, particularly the mapping of software to new hardware architectures. He received a B.S. degree in Electrical Engineering from UCLA.


The Great 8-bit Debate of Artificial Intelligence - HPCwire


The Role of Artificial Intelligence and Machine Learning in … – Fagen wasanni

Posted: at 7:25 pm

Exploring the Impact of Artificial Intelligence and Machine Learning on Global Supply Chain Optimization

The role of artificial intelligence (AI) and machine learning (ML) in optimizing global supply chains is becoming increasingly significant. As the world becomes more interconnected, the complexity of supply chains grows, making the need for efficient and effective management systems more critical than ever. AI and ML are emerging as powerful tools in this arena, offering transformative solutions that can streamline operations, reduce costs, and improve overall performance.

AI and ML are subsets of computer science that focus on the creation of smart machines capable of learning from experiences and performing tasks that would typically require human intelligence. In the context of supply chain management, these technologies can be leveraged to analyze vast amounts of data, identify patterns, and make predictions, thereby enabling businesses to make more informed decisions.

One of the key areas where AI and ML are making a significant impact is in demand forecasting. Accurate demand forecasting is crucial for supply chain optimization as it helps businesses anticipate customer needs, manage inventory levels, and plan production schedules. Traditional methods of demand forecasting often rely on historical data and can be time-consuming and prone to errors. However, with AI and ML, businesses can analyze a broader range of data, including market trends, consumer behavior, and external factors like weather patterns, to make more accurate predictions.

Another area where AI and ML are proving beneficial is in logistics and transportation. These technologies can be used to optimize routes, reduce fuel consumption, and improve delivery times. For instance, AI algorithms can analyze traffic patterns and suggest the most efficient routes, while machine learning models can predict potential delays due to weather conditions or other disruptions. This not only improves operational efficiency but also enhances customer satisfaction by ensuring timely deliveries.

AI and ML also play a crucial role in risk management. Supply chains are often vulnerable to various risks, including supplier failures, logistical issues, and market fluctuations. By analyzing historical data and current market conditions, AI and ML can predict potential risks and suggest mitigation strategies. This proactive approach to risk management can save businesses significant time and resources.

Moreover, AI and ML can enhance transparency and traceability in supply chains. With the increasing demand for ethical and sustainable practices, businesses are under pressure to provide visibility into their supply chains. AI and ML can help track products from source to consumer, providing real-time information about the product's journey. This not only helps businesses comply with regulations but also builds trust with consumers.

In conclusion, the role of AI and ML in optimizing global supply chains is multifaceted and far-reaching. These technologies offer innovative solutions to complex problems, helping businesses improve efficiency, reduce costs, and stay competitive in today's fast-paced market. However, the successful implementation of AI and ML requires a strategic approach, including investment in the right technology, training of personnel, and a culture of continuous learning and adaptation. As we move forward, it is clear that AI and ML will continue to play a pivotal role in shaping the future of global supply chain management.




Boston Struggles to Lead in Generative AI as Landscape of Artificial … – Fagen wasanni

Posted: at 7:25 pm

Jeremy Wertheimer, founder of ITA Software, an AI company, discusses the changing landscape of artificial intelligence (AI) and Boston's place in it. ITA Software, which enabled users to compare airline flights and prices online, was acquired by Google for $700 million in 2010. Wertheimer explains that the initial AI program behind ITA, which was about 2 million lines of code, could now be achieved with just 100 or even 10 lines of code due to advances in modern computers.

The shift in AI program design, where simplicity and large amounts of data and computing power are prioritized, has led to a rise in generative AI applications like ChatGPT. However, Boston has not been at the forefront of this industry. According to market tracker CB Insights, out of the 204 recent startups backed by venture capital using generative AI technology, only five are located in Massachusetts. California leads with 112 startups, followed by New York with 37 and Texas with eight. Additionally, none of the 13 unicorn startups worth $1 billion or more are located in Massachusetts; seven are in California.

Wertheimer's background at MIT's Artificial Intelligence Lab in the 1980s reflects Boston's historical focus on programming complex AI systems using languages like LISP. Wertheimer's AI system for finding answers to specific biology research questions in vast amounts of published research was influenced by his time at the lab. While Boston remained invested in this approach until recently, researchers in other regions, particularly New York and Canada, made significant progress in improving machine learning for AI.

ITA's software, powered by a junction tree algorithm, revolutionized the airline industry by enabling more complicated and comprehensive flight searches compared to the older mainframe-based SABRE system. After joining Google in 2010, Wertheimer oversaw the travel division, expanding Google's presence in Cambridge and witnessing the company's rapid growth. Wertheimer now focuses on machine learning for developing new drugs through his latest startup, entering a competitive space in the pharmaceutical industry.

As the landscape of AI continues to evolve, Boston faces challenges in leading the generative AI sector. The city's historical focus on complex AI programming and the rise of machine learning in other regions have contributed to this disparity. However, Wertheimer's new startup aims to leverage the strong life sciences ecosystem in the Boston area and compete with other startups and pharmaceutical companies using AI applications.




Cleveland City Schools adds artificial intelligence technology at high … – Cleveland Daily Banner

Posted: at 7:24 pm

Cleveland City Schools on Wednesday morning, Aug. 9, announced a new initiative that will transform the learning landscape at Cleveland High and Cleveland Middle schools.

According to a press release, starting from the 2023-2024 academic year, the Career and Technical Education (CTE) program will provide students with hands-on experience in artificial intelligence.

This program has been made possible through a state grant and includes the integration of advanced robots from the company RobotLab, the press release stated.

"Artificial Intelligence is changing the landscape of information technology across the globe. We must harness this technology and teach students how to use it effectively while maintaining the integrity of their work," Director of Schools Russell Dyer said. "I look forward to seeing how this new program at Cleveland High School responsibly embraces the power of AI technology."

"Under the leadership of Cleveland High and Cleveland Middle schools, our administrators are committed to providing an educational environment where students can excel and thrive in the rapidly evolving world," the press release stated.

CTE Supervisor Renny Whittenbarger emphasized the importance of preparing students to work with AI rather than against it.

"AI technology is here to stay, and it is reshaping various industries," Whittenbarger said. "It is our responsibility to prepare our students with the tools they need to adapt to this dynamic technological landscape. By introducing AI technology through the use of these robots, we aim to empower our students to work alongside AI as valuable assets and become future leaders in the tech-driven world."

CHS Principal Bob Pritchard said, "Embracing artificial intelligence is essential in preparing our students for the future AI job market. It will empower them with invaluable skills and knowledge to excel in their careers and tackle the challenges and opportunities presented by AI-driven industries."

CMS Principal Nat Akiona also noted, "We are thrilled to have AI technology introduced into our middle school curriculum at Cleveland Middle School. Providing our students with the opportunity to engage with such advanced technology at their age is truly exciting."

The grant has enabled the acquisition of a pair of 4-foot robots and six 2-foot robots from the company RobotLab.

"These state-of-the-art robots will provide students with an immersive learning experience, giving them the opportunity to explore AI technology in a practical and interactive manner," the press release stated. "By instilling skills in critical thinking, problem-solving, and AI applications, Cleveland City Schools is taking a significant step towards creating a future-ready generation of graduates."



