Daily Archives: February 24, 2024

The Canadian Vaping Association calls on the federal government to maintain science-based vaping regulations – GlobeNewswire

Posted: February 24, 2024 at 12:05 pm

BEAMSVILLE, Ontario, Feb. 23, 2024 (GLOBE NEWSWIRE) -- The Canadian Vaping Association (CVA) is dedicated to promoting tobacco harm reduction (THR) strategies for adults while endorsing policies that safeguard youth from nicotine addiction and exposure. Global experts, including some who have testified in court, argue that certain measures proposed by health organizations, like flavour bans and high taxes, actually hinder harm reduction efforts and fail to reduce vaping experimentation among young people.

Contrary to many claims, a blanket ban on flavoured vaping products is a harmful approach to public health. Research finds that flavours play a crucial role in the adoption of vaping by adult smokers and that using a flavoured product to quit smoking significantly increases the likelihood of a successful quit attempt. Moreover, mounting evidence suggests that banning flavours leads to an increase in smoking among both adults and youth. Rather than imposing flavour bans, the CVA supports, based on the strongest evidence, the enforcement of regulations that protect young people while also promoting harm reduction for adults. This includes strict age-verification processes, extensive youth prevention initiatives, and rigorous enforcement of existing laws that already ban the sale and marketing of vaping products to minors.

Canadian Tobacco and Nicotine Survey (CTNS) data is clear: while restrictive policies like flavour bans and taxation may appear to reduce youth usage in hypothetical surveys, they have yet to prove effective in real-world application. On average, provinces that have implemented flavour bans exhibit the highest rates of youth usage. Conversely, provinces like Ontario and Alberta with balanced regulation have the lowest rates of youth vaping in Canada.

This is likely because flavours have not been found to be a primary driver of youth experimentation. Though young people may prefer flavours, as do adults, the leading reason those aged 15-19 gave for vaping, according to the 2021 CTNS, was to reduce stress. Youth also reported vaping out of enjoyment, curiosity, and other reasons.

The prevalence of stress relief through vaping among youth is a recurring theme in youth usage surveys. Most commonly, young people cite depression, anxiety, or mental health as the primary reasons for experimenting with vaping. Acknowledging this is crucial because proponents of flavour bans and other restrictive measures frequently overlook this data, opting instead for simplistic and ineffective regulations that fail to address the root cause of the issue.

Additionally, health organizations have come together to propose nicotine pouches be restricted to prescription-only access, overlooking the tangible benefits of these products in harm reduction strategies. Rather than restricting access, the CVA supports measures that ensure responsible marketing and appropriate age restrictions, in line with established tobacco control principles. These policies have been found to achieve the lowest rates of youth experimentation while supporting adults who smoke in their transition to a far less harmful product.

Before adopting any further NGO policy recommendations, it's essential to review the outcomes of such policies. The results from provinces that have enacted flavour bans clearly show a discrepancy between the intended policy goals and the actual outcomes.

"If Canada is to achieve its goal of being smoke-free by 2035, adults who smoke need to be aware of all quit options. Flavour bans weaken the efficacy of these products and slow our progress in achieving a smoke-free society. The CVA calls on Minister Holland to convene a roundtable of leading experts, akin to the Cannabis review, to ensure future regulations are grounded in scientific evidence," said Darryl Tempest, Government Relations Counsel to the CVA Board.

The CVA urges policymakers to consider evidence-based approaches that prioritize both youth protection and adult harm reduction, rather than resorting to reactionary measures that hinder Canada's goal of becoming smoke-free by 2035.

About the CVA: The Canadian Vaping Association (CVA) is a registered national, not-for-profit organization, established as the voice for the Canadian vaping industry. The CVA represents over 200 vaping businesses in Canada, and receives no funding from tobacco companies or affiliates. The primary goal of CVA is to ensure that government regulation is reasonable and practical, through the strategy of proactive communication.

Darryl Tempest Government Relations Counsel to the CVA Board dtempest@thecva.org 647-274-1867


New study adds more smoke to the vaping debate – Cosmos

Posted: at 12:04 pm

By Dr Joe Milton, the Australian Science Media Centre

Smokers undergoing counselling to quit smoking are more likely to succeed if nicotine vapes are part of the strategy, according to an international study published in the New England Journal of Medicine this week.

As Australia gears up to make vaping prescription-only from March, fierce debate has raged over whether vapes are a menace creating a new generation of nicotine addicts, or a lifesaver for smokers who are trying to quit.

The government is hoping the changes to the law will make it harder for kids to get hold of vapes, while they'll remain available as a cessation aid for adult smokers via their doctors.

The new study suggests vapes may have an important role to play in helping people get off the smokes.

The researchers recruited 1,246 smokers, 622 of whom received counselling along with free e-cigarettes and e-liquids. The other 624 underwent counselling but were given a voucher to spend on anything they liked, instead of the free e-cigarettes.

Six months on, around three in five smokers in the vaping group had stayed off the smokes in the week before their check-up, compared to around two in five among the other group.

Dr Colin Mendelsohn, a retired academic, researcher, and smoking cessation clinician, says the study was large and well-conducted and that the results support the use of vaping nicotine as an effective quit-smoking aid.

"After six months, 28.9% of smokers in the intervention group were continuously abstinent from the quit date compared to 16.3% in the control group," he said. "And, perhaps surprisingly, respiratory symptoms improved in the intervention group to a larger extent than for subjects in the control [non-vaping] group," he added.

So far, so promising, but the study did not look at how vaping compared to other available smoking cessation methods, including nicotine replacement therapy, said Associate Professor Michelle Jongenelis from The University of Melbourne.

And vapes should not be considered completely harmless as e-liquids can contain potentially damaging chemicals, she added.

(For a wider discussion and some dissenting views, listen to Cosmos' new podcast series Debunks: Vaping, below.)

When it came to who had kicked the nicotine habit altogether, the news was also not so good for the vapers. Only around one in five people in the vaping group had given up all nicotine products completely, compared to one in three for the other group, suggesting many of those who gave up smoking tobacco continued using e-cigarettes.

"It is critical that those who use e-cigarettes to quit smoking are then supported to quit the use of e-cigarettes; ongoing use is not recommended," said Associate Professor Jongenelis.

But Dr Mendelsohn said the study suggests vaping nicotine is an effective quitting aid with a good safety profile.

"Australian doctors should feel more confident in prescribing vaping products for their smoking patients, especially those unable to quit with other methods."

You can read the EXPERT REACTION here


Disposable vapes to be banned in Scotland under new legislation – Yahoo News UK

Posted: at 12:04 pm

disposable vape (Image: PA)

Disposable vapes will be banned next year under new legislation proposed today.

The Scottish Government has put forward draft legislation to bring in the ban on the sale and supply of single-use vapes from April 2025.

The ban was recommended following a UK-wide consultation, Creating a Smokefree Generation and Tackling Youth Vaping, last year.


It needs separate legislation in each of the four UK nations.

The draft legislation is open for consultation until March 8.

Lorna Slater, Circular Economy Minister, said: "Legislating to ban the sale and supply of single-use vapes fulfils a Programme for Government commitment to reduce vaping among non-smokers and young people and take action to tackle their environmental impact.

"The public consultation demonstrated that there is strong support for tougher action on vaping. From causing fires in waste facilities to more than 26 million disposable vapes being consumed and thrown away in Scotland in the past year, single-use vapes are a threat to our environment as well as to our public health.

"These proposed changes to the law demonstrate our absolute commitment to further improve the wellbeing of communities and protecting our beautiful natural environment."


Iowa Is One Of The States Working Hardest To Quit Vaping – B100

Posted: at 12:04 pm

Vaping is a dangerous habit and Iowans really want to quit.

Vaping is a super popular way for young people nowadays to "smoke." The National Institute on Drug Abuse defines vapes (or e-cigarettes) as battery-operated devices that deliver aerosols that often contain nicotine. I've seen vapes that look more like flash drives or pens, but they can also look like normal cigarettes, cigars, or pipes. One of the more dangerous aspects of vapes, especially for teens, is that they are often flavored with sweet tastes, making them more appealing.

But according to the Des Moines Register, many Iowans are trying to quit vaping.

It's a good goal to have.

Statistics show that nationally, over 25% of high school seniors said they have vaped, and in Iowa, vape use more than doubled between 2016 and 2018, with 22.4% of high school juniors reporting vaping. And it's only getting worse: in a 2019 study, a whopping 50% of students at a large Iowa City high school said they'd vaped.

As for adults, almost 7% say that they've used and are still using e-cigs.

The good news is that Iowa is really trying to quit, according to what we're Googling.

SnusBoss, a company that sells nicotine pouches, conducted a study that found Iowa ranks 19th in the nation for trying to quit vaping. We're looking up 'quit vaping', 'stop vaping', 'popcorn lung' (a lung disease that can come from vaping), and 'vaping side effects'.

If you're trying to quit vaping, there are several resources you can reach out to, including Quitline Iowa, where you can work with a personal coach who will support you as you quit tobacco.


SGF advises caution on illicit trade after disposable vape ban – Talking Retail

Posted: at 12:04 pm

The trade association for Scottish convenience stores also warned that a ban will make it more difficult for people who wish to quit smoking to access alternative nicotine products, potentially encouraging some people to revert to smoking tobacco.

SGF chief executive Pete Cheema said: "NHS England has made it clear that nicotine vaping products are one of the most successful cessation aids available. At the moment, they are legally accessible and affordable for adults who wish to quit smoking, but that won't be the case after 1 April next year.

"SGF wants to see tighter regulation of these products. They should not be targeted at younger people and should only be sold by legitimate traders who take their responsibilities seriously.

"Those found in breach of the rules should feel the full force of the law.

"However, there is already a significant illicit market for disposable vapes in the UK, including potentially unsafe products. That will only get worse after a ban.

"The Scottish government, and the UK government, need to be clear about how they intend to tackle these problems, which are undoubtedly now on the horizon.

"The draft regulations do not make it clear how they intend to solve the problem of increasing illicit trade, and that needs to be a priority.

"Likewise, it is critical they do not over-regulate flavouring, which is proven to be the key driver for smokers switching if they wish to."

This morning, the Scottish government announced plans to ban the sale of disposable vapes in Scotland, to be implemented from 1 April 2025.

This will form part of a UK-wide ban on the product that will likely come into force across the UK on a similar timetable.

The Association of Convenience Stores (ACS) has also voiced its concerns that the ban on disposables will fuel the illegal trade.


Google DeepMind C.E.O. Demis Hassabis on the Path From Chatbots to A.G.I. – The New York Times

Posted: at 12:01 pm

Listen and follow Hard Fork: Apple | Spotify | Amazon | YouTube

This week's episode is a conversation with Demis Hassabis, the head of Google's artificial intelligence division. We talk about Google's latest A.I. models, Gemini and Gemma; the existential risks of artificial intelligence; his timelines for artificial general intelligence; and what he thinks the world will look like post-A.G.I.

Additional listening and reading:

Hard Fork is hosted by Kevin Roose and Casey Newton and produced by Davis Land and Rachel Cohn. The show is edited by Jen Poyant. Engineering by Chris Wood and original music by Dan Powell, Marion Lozano and Pat McCusker. Fact-checking by Caitlin Love.

Special thanks to Paula Szuchman, Pui-Wing Tam, Nell Gallogly, Kate LoPresti and Jeffrey Miranda.


Bill Foster, a particle physicist-turned-congressman, on why he’s worried about artificial general intelligence – FedScoop

Posted: at 12:01 pm

Congress is just starting to ramp up its efforts to regulate artificial intelligence, but one member says he first encountered the technology in the 1990s, when he used neural networks to study physics. Now, Rep. Bill Foster, D-Ill., is returning to AI as a member of the new bipartisan task force on artificial intelligence, led by Reps. Ted Lieu, D-Calif., and Jay Obernolte, R-Calif., which was announced by House leadership earlier this week.

In a chat with FedScoop, the congressman outlined his concerns with artificial intelligence. The threat of deepfakes, he warned, can't necessarily be solved with detection and may require some kind of digital authentication platform. At the same time, Foster said he's also worried that the setup of committees and the varying levels of expertise within Congress aren't well situated to deal with the technology.

"There are many members of Congress who understand about finance and banking and can push back on technical statements about financial services that might not be true," he told FedScoop. "It's much harder for the average member of Congress to push back on claims about AI. That's the difference. We're not as well defended against statements that may or may not be factual from lobbying organizations."

Compared to some other members of Congress, Foster appears particularly concerned about artificial general intelligence, a theoretical form of AI that, some argue, could end up rivaling human abilities. This technology doesn't exist yet, but some executives, including OpenAI CEO Sam Altman, have warned that this type of AI could raise massive safety issues. In particular, Foster argues that there will be a survival advantage to algorithmic systems that are opaque and deceptive.

(Critics, meanwhile, argue that discussion of AGI has distracted from opportunities to address the risks of AI systems that already exist today, like bias issues raised by facial recognition software.)

Foster's comments come in the nascent days of the AI task force, but help elucidate how varied perspectives on artificial intelligence are, even within the Democratic party. Unlike other areas, the technology is still relatively new to Congress, and positions on how to rein in AI, and potential partisan divides, are only still forming.

Editor's note: The transcript has been edited for clarity and length.

FedScoop: With this new AI task force, to what extent do you think you're going to be focusing on chips and focusing on hardware, given both the recent chips legislation and OpenAI's Sam Altman's calls for more focus on chip infrastructure, too?

Rep. Bill Foster: It's an interesting tradeoff. I doubt that this committee is going to be in a position to micromanage the [integrated circuit] industry. I first met Sam Altman about six years ago when I visited OpenAI [to talk about] universal basic income, which is one of the things that a lot of people point to having to do with the disruption to the labor market that [AI] is likely to cause.

When I started making noise about this inside the caucus, people expected the first jobs to fall would be factory assembly line workers, long haul truck drivers, taxi drivers. That's taken longer than people guessed right then. But the other thing that's happened that's surprised people is how quickly the creative arts have come under assault from AI. There's a lot of nervousness among teachers about what exactly are the careers of the future that we're actually training people for.

I think one of the most important responses, something that the government can actually deliver, and even deliver this session of Congress, is to provide people some way of defending themselves against deepfakes. There's two approaches to this. The first thing is to try to imagine that you can detect fraudulent media and to develop software to detect deepfake material. I'm not optimistic that that's going to work. It's going to be a cat-and-mouse game forever. Another approach is to provide citizens with a means of proving they are who they say they are online and they are not a deepfake.

FS: An authentication service?

BF: A mobile ID. A digital driver's license or a secure digital identity. This is a way for someone to use their cell phone and their government-provided credential, like a passport or Real ID-compliant driver's license, and associate it with their cell phone. [This could] take advantage of your cell phone's ability through AI to recognize its owner and also the modern cell phone's ability to be used like a security dongle. It has what's called a secure enclave, or a secure compute facility, that allows it to hold private encryption keys, which makes the device essentially a unique device in the world that can be associated with a unique person and their credential.
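To make the challenge-response idea Foster describes concrete, here is a minimal sketch in Python using the open-source cryptography package. The key names and flow are illustrative assumptions only; in a real mobile ID scheme the private key would be generated and kept inside the phone's secure enclave rather than in application code.

```python
# Minimal sketch of a device-bound digital ID via challenge-response signatures.
# Assumes Python's "cryptography" package; names and flow are illustrative only --
# in a real mobile ID the private key never leaves the phone's secure hardware.
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# 1. Enrollment: the device generates a key pair and registers the public key
#    alongside a government-verified credential (e.g. a Real ID check).
device_private_key = ec.generate_private_key(ec.SECP256R1())
registered_public_key = device_private_key.public_key()

# 2. Verification: a service sends a fresh random challenge; only the enrolled
#    device can produce a valid signature over it.
challenge = os.urandom(32)
signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# 3. The service checks the signature against the registered public key
#    (verify() raises InvalidSignature if the response was forged).
registered_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("Signature valid: this response came from the enrolled device")
```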

FS: How optimistic are you that this new AI task force is actually going to produce legislation?

BF: One reason I'm optimistic is the Republicans' choice of a chair: Jay Obernolte. He's another guy who keeps up the effort to maintain his technical currency. He and I can geek out about the actual state of the art, which is rather rare in the U.S. Congress. One of the missions, certainly for this task force, will be to try to educate members about at least the capabilities of AI.

FS: How worried are you that companies might try to influence what legislation is crafted to sort of benefit their own finances?

BF: I served on the Financial Services Committee for all my time in Congress, so I'm very familiar with industry trying to influence policy. It would shock me if that didn't happen. One of the dangers here is that there are many members of Congress who understand about finance and banking and can push back on technical statements about financial services that might not be true. It's much harder for the average member of Congress to push back on claims about AI. That's the difference. We're not as well defended against statements that may or may not be factual from lobbying organizations.

FS: To what extent should the government itself be trying to build its own models or creating data sources for training those models?

BF: There is a real role for the national labs in curating datasets. This is already done at Argonne National Lab and others. For example, with datasets where privacy is a concern, like electronic medical records where you really need to analyze them, but you need a gatekeeper on privacy, that's something where a national laboratory that deals with very high-security data has the right culture to protect that. Even when they're not developing the algorithms, they can allow third parties to come in and apply those algorithms to the datasets and give them the results without turning over all the private information.

FS: You've proposed legislation related to technology modernization and Congress. To what extent are members exposed to ChatGPT and similar technologies?

BF: The first response is to have Congress organize itself in a way that reflects today's economy. Information technology just passed financial services as a fraction of the economy. That puts it pretty much on a par with, for example, health care, which is also a little under 20%. If you look at the structure of Congress, it looks like a snapshot of our economy 100 years ago.

The AI disruption might be an opportunity for Congress to organize itself to match the modern economy. That's one of the big issues, I'd say. Obviously, that's the work of a decade at least. There's going to be a number of economic responses to the disruption of the workforce. I think the thing we just have to understand and appreciate [is] that we're all in this together. It used to be 10 or 15 years ago that people would say, those poor long-haul truck drivers or taxi drivers or factory workers that lose their jobs. But no, it's everybody. With that realization, it will be easier to get a consensus that we've got to expand the safety net for those who have seen their skills and everything that defines their identity and their economic productivity put at risk from AI.

FS: How worried are you about artificial general intelligence?

BF: Over the last five years, I've become much more worried than I previously was. And the reason for that is there's this analogy between the evolution of AI algorithms and the evolution of living organisms. And if you look at living organisms and the strategies that have evolved, many of them are deceptive.

This happens in the natural kingdom. It will also happen and its already happening in the evolution of artificial intelligence. If you imagine there are two AI algorithms: one of them is completely transparent and you understand how it thinks [and] the other one is a black box. Then you ask yourself, which of those is more likely to be shut down and the research abandoned on it? The answer is it is the transparent one that is more likely to be shut down, because you will see it, you will understand that [it has] evil thought processes and stop working on it. There will be a survival advantage to being opaque.

You are already seeing in some of these large language models behavior that looks like deceptive behavior. Certainly to the extent that it just models what's on the internet, there will be lots of deceptive behavior, documented on the internet, for it to model and to try out in its behavior. It will be a huge survival advantage for AI algorithms to be deceptive. It's similar to the whole scandal with Volkswagen and the smog emission software. When you have opaque algorithms, the companies might not even know that their algorithm is behaving this way. Because they will put it under observation, they will test it. The difficulty is that [they're going to] start knowing they're under observation and then behave very nicely, and they'll do everything that you wish they would. Then, when it's out in the wild, they will just try to be as profitable as they can for their company. Those are the algorithms that will survive and displace other algorithms.


Generative AI Defined: How It Works, Benefits and Dangers – TechRepublic

Posted: at 12:01 pm

What is generative AI in simple terms?

Generative AI is a type of artificial intelligence technology that broadly describes machine learning systems capable of generating text, images, code or other types of content, often in response to a prompt entered by a user.

Generative AI models are increasingly being incorporated into online tools and chatbots that allow users to type questions or instructions into an input field, upon which the AI model will generate a human-like response.


Generative AI uses a computing process known as deep learning to analyze patterns in large sets of data and then replicates this to create new data that appears human-generated. It does this by employing neural networks, a type of machine learning process that is loosely inspired by the way the human brain processes, interprets and learns from information over time.

To give an example, if you were to feed lots of fiction writing into a generative AI model, it would eventually gain the ability to craft stories or story elements based on the literature it's been trained on. This is because the machine learning algorithms that power generative AI models learn from the information they're fed; in the case of fiction, this would include elements like plot structure, characters, themes and other narrative devices.

Generative AI models get more sophisticated over time: the more data a model is trained on and generates, the more convincing and human-like its outputs become.

The popularity of generative AI has exploded in recent years, largely thanks to the arrival of OpenAI's ChatGPT and DALL-E models, which put accessible AI tools into the hands of consumers.

Since then, big tech companies including Google, Microsoft, Amazon and Meta have launched their own generative AI tools to capitalize on the technology's rapid uptake.

Various generative AI tools now exist, although text and image generation models are arguably the most well-known. Generative AI models typically rely on a user feeding a prompt into the engine that guides it towards producing some sort of desired output, be it text, an image, a video or a piece of music, though this isn't always the case.

Examples of generative AI models include:

Various types of generative AI models exist, each designed for specific tasks and purposes. These can broadly be categorized into the following types.

Transformer-based models are trained on large sets of data to understand the relationships between sequential information, such as words and sentences. Underpinned by deep learning, transformer-based models tend to be adept at natural language processing and understanding the structure and context of language, making them well suited for text-generation tasks. GPT-3 and Google Gemini are examples of transformer-based generative AI models.
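As a rough illustration of how a prompt drives a transformer-based text generator, the sketch below uses the open-source Hugging Face transformers library with the small GPT-2 model. The model choice and parameters are assumptions for illustration only, not what commercial chatbots such as ChatGPT or Gemini run.

```python
# Minimal sketch: prompting a small open-source transformer model to generate text.
# Uses the Hugging Face "transformers" library with GPT-2 purely for illustration.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Once upon a time, a small robot learned to"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)

print(outputs[0]["generated_text"])  # prompt plus the model's continuation
```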

Generative adversarial networks are made up of two neural networks known as a generator and a discriminator, which essentially work against each other to create authentic-looking data. As the name implies, the generator's role is to generate convincing output, such as an image based on a prompt, while the discriminator works to evaluate the authenticity of said image. Over time, each component gets better at its respective role, resulting in more convincing outputs. DALL-E and Midjourney are examples of GAN-based generative AI models.
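The generator-versus-discriminator structure can be sketched in a few lines of PyTorch. This toy skeleton uses assumed sizes (a 64-dimensional noise vector and a flattened 28x28 image) and omits the adversarial training loop; it is not how any commercial image generator is actually implemented.

```python
# Toy GAN skeleton in PyTorch: a generator maps random noise to fake samples,
# and a discriminator scores samples as real or fake. Sizes are illustrative.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),    # fake image with values in [-1, 1]
)

discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),         # probability that the input is real
)

noise = torch.randn(16, latent_dim)          # a batch of 16 random noise vectors
fake_images = generator(noise)
realness_scores = discriminator(fake_images)
print(realness_scores.shape)                 # torch.Size([16, 1])
```

In training, the discriminator is rewarded for telling real images from these fakes, while the generator is rewarded for fooling it, which is the adversarial dynamic the article describes.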

Variational autoencoders leverage two networks to interpret and generate data, in this case an encoder and a decoder. The encoder takes the input data and compresses it into a simplified format. The decoder then takes this compressed information and reconstructs it into something new that resembles the original data but isn't entirely the same.
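In code, the encoder/decoder pair looks roughly like the PyTorch sketch below, with assumed dimensions. Real VAEs train these networks with a reconstruction loss plus a regularization term; only the forward pass is shown here.

```python
# Minimal variational autoencoder sketch in PyTorch. Dimensions are illustrative:
# a flattened 28x28 image compressed into a 16-dimensional latent code.
import torch
import torch.nn as nn

image_dim, latent_dim = 28 * 28, 16

encoder = nn.Sequential(nn.Linear(image_dim, 256), nn.ReLU())
to_mean = nn.Linear(256, latent_dim)      # mean of the latent distribution
to_logvar = nn.Linear(256, latent_dim)    # log-variance of the latent distribution
decoder = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Sigmoid(),
)

x = torch.rand(8, image_dim)              # a batch of 8 placeholder "images"
h = encoder(x)
mean, logvar = to_mean(h), to_logvar(h)

# Sample a latent code, then decode it into output that resembles -- but is not
# identical to -- the input (the "reparameterization" step).
z = mean + torch.exp(0.5 * logvar) * torch.randn_like(mean)
reconstruction = decoder(z)
print(reconstruction.shape)               # torch.Size([8, 784])
```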

One example might be teaching a computer program to generate human faces using photos as training data. Over time, the program learns how to simplify the photos of people's faces into a few important characteristics, such as the size and shape of the eyes, nose, mouth, ears and so on, and then use these to create new faces.

This type of VAE might be used to, say, increase the diversity and accuracy of facial recognition systems. By using VAEs to generate new faces, facial recognition systems can be trained to recognize more diverse facial features, including those that are less common.

Multimodal models can understand and process multiple types of data simultaneously, such as text, images and audio, allowing them to create more sophisticated outputs. An example might be an AI model capable of generating an image based on a text prompt, as well as a text description of an image prompt. DALL-E 3 and OpenAIs GPT-4 are examples of multimodal models.

ChatGPT is an AI chatbot developed by OpenAI. It's a large language model that uses transformer architecture, specifically the generative pretrained transformer (hence GPT), to understand and generate human-like text.

You can learn everything you need to know about ChatGPT in this TechRepublic cheat sheet.

Google Gemini (previously Bard) is another example of an LLM based on transformer architecture. Similar to ChatGPT, Gemini is a generative AI chatbot that generates responses to user prompts.

Google launched Bard in the U.S. in March 2023 in response to OpenAI's ChatGPT and Microsoft's Copilot AI tool. It was launched in Europe and Brazil later that year.

Learn more about Gemini by reading TechRepublic's comprehensive Google Gemini cheat sheet.

SEE: Google Gemini vs. ChatGPT: Is Gemini Better Than ChatGPT? (TechRepublic)

For businesses, efficiency is arguably the most compelling benefit of generative AI because it can help automate specific tasks and focus employees' time, energy and resources on more important strategic objectives. This can result in lower labor costs, greater operational efficiency and insights into how well certain business processes are or are not performing.

For professionals and content creators, generative AI tools can help with idea creation, content planning and scheduling, search engine optimization, marketing, audience engagement, research and editing, and potentially more. Again, the key proposed advantage is efficiency, because generative AI tools can help users reduce the time they spend on certain tasks and invest their energy elsewhere. That said, manual oversight and scrutiny of generative AI models remains highly important; we explain why later in this article.

McKinsey estimates that, by 2030, activities that currently account for around 30% of U.S. work hours could be automated, prompted by the acceleration of generative AI.

SEE: Indeed's 10 Highest-Paid Tech Skills: Generative AI Tops the List

Generative AI has found a foothold in a number of industry sectors and is now popular in both commercial and consumer markets. The use of generative AI varies from industry to industry and is more established in some than in others. Current and proposed use cases include the following:

In terms of role-specific use cases of generative AI, some examples include:

A major concern around the use of generative AI tools, particularly those accessible to the public, is their potential for spreading misinformation and harmful content. The impact of doing so can be wide-ranging and severe, from perpetuating stereotypes, hate speech and harmful ideologies to damaging personal and professional reputations.

SEE: Gartner analysts' take on 5 ways generative AI will impact culture & society

The risk of legal and financial repercussions from the misuse of generative AI is also very real; indeed, it has been suggested that generative AI could put national security at risk if used improperly or irresponsibly.

These risks haven't escaped policymakers. On Feb. 13, 2024, the European Council approved the AI Act, a first-of-its-kind piece of legislation designed to regulate the use of AI in Europe. The legislation takes a risk-based approach to regulating AI, with some AI systems banned outright.

Security agencies have made moves to ensure AI systems are built with safety and security in mind. In November 2023, 16 agencies, including the U.K.'s National Cyber Security Centre and the U.S. Cybersecurity and Infrastructure Security Agency, released the Guidelines for Secure AI System Development, which promote security as a fundamental aspect of AI development and deployment.

Generative AI has prompted workforce concerns, most notably that the automation of tasks could lead to job losses. Research from McKinsey suggests that, by 2030, around 12 million people may need to switch jobs, with office support, customer service and food service roles most at risk. The consulting firm predicts that clerks will see a decrease of 1.6 million jobs, in addition to losses of 830,000 for retail salespersons, 710,000 for administrative assistants and 630,000 for cashiers.

SEE: OpenAI, Google and More Agree to White House List of Eight AI Safety Assurances

Generative AI and general AI represent different sides of the same coin; both relate to the field of artificial intelligence, but the former is a subtype of the latter.

Generative AI uses various machine learning techniques, such as GANs, VAEs or LLMs, to generate new content from patterns learned from training data.

General AI, also known as artificial general intelligence, broadly refers to the concept of computer systems and robotics that possess human-like intelligence and autonomy. This is still the stuff of science fiction: think Disney Pixar's WALL-E, Sonny from 2004's I, Robot, or HAL 9000, the malevolent AI from 2001: A Space Odyssey. Most current AI systems are examples of narrow AI, in that they're designed for very specific tasks.

To learn more about what artificial intelligence is and isnt, read our comprehensive AI cheat sheet.

Generative AI is a subfield of artificial intelligence; broadly, AI refers to the concept of computers capable of performing tasks that would otherwise require human intelligence, such as decision making and NLP. Generative AI models use machine learning techniques to process and generate data.

Machine learning is the foundational component of AI and refers to the application of computer algorithms to data for the purposes of teaching a computer to perform a specific task. Machine learning is the process that enables AI systems to make informed decisions or predictions based on the patterns they have learned.


What is the difference between generative AI and discriminative AI?

Whereas generative AI is used for generating new content by learning from existing data, discriminative AI specializes in classifying or categorizing data into predefined groups or classes.

Discriminative AI works by learning how to tell different types of data apart. It's used for tasks where data needs to be sorted into groups; for example, figuring out if an email is spam, recognizing what's in a picture or diagnosing diseases from medical images. It looks at data it already knows to classify new data correctly.
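The spam example maps naturally onto a small discriminative classifier. The scikit-learn sketch below is illustrative only, and its four-email training set is invented for the example.

```python
# Minimal discriminative-AI sketch: classify emails as spam or not spam.
# Uses scikit-learn; the tiny training set is made up purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Win a free prize now, click here",
    "Limited offer, claim your reward today",
    "Meeting moved to 3pm, see agenda attached",
    "Can you review the quarterly report draft?",
]
labels = ["spam", "spam", "not spam", "not spam"]

# Turn each email into word-frequency features, then fit a linear classifier
# that learns to separate the two groups.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(emails, labels)

print(classifier.predict(["Claim your free reward"]))  # likely ['spam']
```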

So, while generative AI is designed to create original content or data, discriminative AI is used for analyzing and sorting it, making each useful for different applications.

Regenerative AI, while less commonly discussed, refers to AI systems that can fix themselves or improve over time without human help. The concept of regenerative AI is centered around building AI systems that can last longer and work more efficiently, potentially even helping the environment by making smarter decisions that result in less waste.

In this way, generative AI and regenerative AI serve different roles: Generative AI for creativity and originality, and regenerative AI for durability and sustainability within AI systems.

It certainly looks as though generative AI will play a huge role in the future. As more businesses embrace digitization and automation, generative AI looks set to play a central role in industries of all types, with many organizations already establishing guidelines for the acceptable use of AI in the workplace. The capabilities of gen AI have already proven valuable in areas such as content creation, software development, medicine, productivity, business transformation and much more. As the technology continues to evolve, gen AI's applications and use cases will only continue to grow.

SEE: Deloitte's 2024 Tech Predictions: Gen AI Will Continue to Shape Chips Market

That said, the impact of generative AI on businesses, individuals and society as a whole is contingent on properly addressing and mitigating its risks. Key to this is ensuring AI is used ethically by reducing biases, enhancing transparency and accountability and upholding proper data governance.

None of this will be straightforward. Keeping laws up to date with fast-moving tech is tough but necessary, and finding the right mix of automation and human involvement will be key to democratizing the benefits of generative AI. Recent legislation such as President Biden's Executive Order on AI, Europe's AI Act and the U.K.'s Artificial Intelligence Bill suggests that governments around the world understand the importance of getting on top of these issues quickly.


AI and You: OpenAI’s Sora Previews Text-to-Video Future, First Ivy League AI Degree – CNET

Posted: at 12:01 pm

AI developments are happening pretty fast. If you don't stop and look around once in a while, you could miss them.

Fortunately, I'm looking around for you and what I saw this week is that competition between OpenAI, maker of ChatGPT and Dall-E, and Google continues to heat up in a way that's worth paying attention to.

A week after updating its Bard chatbot and changing the name to Gemini, Google's DeepMind AI subsidiary previewed the next version of its generative AI chatbot. DeepMind told CNET's Lisa Lacy that Gemini 1.5 will be rolled out "slowly" to regular people who sign up for a wait list and will be available now only to developers and enterprise customers.

Gemini 1.5 Pro, Lacy reports, is "as capable as" the Gemini 1.0 Ultra model, which Google announced on Feb. 8. The 1.5 Pro model has a win rate -- a measurement of how many benchmarks it can outperform -- of 87% compared with the 1.0 Pro and 55% against the 1.0 Ultra. So the 1.5 Pro is essentially an upgraded version of the best available model now.

Gemini 1.5 Pro can ingest video, images, audio and text to answer questions, added Lacy. Oriol Vinyals, vice president of research at Google DeepMind and co-lead of Gemini, described 1.5 as a "research release" and said the model is "very efficient" thanks to a unique architecture that can answer questions by zeroing in on expert sources in that particular subject rather than seeking the answer from all possible sources.

Meanwhile, OpenAI announced a new text-to-video model called Sora that's capturing a lot of attention because of the photorealistic videos it's able to generate. Sora can "create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions." Following up on a promise it made, along with Google and Meta last week, to watermark AI-generated images and video, OpenAI says it's also creating tools to detect videos created with Sora so they can be identified as being AI generated.

Google and Meta have also announced their own gen AI text-to-video creators.

Sora, which means "sky" in Japanese, is also being called experimental, with OpenAI limiting access for now to so-called "red teamers," security experts and researchers who will assess the tool's potential harms or risks. That follows through on promises made as part of President Joe Biden's AI executive order last year, asking developers to submit the results of safety checks on their gen AI chatbots before releasing them publicly. OpenAI said it's also looking to get feedback on Sora from some visual artists, designers and filmmakers.

How do the photorealistic videos look? Pretty realistic. I agree with The New York Times, which described the short demo videos -- "of wooly mammoths trotting through a snowy meadow, a monster gazing at a melting candle and a Tokyo street scene seemingly shot by a camera swooping across the city" -- as "eye popping."

The MIT Review, which also got a preview of Sora, said the "tech has pushed the envelope of what's possible with text-to-video generation." Meanwhile, The Washington Post noted Sora could exacerbate an already growing problem with video deepfakes, which have been used to "deceive voters" and scam consumers.

One X commentator summarized it this way: "Oh boy here we go what is real anymore." And OpenAI CEO Sam Altman called the news about its video generation model a "remarkable moment."

You can see the four examples of what Sora can produce on OpenAI's intro site, which notes that the tool is "able to generate complex scenes with multiple characters, specific types of motion, and accurate details of the subject and background. The model understands not only what the user has asked for in the prompt, but also how those things exist in the physical world. The model has a deep understanding of language, enabling it to accurately interpret prompts and generate compelling characters that express vibrant emotions."

But Sora has its weaknesses, which is why OpenAI hasn't yet said whether it will actually be incorporated into its chatbots. Sora "may struggle with accurately simulating the physics of a complex scene and may not understand specific instances of cause and effect. For example, a person might take a bite out of a cookie, but afterward, the cookie may not have a bite mark. The model may also confuse spatial details of a prompt, for example, mixing up left and right."

All of this is to remind us that tech is a tool -- and that it's up to us humans to decide how, when, where and why to use that technology. In case you didn't see it, the trailer for the new Minions movie (Despicable Me 4: Minion Intelligence) makes this point cleverly, with its sendup of gen AI deepfakes and Jon Hamm's voiceover of how "artificial intelligence is changing how we see the world ... transforming the way we do business."

"With artificial intelligence," Hamm adds over the minions' laughter, "the future is in good hands."

Here are the other doings in AI worth your attention.

Twenty tech companies, including Adobe, Amazon, Anthropic, ElevenLabs, Google, IBM, Meta, Microsoft, OpenAI, Snap, TikTok and X, agreed at a security conference in Munich that they will voluntarily adopt "reasonable precautions" to guard against AI tools being used to mislead or deceive voters ahead of elections.

"The intentional and undisclosed generation and distribution of Deceptive AI Election content can deceive the public in ways that jeopardize the integrity of electoral processes," the text of the accord says, according to NPR. "We affirm that the protection of electoral integrity and public trust is a shared responsibility and a common good that transcends partisan interests and national borders."

But the agreement is "largely symbolic," the Associated Press reported, noting that "reasonable precautions" is a little vague.

"The companies aren't committing to ban or remove deepfakes," the AP said. "Instead, the accord outlines methods they will use to try to detect and label deceptive AI content when it is created or distributed on their platforms. It notes the companies will share best practices with each other and provide 'swift and proportionate responses' when that content starts to spread."

AI has already been used to try to trick voters in the US and abroad. Days before the New Hampshire presidential primary, fraudsters sent an AI robocall that mimicked President Biden's voice, asking voters not to vote in the primary. That prompted the Federal Communications Commission this month to make AI-generated robocalls illegal. The AP said that "Just days before Slovakia's elections in November, AI-generated audio recordings impersonated a candidate discussing plans to raise beer prices and rig the election. Fact-checkers scrambled to identify them as false as they spread across social media."

"Everybody recognizes that no one tech company, no one government, no one civil society organization is able to deal with the advent of this technology and its possible nefarious use on their own," Nick Clegg, president of global affairs for Meta, told the Associated Press in an interview before the summit.

Over 4 billion people are set to vote in key elections this year in more than 40 countries, including the US, The Hill reported.

If you're concerned about how deepfakes may be used to scam you or your family members -- someone calls your grandfather and asks them for money by pretending to be you -- Bloomberg reporter Rachel Metz has a good idea. She suggests it may be time for us all to create a "family password" or safe word or phrase to share with our family or personal network that we can ask for to make sure we're talking to who we think we're talking to.

"Extortion has never been easier," Metz reports. "The kind of fakery that used to take time, money and technical know-how can now be accomplished quickly and cheaply by nearly anyone."

That's where family passwords come in, since they're "simple and free," Metz said. "Pick a word that you and your family (or another trusted group) can easily remember. Then, if one of those people reaches out in a way that seems a bit odd -- say, they're suddenly asking you to deliver 5,000 gold bars to a P.O. Box in Alaska -- first ask them what the password is."

How do you pick a good password? She offers a few suggestions, including using a word you don't say frequently and that's not likely to come up in casual conversations. Also, "avoid making the password the name of a pet, as those are easily guessable."

Hiring experts have told me it's going to take years to build an AI-educated workforce, considering that gen AI tools like ChatGPT weren't released until late 2022. So it makes sense that learning platforms like Coursera, Udemy, Udacity, Khan Academy and many universities are offering online courses and certificates to upskill today's workers. Now the University of Pennsylvania's School of Engineering and Applied Science said it's the first Ivy League school to offer an undergraduate major in AI.

"The rapid rise of generative AI is transforming virtually every aspect of life: health, energy, transportation, robotics, computer vision, commerce, learning and even national security," Penn said in a Feb. 13 press release. "This produces an urgent need for innovative, leading-edge AI engineers who understand the principles of AI and how to apply them in a responsible and ethical way."

The bachelor of science in AI offers coursework in machine learning, computing algorithms, data analytics and advanced robotics and will have students address questions about "how to align AI with our social values and how to build trustworthy AI systems," Penn professor Zachary Ives said.

"We are training students for jobs that don't yet exist in fields that may be completely new or revolutionized by the time they graduate," added Robert Ghrist, associate dean of undergraduate education in Penn Engineering.

FYI, the cost of an undergraduate education at Penn, which typically spans four years, is over $88,000 per year (including housing and food).

For those not heading to college or who haven't signed up for any of those online AI certificates, their AI upskilling may come courtesy of their current employer. The Boston Consulting Group, for its Feb. 9 report, What GenAI's Top Performers Do Differently, surveyed over 150 senior executives across 10 sectors. Generally:

Bottom line: companies are starting to look at existing job descriptions and career trajectories, and the gaps they're seeing in the workforce when they consider how gen AI will affect their businesses. They've also started offering gen AI training programs. But these efforts don't lessen the need for today's workers to get up to speed on gen AI and how it may change the way they work -- and the work they do.

In related news, software maker SAP looked at Google search data to see which states in the US were most interested in "AI jobs and AI business adoption."

Unsurprisingly, California ranked first in searches for "open AI jobs" and "machine learning jobs." Washington state came in second place, Vermont in third, Massachusetts in fourth and Maryland in fifth.

California, "home to Silicon Valley and renowned as a global tech hub, shows a significant interest in AI and related fields, with 6.3% of California's businesses saying that they currently utilize AI technologies to produce goods and services and a further 8.4% planning to implement AI in the next six months, a figure that is 85% higher than the national average," the study found.

Virginia, New York, Delaware, Colorado and New Jersey, in that order, rounded out the top 10.

Over the past few months, I've highlighted terms you should know if you want to be knowledgeable about what's happening as it relates to gen AI. So I'm going to take a step back this week and provide this vocabulary review for you, with a link to the source of the definition.

It's worth a few minutes of your time to know these seven terms.

Anthropomorphism: The tendency for people to attribute humanlike qualities or characteristics to an AI chatbot. For example, you may assume it's kind or cruel based on its answers, even though it isn't capable of having emotions, or you may believe the AI is sentient because it's very good at mimicking human language.

Artificial general intelligence (AGI): A description of programs that are as capable as -- or even more capable than -- a human. While full general intelligence is still off in the future, models are growing in sophistication. Some have demonstrated skills across multiple domains ranging from chemistry to psychology, with task performance paralleling human benchmarks.

Generative artificial intelligence (gen AI): Technology that creates content -- including text, images, video and computer code -- by identifying patterns in large quantities of training data and then creating original material that has similar characteristics.

Hallucination: Hallucinations are unexpected and incorrect responses from AI programs that can arise for reasons that aren't yet fully known. A language model might suddenly bring up fruit salad recipes when you were asking about planting fruit trees. It might also make up scholarly citations, lie about data you ask it to analyze or make up facts about events that aren't in its training data. It's not fully understood why this happens, but it can arise from sparse data, information gaps and misclassification.

Large language model (LLM): A type of AI model that can generate human-like text and is trained on a broad dataset.

Prompt engineering: This is the act of giving AI an instruction so it has the context it needs to achieve your goal. Prompt engineering is best associated with OpenAI's ChatGPT, describing the tasks users feed into the algorithm. (e.g. "Give me five popular baby names.")

Temperature: In simple terms, model temperature is a parameter that controls how random a language model's output is. A higher temperature means the model takes more risks, giving you a diverse mix of words. On the other hand, a lower temperature makes the model play it safe, sticking to more focused and predictable responses.

Model temperature has a big impact on the quality of the text generated in a bunch of [natural language processing] tasks, like text generation, summarization and translation.

The tricky part is finding the perfect model temperature for a specific task. It's kind of like Goldilocks trying to find the perfect bowl of porridge -- not too hot, not too cold, but just right. The optimal temperature depends on things like how complex the task is and how much creativity you're looking for in the output.
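The effect is easy to see in a short numerical sketch: the model's raw scores for candidate next words are divided by the temperature before being converted to probabilities, so a low temperature sharpens the distribution and a high temperature flattens it. The scores below are invented for illustration.

```python
# Sketch of how temperature reshapes a language model's next-word probabilities.
# The raw scores are invented purely for illustration.
import numpy as np

def softmax_with_temperature(scores, temperature):
    scaled = np.array(scores) / temperature
    exp = np.exp(scaled - scaled.max())        # subtract max for numerical stability
    return exp / exp.sum()

scores = [4.0, 2.0, 1.0]                       # raw scores for three candidate words

print(softmax_with_temperature(scores, 0.5))   # sharp: the top word dominates
print(softmax_with_temperature(scores, 1.0))   # balanced
print(softmax_with_temperature(scores, 2.0))   # flat: more diverse, riskier choices
```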

Editors' note: CNET is using an AI engine to help create some stories. For more, see this post.


Vitalik Buterin and Sandeep Nailwal headline decentralized agi summit @ Ethdenver tackling threats of centralized AI – Grit Daily

Posted: at 12:01 pm

Denver, USA, February 23rd, 2024, Chainwire

The Decentralized AGI Summit, organized by Sentient and Symbolic Capital, will bring together top thought leaders in Decentralized AI like Vitalik Buterin, Sandeep Nailwal, Illia Polosukhin, and Sreeram Kannan.

As the development of artificial general intelligence (AGI) systems accelerates, there are growing concerns that centralized AI controlled by a small number of actors poses a major threat to humanity. The inaugural Decentralized AGI Summit will bring together top experts in AI and blockchain, like Vitalik Buterin, Sandeep Nailwal, Illia Polosukhin, Sreeram Kannan, and more, to explore how decentralized, multi-stakeholder governance models enabled by blockchain technology can help make the development of AGI safer, more transparent and aligned with the greater good.

"The rapid acceleration of centralized AI and its integration into everyday life has led humanity to a crossroads between two future worlds," says Sandeep Nailwal. "On the one hand, we have the choice of a Closed World. This world is controlled by a few closed-source models run by massive mega corporations. On the other hand, we have the choice of an Open World. In this world, models are default open-source, inference is verifiable, and value flows back to the stakeholders. The Open World is the world we want to live in, but it is only possible by leveraging blockchain to make AI more transparent and just."

The Decentralized AGI Summit will take place on Monday, February 26th from 3-9pm MST. It is free and open to the public to attend at: https://decentralizedagi.org/.

"We are excited to help facilitate this important discussion around the development of safe and ethical AGI systems that leverage decentralization and multi-stakeholder governance," said Kenzi Wang, Co-Founder and General Partner at Symbolic Capital. "Bringing luminaries across both the AI and web3 domains together will help push forward thinking on this critical technological frontier."

Featured keynote speakers include:

Vitalik Buterin, Co-Founder of Ethereum Foundation

Sandeep Nailwal, Co-Founder of Polygon Labs

Illia Polosukhin, Co-Founder of Near Foundation

Sreeram Kannan, Founder of Eigenlayer

Topics will span technical AI safety research, governance models for AGI systems, ethical considerations, and emerging use cases at the intersection of AI and blockchain. The summit aims to foster collaboration across academic institutions, industry leaders and the decentralized AI community.

For more details and to register, visit https://decentralizedagi.org/.

About Sentient

Sentient is building a decentralized AGI platform. Sentient's team comprises leading web3 founders, builders, researchers, and academics who are committed to creating trustless and open artificial intelligence models.

Learn more about Sentient here: https://sentient.foundation/

About Symbolic Capital

Symbolic Capital is a people-driven investment firm supporting the best web3 projects globally. Our team has founded and led some of the most important blockchain companies in the world, and we leverage this background to provide unparalleled support to the companies in our portfolio.

Learn more about Symbolic Capital here: https://www.symbolic.capital/

Sam Lehman[emailprotected]
