Monthly Archives: September 2022

AI will Transform your Quality Operations – IQVIA

Posted: September 29, 2022 at 12:39 am

Artificial intelligence is at the heart of pharma's digital transformation. When deployed correctly, AI algorithms reduce manual labor, speed access to insights, and provide quality teams with more robust information in less time to support more effective decision-making.

But it only works if the AI is proven reliable, and users trust the output.

"If you're using AI in a critical system, it has to be reliable, and it has to be designed in a way that the worst-case scenario is not going to lead to failure," said Matt O'Donnell, Global Lead, Life Sciences ISV Partners at Microsoft.

O'Donnell recently participated in the IQVIA webinar "Futureproofing Your Quality Operations Through Digital Transformation," where he and Mike King, Senior Director, Product & Strategy at IQVIA, discussed the evolution of AI in pharma and how it is driving innovation in the quality environment.

For most pharma leaders, this transformation has only just begun. While the majority of webinar attendees believe that AI in quality operations will improve process time and consistency of performance (54%), 72% admitted that they aren't yet using AI in their quality operations.

That set the tone for the conversation, where O'Donnell and King discussed how AI is being used in pharma today, where it can add value, and how companies can access all of the benefits of AI in a safe and compliant way.

Below are some of the insights we captured from their conversation.

O'Donnell: AI simulates human decision-making and imitates human intelligence to deliver stronger, more accurate, and more repeatable solutions. It can copy all of the cognitive senses, and combine deep knowledge and search capabilities to identify connections that might not have been known before.

King: AI can also help capture insights from structured data in a more timely manner. For example, many companies have a record of submission pathways for audits of nonconformances, or of CAPA (Corrective and Preventive Actions) closures, which results in a huge volume of data. Using AI, they can mine precedents in that data to understand what decisions were taken historically, what potential pathways lead to certain outcomes, and where additional pathways or alternatives could bring faster, safer decision making. That's where AI can really provide insights across the quality management systems that we operate in.
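As a loose, purely illustrative sketch of the precedent mining King describes (the record fields and values below are invented, not drawn from any real quality management system), even simple frequency counts over historical CAPA records can surface which pathways tended to lead to which outcomes:

```python
from collections import Counter

# Hypothetical historical CAPA records: the pathway taken and the
# eventual outcome. Field names and values are illustrative only.
records = [
    {"pathway": "root-cause-analysis", "outcome": "closed-on-time"},
    {"pathway": "root-cause-analysis", "outcome": "closed-on-time"},
    {"pathway": "containment-only", "outcome": "reopened"},
    {"pathway": "root-cause-analysis", "outcome": "closed-late"},
    {"pathway": "containment-only", "outcome": "reopened"},
]

# Count which outcomes each historical pathway tended to produce.
precedents = Counter((r["pathway"], r["outcome"]) for r in records)

for (pathway, outcome), n in precedents.most_common():
    print(f"{pathway} -> {outcome}: {n}")
```

A production system would go far beyond counting, but the idea is the same: mine historical decisions for patterns that can inform new ones.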

O'Donnell: Consider vision. So much information in healthcare is captured in a semi-structured way. It might be in a PDF, or a patient scan, or a handwritten record. That's valuable information, but to unlock it, you first need to use Optical Character Recognition to scan it and turn it into a structured document so the system can process the data.

King: The key is combining the technology with the knowledge of our teams to make better decisions that are more predictable and more consistent. There are great opportunities to use targeted algorithms to identify patterns and trends that may not be seen by the human eye.

King: Connected intelligence is the bringing together of systems driven by intelligence, with the support of AI to rapidly focus on those insights that we may not yet understand. AI enables connected intelligence so that we can capture relevant insights in a timely manner to drive the right actions.

O'Donnell: That speed is essential, especially for quality surveillance of a drug or medical device in the market. Monitoring for early indications of adverse events requires reviewing thousands of documents to find connections that make sense. AI can find those connections, making it easy for quality teams to gain knowledge from semi-structured and unstructured data.

O'Donnell: We are often asked whether AI is appropriate for healthcare, and whether it is sufficient. I always point to the progress we've made. Over the last six years, AI systems have reached near-human parity on general-purpose tasks. And while the medical domain is more challenging, the next wave of transformation in this space is already happening, bringing AI at scale.

We already see great accuracy with natural language processing, which, when trained on medical publications, can deliver 90+ percent accuracy. And when an algorithm is trained for a highly specific medical task, like identifying and tracking brain tumors, it can be extremely accurate.

We continue to make improvements by using the trillions of healthcare documents we have access to in order to train algorithms and create better simulations of human intelligence.

King: It's also important that stakeholders be able to trust the technology. For an organization to adopt AI, it must quantify the benefits in terms of consistency, resource utilization, and in identifying things that the human eye and the human brain cannot find on their own. Only when organizations quantify that benefit and present it to their various stakeholders is there an opportunity for true adoption.

King: Many companies take a staged approach. Some start with the safety field, using AI to manage adverse event reports and to process product complaints, while others may start by embedding AI in a quality management system to support audits, inspections, and document drafting. It depends on where the opportunity lies for the organization, where their strengths lie, and where those discussions land with senior stakeholders.

O'Donnell: There is a shift going on right now. It's not just about one company producing great AI models. I see collaborations between academia, different companies, and different technology providers. They are coming together to produce the best AI models that can advance benefits for the entire human population.

King: There are so many applications for AI, and so many opportunities to use sophisticated algorithms to help organizations enhance the focus on patient safety by being able to detect signals that we may not otherwise see. Once companies weigh the benefits, the potential risks, and the costs, they will see the incredible value of AI and how it can enhance activities across the operation.

To hear more of their conversation, click here to listen to the entire webinar, or contact Regulatory_Quality_Compliance@iqvia.com to learn more.


Is AI Making the Creative Class Obsolete? – dot.LA

Posted: at 12:39 am

As artificial intelligence becomes more advanced, AI image and writing generators are becoming more widespread, even taking on creative tasks some once thought uniquely human.

These tools have limitations. AI-created images sometimes appear half-finished (look no further than DALL-E's early renderings of faces), and AI-generated writing can sound like garble written by, well, a robot.

The surge in AI use for creative work like copywriting and developing art has some in the creative fields concerned about losing their jobs, going the way of the traditional animator at Pixar. Reports like one published in 2021 by San Mateo-based job discovery platform Zippia don't help, with statements like "AI could take the jobs of as many as one billion people globally and make 375 million jobs obsolete over the next decade" and "half of all companies currently utilize AI in some fashion."

Using AI to create open-source art available to the masses wasn't on the radar for many until the release of the text-to-image creator DALL-E Mini last summer. The release coincided with the Washington Post's profile of Google engineer Blake Lemoine, who claimed Google's Language Model for Dialogue Applications (LaMDA) was sentient.

AI innovations like GPT-3, a large language model that uses deep learning to produce original text, are touted as solutions to a host of problems with little discussion about drawbacks or limitations. One notable example is the widely used writing assistant Grammarly, which uses a combination of artificial intelligence techniques, including deep learning and natural language processing.

Hour One's Natalie Monbiot says creatives shouldn't be concerned about AI.

It's normal to feel anxious about it, and it will be a realistic concern for those whose actual work can be done more cheaply, quickly, and consistently via machines, says Monbiot, who is head of strategy for the avatar video generation platform.

These new technologies are "new tools," she says, "like the pen, the typewriter, computers, and so on."

Monbiot says that as AI becomes more instrumental to creators' work, there will be a higher premium on creativity (which is distinctly human) and less on execution.

Kris Ruby of Ruby Media Group, a PR agency, tells dot.LA that users go wrong with AI writing products by trusting them to produce finished work. "That is not how the tools are supposed to be used," Ruby says.

According to Ruby, users of text-to-image generation tools like DALL-E Mini and Midjourney make the mistake of calculating the cost of the software subscription, but not the number of hours it takes to get even one usable image.

Austin-based Jasper.ai's CEO Dave Rogenmoser says these applications eliminate the mundane elements of the content creation process. Jasper develops multiple AI-powered writing tools and recently added a text-to-image creator to its suite.

"It isn't a replacement for creators or the creative process," he says; rather, it's "a trusty sidekick in the content process that helps bring ideas to life faster and in a more efficient way."

San Francisco-based Writer.com is an AI writing assistant focused on corporate clients. Its CEO, May Habib, tells dot.LA that creators have more to gain from the tools than they have to lose.

Like any tool, it is about depth: AI writing tools are most powerful in the hands of those who are already pretty skilled, but still pretty useful for everyone, Habib says.

"We don't think AI is going to take away real writing jobs," she continues, "but it will speed up ideation and drafting."

Is there a danger of overselling AI before it can meet companies expectations?

Habib's answer? Absolutely. Consumers should not expect artificial intelligence to solve all their problems. Applications powered by AI "can't feel like magic," she says; "they have to feel like technology."

AI expert Mikaela Pisani is the Chief Data Scientist for Los Angeles-based Rootstrap, which develops apps for startups. Asked if it was realistic for creators to worry about losing jobs to artificial intelligence, Pisani says, "AI is becoming increasingly creative and can help creatives generate content ideas at scale."

When it comes to fears that AI might replace creators, Pisani notes that creativity is defined as "the ability to produce or use original and unusual ideas."

"To think outside of the box is implicitly hard to do for machines," Pisani says, since AI is trained on available information. "Therefore, our creative brain won't be replaced by AI in the near future, since it is too challenging for machines to recreate innovation." By extension, AI does not create a final piece of art, but it can be used as a co-creator.

Pisani's perspective isn't that different from that of the execs behind AI-fueled startups. She says that because artificial intelligence can multitask rapidly, it could also be a source of inspiration for artists.

Writers, musicians, designers, or artists, Pisani continues, "shouldn't be afraid of being replaced but should make themselves aware of these AI tools that can help their creativity reach a new level of scale."

So far, the consensus seems to be that AI is just an instrument, not a replacement for human artistry.

It's still early, though, and artificial intelligence use is evolving fast. Just last week, Vanity Fair reported that 91-year-old James Earl Jones is retiring from voicing Darth Vader for future Star Wars shows and movies. His replacement? Respeecher, AKA voice cloning powered by artificial intelligence. The Ukraine-based company says its product leverages recent revolutionary advances in artificial intelligence to create voice swaps "[that] are virtually indistinguishable from the original and never sound robotic."

One thing seems clear: AI is here to stay.


Survey: IT Pros Remain Conflicted Over AI’s Potential, Peril – PCMag

Posted: at 12:39 am

Companies are increasingly turning to artificial intelligence (AI) to automate and optimize business functions. But according to recent research, the IT professionals who will be asked to implement the technology have decidedly mixed feelings about it, ranging from optimism to outright dread (and sometimes both at the same time).

That's according to the 2023 State of IT report from PCMag's sister site Spiceworks Ziff Davis (SWZD). For its research, the company asked 968 IT buyers from businesses in North America and Europe whether their organizations currently used AI or planned to do so. Among those who answered affirmatively, answers to follow-up questions were revealing.

On the positive side, many IT pros see AI as a beneficial technology that can help advance their careers. Fully 74% of survey respondents agreed with the statement, "AI will automate tasks and enable more time to focus on strategic IT initiatives." In other words, they have faith that AI tools will free them from the more mundane chores of their roles and allow them to concentrate on tasks that add value to the business.

Other opinions were more sanguine, with 67% saying "AI will be a mission-critical element of our business strategy in the years to come." (Fair enough.)

(Credit: Spiceworks Ziff Davis)

Still others seemed to be envisioning a science-fiction future that resembles movies more than reality. When asked to respond to the prompt, "I expect to work alongside intelligent robots/machines in the next 5 years," 62% of those surveyed agreed.

What does it all mean? Clearly, the IT professionals surveyed see AI usage in modern business as an inevitability. As the cost of entry of AI continues to trend downward, business software vendors will increasingly offer AI capabilities as differentiating features.

Then again, the same IT pros surveyed by SWZD saw serious potential downsides to the growth of AI. Just over half of the respondents agreed with the statement, "AI will put IT jobs at risk." As was the case with earlier phases of IT automation, some professionals fear that AI technologies could eventually become so effective that they put humans out of work.

Even more survey respondents were concerned about how AI will be used for data analysis, particularly when it comes to user data. The prompt, "AI will create major data privacy issues" drew agreement from 55% of respondents.

But some respondents' fears run even deeper. A remarkable 49% agreed with the statement, "Innovation in AI presents an existential threat to humanity," perhaps recalling storylines from dystopian science fiction. They wouldn't be alone; no less than Tesla and SpaceX billionaire Elon Musk famously described AI as "summoning the demon."

Whatever their personal feelings, however, most survey respondents seemed to agree that AI is here to stay, citing applications ranging from data analytics and automation to security intrusion and fraud detection, natural language processing, web and social media analytics, and more.

Editors' Note: Spiceworks is owned by Ziff Davis, the parent company of PCMag.


Hear game-changing AI and ML leaders at the iMerit ML DataOps Summit – TechCrunch

Posted: at 12:39 am

What do more than 2,000 data scientists, engineers and machine learning professionals have in common? They're all getting ready to hear from a fantastic lineup of speakers at the iMerit ML DataOps Summit on November 8.

Over the course of the summit, you'll hear from 18 of the most influential and forward-thinking leaders in AI, data science, engineering and ML. We're highlighting just three today, but be sure to check out all the speakers and learn more about them.

Pro tip: This is a free online event. Register now, mark your calendar and get ready for an exciting deep dive into the ML DataOps landscape.

Ready? Let's shine the data spotlight on three of the sector's leading movers and makers.

Abhijit Bose, Capital One

Abhijit Bose, currently managing VP and head of the center for machine learning at Capital One, has led AI/ML engineering teams at some of the largest tech and financial services firms such as Facebook and JP Morgan. He understands the criteria for building impactful ML platforms and what is top of mind for ML engineering teams today.

As a leading voice in machine learning in the enterprise, Bose is passionate about building world-class organizations and enterprise-wide AI platforms that advance capabilities in personalization, recommendations, ad targeting, marketing sciences and fraud/anomaly detection.

Sriram Subramanian, Microsoft

Subramanian is currently the global lead for the Data and AI domain in the FastTrack for Azure group at Microsoft. Before joining Microsoft, he was a research director at IDC covering AI/ML lifecycle management software. Major themes of his research included MLOps, trustworthy AI, AI build and data labeling software.

Prior to IDC, Subramanian founded and served as principal analyst at CloudDon, an independent market research and advisory services firm, where his research focused on advising vendors and buyers on cloud-native technologies and stacks.

Vinesh Sukumar, Qualcomm Technologies

Dr. Vinesh Sukumar currently serves as senior director of product management at Qualcomm Technologies, Inc. As head of AI/ML, he leads AI product definition, strategy and solution deployment across multiple business units.

Sukumar's nearly 20 years of industry experience span research, engineering and application deployment. He holds a doctorate degree specializing in imaging and vision systems and an MBA focused on strategy and marketing. A regular speaker at many AI industry forums, Sukumar has authored several journal papers and two technical books.

The iMerit ML DataOps Summit takes place on November 8 and will be presented across two time zones (North America and APAC). Don't miss this opportunity to learn from some of the best minds in AI, data science, engineering and ML. Register for free today!


AI is nothing without skilled human oversight – BetaNews

Posted: at 12:39 am

Artificial Intelligence (AI) remains hard to define. When it comes to a definition of "intelligence", context is vital, and it starts with what we want the AI system to do. It is specific to the application. For example, intelligence for a search engine shouldn't be the same as intelligence for an autonomous vehicle.

Now, with AI systems already in widespread production for more than a quarter of enterprises, businesses must ensure that employees are upskilled to effectively define and implement AI systems, and understand how to manage these systems safely in the workplace. But what does that look like in practice?

Invest in engagement, training and upskilling

The range of AI applications is vast, and there will be few that match the power of LaMDA and other such LLMs, for example, GPT-3 or OPT-175B. However, the story of LaMDA's 'human' conversation further highlights that organizations must be mindful of how they engage with AI systems. Such conversations must be had across the workforce before misinformation, fear, or skepticism takes hold. Beyond that, organizations must also invest in greater engagement, training and upskilling around AI -- and this must be holistic.

Over the next five years, we can expect an explosion of specialized bots within the workplace; employees will be exposed to systems that can make decisions and use language in amazing ways. However, not all employees will embrace this new world; the threat of man-to-machine replacement looms large. For those whose roles may significantly change due to the implementation of automation, it will be vital to encourage the development of a growth mindset.

This is where employees are primed for AI upskilling: presenting the future as a positive challenge and showing how AI skills will support their future career growth and success. Mindset will be a huge differentiator going forward, and companies that educate employees early and cultivate a positive AI culture will enjoy manifold benefits. This can include decisively identifying positive AI use cases early and clarifying how these implementations will benefit employees, for example, through reduced time on repetitive or mundane tasks.

The time saved on performing admin tasks can instead be used by employees to learn new skills and impact the business in a new, innovative way. For example, AI can take on repetitive, administrative tasks, such as reporting. However, it is then for the organization to enable their employees to replace that work with more engaging and strategic activities. And, when it comes to AI, it will not just be technical training that's required. Employees will also need to develop new skills to help identify new business opportunities harnessing the technology and take an active role in communication around these technologies, their benefits and risks. Either way, training will be integral.

Holistic training

As UKRI (UK Research & Innovation) highlights, "To make a success of data and AI, organizations need to look at the full AI project supply chain. This starts with identifying a business opportunity that can benefit from AI all the way through to the validation, implementation, testing and deployment. Once the product or service has been deployed, organizations must consider longer-term adoption, maintenance, risks, governance."

To realize the benefits of AI, organizations must invest in holistic training across this chain. Leaders must be clear about what AI can and cannot do, what it should and should not do, and invest in the essential role of human oversight and understanding in making AI viable.

This investment includes ensuring learning delivers clear benefits to employees and organizations alike, providing the foundations for future-proof careers built on meaningful work.

What is clear, then, is that sentience is not the goal. It is to deliver better outcomes -- for organizations, employees and society alike. That starts with engaging workforces in holistic AI learning now.

Image credit:AlienCat/depositphotos.com

Mike Loukides is VP of Emerging Tech at O'Reilly.


Superconductivity Model With 100,000 Equations Now Contains Just 4 Thanks to AI – ScienceAlert

Posted: at 12:39 am

Electrons whizzing through a grid-like lattice don't behave at all like pretty silver spheres in a pinball machine. They blur and bend in collective dances, following whims of a wave-like reality that are hard enough to imagine, let alone compute.

And yet scientists have succeeded in doing just that, capturing the motion of electrons moving about a square lattice in simulations that until now had required hundreds of thousands of individual equations to produce.

Using artificial intelligence (AI) to reduce that task down to just four equations, physicists have made their job of studying the emergent properties of complex quantum materials a whole lot more manageable.

In doing so, this computing feat could help tackle one of the most intractable problems of quantum physics, the 'many-electron' problem, which attempts to describe systems containing large numbers of interacting electrons.

It could also advance a truly legendary tool for predicting electron behavior in solid state materials, the Hubbard model, all the while bettering our understanding of how handy phases of matter, such as superconductivity, occur.

Superconductivity is a strange phenomenon that arises when a current of electrons flows unimpeded through a material, losing next to no energy as it slips from one point to another. Unfortunately, most practical means of creating such a state rely on insanely low temperatures, if not ridiculously high pressures. Harnessing superconductivity closer to room temperature could lead to far more efficient electricity grids and devices.

Since achieving superconductivity under more reasonable conditions remains a lofty goal, physicists have taken to using models to predict how electrons could behave under various circumstances, and therefore which materials make suitable conductors or insulators.

These models have their work cut out for them. Electrons don't roll through the network of atoms like tiny balls, after all, with clearly defined positions and trajectories. Their activity is a mess of probability, influenced not only by their surroundings but by their history of interactions with other electrons they've bumped into on the way.

When electrons interact, their fates can become intimately intertwined, or 'entangled'. Simulating the behavior of one electron means tracking the range of possibilities of all electrons in a model system at once, which makes the computational challenge exponentially harder.
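As a toy illustration of that exponential blow-up (my example, not from the article): each two-level particle added to a quantum system doubles the number of joint configurations a simulation must track, so the state space grows as 2^n:

```python
# Dimension of the joint state space for n two-level particles (e.g. spins):
# each added particle doubles the number of joint configurations,
# which is why many-electron simulation gets exponentially harder.
def state_space_dim(n_particles: int) -> int:
    return 2 ** n_particles

for n in (10, 20, 30):
    print(n, state_space_dim(n))
```

At just 30 particles the joint state space already exceeds a billion dimensions, which is why tracking "the range of possibilities of all electrons at once" overwhelms direct computation.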

The Hubbard model is a decades-old mathematical model that describes the confusing motion of electrons through a lattice of atoms somewhat accurately. Over the years and much to physicists' delight, the deceptively simple model has been experimentally realized in the behavior of a wide array of complex materials.

With ever-increasing computer power, researchers have developed numerical simulations based on Hubbard model physics that allow them to get a grip on the role of the topology of the underlying lattice.

In 2019, for instance, researchers proved the Hubbard model was capable of representing superconductivity at higher-than-ultra-cold temperatures, giving the green light to researchers to use the model for deeper insights into the field.

This new study could be another big leap, greatly simplifying the number of equations required. Researchers developed a machine-learning algorithm to refine a mathematical apparatus called a renormalization group, which physicists use to explore changes in a material system when properties such as temperature are altered.

"It's essentially a machine that has the power to discover hidden patterns," physicist and lead author Domenico Di Sante, of the University of Bologna in Italy, says of the program the team developed.

"We start with this huge object of all these coupled-together differential equations" each representing pairs of entangled electrons "then we're using machine learning to turn it into something so small you can count it on your fingers," Di Sante says of their approach.

The researchers demonstrated that their data-driven algorithm could efficiently learn and recapitulate dynamics of the Hubbard model, using only a handful of equations, four to be precise, and without sacrificing accuracy.
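The paper's actual machinery is a machine-learned compression of a renormalization group; as a much simpler analogy only (not the authors' method), here is how one might squeeze a large system of coupled linear differential equations onto a handful of dominant modes with a truncated SVD:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy stand-in for a large coupled linear system dx/dt = A @ x.
# This is only an analogy for compression, not the renormalization-group
# machinery used in the actual study.
n = 100
A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))

# Collect trajectory snapshots by simple Euler integration.
x = rng.standard_normal(n)
dt = 0.01
snapshots = []
for _ in range(200):
    x = x + dt * (A @ x)
    snapshots.append(x.copy())
X = np.array(snapshots).T  # shape (n, 200): one column per snapshot

# Keep only the k dominant modes of the snapshot matrix.
k = 4
U, _, _ = np.linalg.svd(X, full_matrices=False)
basis = U[:, :k]                 # (n, k) reduced orthonormal basis
A_reduced = basis.T @ A @ basis  # (k, k): four coupled equations, not 100

print(A_reduced.shape)  # (4, 4)
```

The reduced 4x4 system plays the role of the "four equations" here: a tiny surrogate capturing the dominant behavior of the full system.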

"When we saw the result, we said, 'Wow, this is more than what we expected.' We were really able to capture the relevant physics," says Di Sante.

Training the machine learning program using data took weeks, but Di Sante and colleagues say it could now be adapted to work on other, tantalizing condensed-matter problems.

The simulations thus far only capture a relatively small number of variables in the lattice network, but the researchers expect their method should be fairly scalable to other systems.

If so, it could in the future be used to probe the suitability of conducting materials for applications that include clean energy generation, or to aid in the design of materials that may one day deliver that elusive room-temperature superconductivity.

The real test, the researchers note, will be how well the approach works on more complex quantum systems such as materials in which electrons interact at long distances.

For now, the work demonstrates the possibility of using AI to extract compact representations of dynamic electrons, "a goal of utmost importance for the success of cutting-edge quantum field theoretical methods for tackling the many-electron problem," the researchers conclude in their abstract.

The research was published in Physical Review Letters.


Even an AI thinks using AI to write your homework is a bad idea – PC Gamer

Posted: at 12:39 am

Kids on Reddit have been telling tales of using OpenAI's Playground to get straight A grades in their homework. It's no secret, but when someone asked the same AI its thoughts on how it was used in this schoolyard cheating scheme, it actually made some pretty good arguments against its own use.

This tweet from MIT's Computer Science & Artificial Intelligence Laboratory (CSAIL) shows OpenAI's answer to the following prompt: "Explain the moral and social issues with using AI to do your homework."

Spoiler: it's one of resounding negativity.

"They may not be learning the material as well as they could be," the AI writes. "This could lead to problems down the road when they are expected to know the material for exams or real-world applications.

"Additionally, using AI to do homework could lead to cheating."

No sh*t. Unless you've been assigned a project specifically about using AI for school, it's definitely cheating. Maybe not if you're only using it to help generate ideas, rather than writing entire essays.

I did find a couple of references to using AI for homework across Reddit, along with a few questions about how helpful it might be from prospective cheaters, but one Reddit post sticks out as the post MIT was potentially referencing.

Urdadgirl69's post, headlined "Artificial Intelligence allows me to get straight A's," talks about using the tool to write essays, answer questions about movies and books for school projects, and even make "$100 profit by 'doing' homework for other classmates," they say. "Now I am looked at as a genius."

OpenAI's Playground tool is an AI training platform that allows users to create and train AI models. It's often used by researchers and developers to experiment with new AI models and applications. Now that the kids have cottoned on, they're using it to turn a profit among classmates, it seems.

It's a diamond hustle for sure, but as the AI explains, "It could create an unfair advantage over other students who are doing their homework on their own." Imagine half the kids in class suddenly shifting from Ds to As in a single week. A bit sus if you ask me, and you'd hope the teacher would realise something was up.

Here's what the Playground text-davinci-002 model AI gave us, as part of a request for an article about kids using AI to do their homework:

"The implications of using AI to do your homework are far-reaching. For one, it could lead to a future where AI is used to do all kinds of things that humans currently do, including things like homework.


"This could lead to a future where humans are no longer needed to do things like homework, and instead AI does it all. This could have a major impact on society, as it would likely lead to large-scale unemployment and a decrease in the quality of life for many people. It would also have a major impact on education, as it would likely lead to a decrease in the need for human teachers."

Then it took a surprisingly positive turn, and it wasn't as doom and gloom as when MIT asked it a similar question.

"It could free up a lot of time for people who currently spend a lot of time doing homework. It could also lead to better grades for people who use AI to help with their homework."

Although I'm sure teachers will have something to say about it, the AI isn't wrong. Still, it's imperative we have these conversations now, as there's a very real fear that humans could end up supplanted by AI. Artificial intelligence has already given people unfair advantages in art competitions, and the moral implications are far-reaching.

But that's a discussion for another time.

Go here to read the rest:

Even an AI thinks using AI to write your homework is a bad idea - PC Gamer

Posted in Ai | Comments Off on Even an AI thinks using AI to write your homework is a bad idea – PC Gamer

Experts warn AI assistants are hurting the social development of children – Digital Trends

Posted: at 12:39 am

The likes of Google Assistant and Alexa have been on the receiving end of privacy concerns for a while now, yet they continue to make inroads into millions of homes. But it appears they might also have a detrimental impact on children's psycho-social development and their acquisition of core skills.

According to an analysis by experts from the University of Cambridge's School of Clinical Medicine, interaction with AI assistants affects children in three ways. First is the hindrance they pose to learning opportunities.

AI assistants made by Amazon, Apple, and Google continue to improve at a remarkable pace, and with each passing year their ability to pull up relevant answers from the web grows. With answers so readily available, experts believe the traditional process of hunting for and absorbing knowledge has taken a backseat.

The real issue is that when children put a question to an adult, be it a parent or a teacher, they are often asked about the context and reasoning behind it. And when a person searches for an answer themselves, they develop critical thinking and logical reasoning while parsing for the right information, and the scope of their imagination widens.

"Children have poor understanding of how information is retrieved from the internet, where the internet is stored, and the limitations of the internet," said the report. With such unquestioning faith placed in the internet, it becomes a lot easier for young minds to absorb false information.

The cesspool of misinformation plaguing the internet needs no introduction, and platforms continue to struggle to contain it, but AI assistants are making matters worse. A 2021 Stanford research project found that Alexa, Google Assistant, and Siri each return different answers and search results for the same health queries. Adults can be trusted to make educated decisions in such a scenario, but children are at far greater risk.

Next in line is stunted social growth. Human-to-human conversation helps refine social etiquette and teaches children how to conduct themselves in the wider world. Chatting with a digital assistant doesn't offer that.

In a nutshell, AI assistants offer a poor path to learning social interaction, despite advances in natural language processing such as Google's LaMDA. Google Assistant can talk to you naturally, just like another person, but it can't teach children basic manners or train them to conduct themselves like decent human beings.

For example, there is no incentive to learn polite terms like "please" when talking to a virtual assistant living inside a puck-sized speaker, nor is there any constructive feedback. In the pandemic era, the scope for real human interaction has shrunk further, which poses an even bigger risk to the social development of young minds.

Finally, there is the problem of inappropriate responses. Not all guardians have the digital skills to configure strict parental controls, which risks exposing kids to age-inappropriate content or steering them toward harmful information. Per a BBC report from 2021, Amazon's Alexa once put a 10-year-old's life at risk by challenging them to touch a live plug with a metal coin.

The rest is here:

Experts warn AI assistants are hurting the social development of children - Digital Trends

Posted in Ai | Comments Off on Experts warn AI assistants are hurting the social development of children – Digital Trends

How robots and AI are helping develop better batteries – MIT Technology Review

Posted: at 12:39 am

Historically, researchers in materials discovery have devised and tested options through some mix of hunches, informed speculation, and trial and error. But it's a difficult and time-consuming process, simply given the vast array of possible substances and combinations, which can send researchers down numerous false paths.

"In the case of electrolyte ingredients, you can mix and match them in billions of ways," says Venkat Viswanathan, an associate professor at Carnegie Mellon, a co-author of the Nature Communications paper, and a cofounder and chief scientist at Aionics. He collaborated with Jay Whitacre, director of the university's Wilton E. Scott Institute for Energy Innovation and the co-principal investigator on the project, along with other Carnegie researchers to explore how robotics and machine learning could help.

The promise of a system like Clio and Dragonfly is that it can rapidly work through a wider array of possibilities than human researchers can, and apply what it learns in a systematic way.

Dragonfly isn't equipped with information about chemistry or batteries, so it doesn't bring much bias to its suggestions beyond the fact that the researchers select the first mixture, Viswanathan says. From there, it runs through a wide variety of combinations, from mild refinements of the original to completely out-of-the-box suggestions, homing in on a mix of ingredients that delivers better and better results against its programmed goal.
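The article doesn't spell out Dragonfly's algorithm, but the loop it describes (propose a mixture, test it, keep whatever scores best against the goal) can be sketched as a generic closed-loop black-box search. Everything below is illustrative: the ingredient count, the scoring function, and the random-perturbation proposal strategy are stand-ins, not the actual Dragonfly method.

```python
import random

def propose(best, step):
    # Perturb the current best mixture, then renormalize so fractions sum to 1.
    raw = [max(1e-6, x + random.uniform(-step, step)) for x in best]
    total = sum(raw)
    return [x / total for x in raw]

def closed_loop_search(measure, n_ingredients=3, rounds=50, step=0.2, seed=0):
    # Closed-loop search: propose a mixture, "measure" it, keep the best so far.
    random.seed(seed)
    best = [1.0 / n_ingredients] * n_ingredients  # start from an even blend
    best_score = measure(best)
    for _ in range(rounds):
        candidate = propose(best, step)
        score = measure(candidate)
        if score > best_score:  # home in on better-performing mixtures
            best, best_score = candidate, score
    return best, best_score

# Toy stand-in for a Clio-style automated experiment: some hidden "ideal"
# blend scores highest; the optimizer knows nothing about why.
IDEAL = [0.6, 0.3, 0.1]
def fake_experiment(mix):
    return -sum((a - b) ** 2 for a, b in zip(mix, IDEAL))

best_mix, best_score = closed_loop_search(fake_experiment)
```

A real system in this mold would swap the random perturbations for a smarter proposal strategy (Bayesian optimization, for instance) and replace the toy scoring function with measurements from a robotic test platform like Clio.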

In the case of battery experiments, the Carnegie Mellon team was looking for an electrolyte that would speed up the recharging time for batteries. The electrolyte solution helps shuttle ions (atoms with a net charge due to the loss or gain of an electron) between the two electrodes in a battery. During discharge, lithium ions are created at the negative electrode, known as the anode, and flow through the solution toward the positive electrode, the cathode, where they gain electrons. During charging, that process is reversed.

See the article here:

How robots and AI are helping develop better batteries - MIT Technology Review

Posted in Ai | Comments Off on How robots and AI are helping develop better batteries – MIT Technology Review

Where Will C3.ai Stock Be in 3 Years? – The Motley Fool

Posted: at 12:39 am

C3.ai (AI 0.23%) was one of the hottest tech debuts of 2020. But today, the enterprise artificial intelligence (AI) software company's stock trades nearly 70% below its initial public offering (IPO) price. C3.ai lost its luster as investors fretted over its slowing growth, ongoing losses, and high valuations. Rising interest rates exacerbated that pain. But could this out-of-favor stock recover over the next three years?

C3.ai only expects its revenue to rise 1% to 7% in fiscal 2023, which ends next April. That would represent a severe slowdown from its 38% growth in fiscal 2022 and 17% growth in fiscal 2021.

The company mainly attributes that slowdown to macroeconomic headwinds. That's because it provides most of its AI algorithms, which can be integrated into an organization's existing software infrastructure or sold as stand-alone services, to large customers in the macro-sensitive energy and industrial sectors.

Image source: Getty Images.

However, C3.ai also generates a large portion of its revenue from a joint venture (JV) with energy giant Baker Hughes. Approximately a third of C3.ai's revenue through fiscal 2025 will still likely come from Baker Hughes, based on Wall Street's top-line expectations and the current terms of the joint venture. This deal, which was renegotiated to be extended for an extra year last October, will expire in fiscal 2025.

Three troubling hints indicate this partnership could be in trouble: Baker Hughes already renegotiated lower revenue commitments to extend the agreement last year, it divested its own equity stake in C3.ai, and it invested in C3.ai's competitor Augury instead. If Baker Hughes walks away from the JV, C3.ai's revenue will plummet.

To diversify away from Baker Hughes and other large customers, C3.ai is aggressively pursuing smaller contracts from smaller customers. It also recently announced it would pivot away from subscriptions toward a usage-based model that only charges customers whenever they access its services.

However, that strategic shift raised eyebrows because enterprise software companies generally prefer to pursue larger customers, which generate higher revenue, and lock them in with sticky subscriptions. C3.ai has also gone through three CFOs since its IPO, and each CFO has slightly modified its customer counting methods and other key growth metrics.

C3.ai's slowing growth, customer concentration, management issues, mixed strategies, and ongoing losses all convinced investors that its stock didn't deserve a premium valuation. At its peak in late 2020, C3.ai was valued at $17 billion, or 93 times the sales it would actually generate in fiscal 2021. Today, it's worth just $1.4 billion, or five times this year's sales.

During C3.ai's latest conference call in late August, CEO Tom Siebel warned that its customers "appear to be expecting a recession" as they reined in their orders. Siebel also warned that the potential downturn "could be significant" and throttle its near-term growth.

Siebel believes that after rising just 1% to 7% in fiscal 2023, C3.ai's revenue will "revert to historical annual growth rates" of more than 30% in fiscal 2024 "and beyond." CFO Juho Parkkinen, who took the position in February, claims that its shift toward smaller usage-based contracts will stabilize its long-term growth. A recent expansion of its partnership with Alphabet's Google Cloud, which bundles C3.ai's AI services with the tech giant's cloud services, could also boost its sales.

Yet analysts aren't as optimistic. They expect C3.ai's revenue to rise 3% in fiscal 2023, 21% in fiscal 2024, and 19% in fiscal 2025. Those growth rates are still robust relative to its current price-to-sales ratio, but its sales could still drop off a cliff in fiscal 2026 if Baker Hughes ends its closely watched partnership.

Assuming that C3.ai matches analysts' expectations for $376 million in revenue in fiscal 2025, and it's still trading at about five times sales by then, it could be worth about $1.9 billion in three years -- which would represent a gain of nearly 40% from its current price but remain well below its IPO valuation of about $4 billion.
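That projection is simple enough to check. The figures below come straight from the article; multiplying the revenue estimate by the sales multiple gives $1.88 billion, which the author rounds to $1.9 billion to arrive at the "nearly 40%" gain.

```python
# Back-of-the-envelope check of the fiscal 2025 scenario described above.
projected_fy2025_revenue = 376e6   # analysts' revenue estimate (from the article)
price_to_sales = 5                 # roughly the current multiple (from the article)
current_market_cap = 1.4e9         # C3.ai's valuation today (from the article)

implied_market_cap = projected_fy2025_revenue * price_to_sales  # $1.88 billion
implied_gain = implied_market_cap / current_market_cap - 1      # a gain in the mid-30s, percent-wise
```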

C3.ai's stock could rise even higher if investors are willing to pay a higher premium again, but I don't see that happening until it renews its deal with Baker Hughes, significantly reduces the energy giant's weight on its top line, stops switching CFOs and reporting methods, and proves that its pursuit of smaller usage-based customers actually makes sense.

Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Leo Sun has positions in Alphabet (A shares) and C3.ai, Inc. The Motley Fool has positions in and recommends Alphabet (A shares) and Alphabet (C shares). The Motley Fool recommends C3.ai, Inc. The Motley Fool has a disclosure policy.

Read the original here:

Where Will C3.ai Stock Be in 3 Years? - The Motley Fool

Posted in Ai | Comments Off on Where Will C3.ai Stock Be in 3 Years? – The Motley Fool