The future of AI in music is now. Artificial Intelligence was in the music industry long before FN Meka. – Grid

Music has forever been moved by technology: from the invention of the phonograph, to Bob Dylan's pivot from acoustic to electric guitar, to the ubiquity of streaming platforms and, most recently, an ambitious attempt at crossing AI with commercial music.

FN Meka, introduced in 2021 as a virtual rapper whose lyrics and beats were constructed with proprietary AI technology, had a promising rise.

But just days after he signed with Capitol Records, the label that carried The Beatles, Nat King Cole and The Beach Boys, and released his debut track "Florida Water," the record company dropped him. His pink slip came in response, in part, to fans and activists who widely criticized his image, a digital avatar with face tattoos, green braids and a golden grill, and decried his blend of stereotypes and slur-infused lyrics.

The AI artist, voiced by a real person and created by a company called Factory New, was not, technologically, a groundbreaking experiment. But it was a needle-mover for a discussion that is imminent within the industry: how AI will continue to shape the way we experience music.

In 1984, classical trombonist George Lewis used three Apple II computers to program Yamaha digital synthesizers to improvise along with a live quartet. The resulting record, a syrupy and spacey co-creation of computer and human musicians, was titled "Rainbow Family" and is considered by many to be the first instance of artificially intelligent music.

In the years since, advances in mixing boards popularized the practices of sampling and interpolation, igniting debates about remixing old songs to make new ones (art form or cheap trick?), and Auto-Tune became a central tool in singers' recorded and onstage performances.

FN Meka isn't the only AI artist out there. Some have been introduced, and lasted, with less commercial backing. YONA, a virtual singer-songwriter and AI poet made by Ash Koosha, has performed live at music festivals around the globe, including MUTEK in Montreal, Rewire in the Netherlands and the Barbican in the U.K.

In fact, the most crucial and successful partnerships between AI and music have been under the hood, said Patricia Alessandrini, a composer, sound artist and researcher at Stanford University's Center for Computer Research in Music and Acoustics.

During the pandemic, the music world leaned heavily on digital tools to overcome the challenges of sharing and playing music remotely, Alessandrini said. JackTrip Virtual Studio, for example, was an online platform used to teach university music lessons while students were remote. It minimized time delay, making audiovisual synchronicity much easier, and was born from machine learning sound research.

And for producers who deal with large music files and digital compression, AI can play a role in signal processing, Alessandrini said. This is important for sound engineers and musicians alike, saving time and helping them more smoothly create, or export, big records.

There are beneficial applications for technology and music to intersect when it comes to accessibility, she said. Instruments have been made using AI to require less strength or pressure to generate sound, for example, allowing those with injuries or disabilities to play with eye movements alone.

Alessandrini's own projects include the Piano Machine, which uses computers and voltages as fingers to create new sounds, and Harp Fingers, a technology that allows users to play a harp without physically touching it.

On a meta level, algorithms are the ubiquitous drivers of online streaming platforms: Spotify, Apple Music, SoundCloud, YouTube and others constantly use machine learning, in less transparent ways, to personalize playlists, releases, lists of nearby concerts and music recommendations.
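The platforms named above do not publish their systems; a common building block of such personalization, though, is collaborative filtering, where listeners with similar histories are assumed to enjoy similar tracks. The sketch below is a toy illustration of that idea only; the catalog, play counts and function names are invented for the example.

```python
# Toy collaborative-filtering sketch (not any platform's actual system):
# recommend tracks the most similar listener played that the target has not.
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two play-count vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def recommend(target, others, catalog):
    """Pick the nearest listener, then suggest their tracks unseen by the target."""
    nearest = max(others, key=lambda o: cosine(target, o))
    return [catalog[i] for i, (mine, theirs) in enumerate(zip(target, nearest))
            if mine == 0 and theirs > 0]

catalog = ["track A", "track B", "track C", "track D"]
me = [5, 0, 3, 0]            # my play counts per track
others = [[4, 1, 3, 2],      # a listener much like me
          [0, 9, 0, 7]]      # a listener unlike me
print(recommend(me, others, catalog))  # → ['track B', 'track D']
```

Production systems replace this nearest-neighbor step with learned embeddings and ranking models, but the underlying intuition, similar listeners as a proxy for taste, is the same.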

Less agreed upon is the concept of an AI artist itself. Reactions have been split among those loyal to the humanity of art; some who argue that if certain artists are indistinguishable from AI, then they deserve to be replaced; others who welcome the newness; and many whose feelings fall somewhere in between.

"With any cultural form, part of what you're dealing with are people's expectations for what things sound like or what an artist looks like," Oliver Wang, a music writer and sociology professor at California State University, Long Beach, told Grid.

Some experts argue that those questions leave out a critical point: Whatever the technology, there is always a human behind the work, and that should count.

"Sometimes people don't know or see how much human work is behind artificial intelligence," said Adriana Amaral, a professor at UNISINOS in Brazil and an expert in pop culture, influencers and fan studies. "It's a team of people: developers, programmers, designers, people from production and marketing."

But this misunderstanding isn't always the fault of the public, said Alessandrini. It often comes down to marketing. "It's more exciting to say that something's made entirely by AI," Alessandrini said. This was how FN Meka was marketed and promoted online: as an AI artist. But while his lyrics, sound and beats were AI-generated, they were then performed by a human and animated, cartoon-style.

If it sounds strange that one would become a dedicated fan of a virtual persona, it shouldn't, Amaral said. The world of competitive video gaming, which is nothing without its on-screen characters, is a multibillion-dollar industry that sells out arenas worldwide.

Still, music purists and audiophiles, and any person who appreciates music as an experience rather than just entertainment, may very well resist AI musicians. In particular, Alessandrini said, AI is better at generating content faster and copying genres, though unable to innovate new ones, a result of training its computing models largely on the music that already exists.

"When a rap artist has these different influences and their own specific cultural experience, then that's the kind of magical thing that they use to create," Alessandrini said. "You can say that Bobby Shmurda is one of the first Brooklyn drill artists because of a particular song. So that's a [distinctly] human capacity, compared to AI."

Alessandrini likens this artistic experience to the advancements of AI in medicine: the robotic technologies used during surgeries that are more efficient and mitigate the risk of human error. But, she said, there are some things that humans do better: caring for a patient, understanding their suffering.

It's hard to imagine AI vocals ever reaching the emotional and beautifully human depths of, say, a Nina Simone or an Ann Peebles, or channeling the authentic camaraderie and bounce of a group like OutKast.

In 2017, the French government commissioned mathematician and politician Cédric Villani to lay ambitious groundwork for the country's artificial intelligence (AI) future.

His strategy, one that considered economics, ethics and education, foremost straddled the thinning line between creation and consumption.

"The division between the noncreative machine and the creative human is ever less clear-cut," he wrote. Creativity, he went on to say, was no longer just an artist's skill; it was a necessary tool for a world of co-inhabitance, machine and human together.

Is that what is happening?

One can't talk about music on grand scales without also talking about money. Though FN Meka was a failure, AI has strong ties to the music sphere that won't be broken because one AI rapper got cut from a label. And it feels inevitable that another big record company or music festival will give it a go.

Why? It might all come down to cost, say experts and music listeners who run the cynicism gamut.

Wang said he has a sneaking suspicion that record companies and executives see AI musicians as a way to save money on royalty payments and travel costs moving forward.

Beyond the money-hungry music industry, there is also room for a lot of good moving forward with AI, said Amaral. She hopes FN Meka's image, and how he was received, was a wake-up call for whatever AI artist inevitably comes next. She also mentioned YONA, whom she saw in concert in Japan, as a thin, white, able-bodied pop star not unlike many who dominate the music scene today.

"We have all the technological tools to make someone who could be green, or fat, or any way we like, and we still are stuck on these patterns," she said.

"What will the landscape look like five or 10 or 15 years from now?" Wang asks. Pop music, despite people's cynicism, rarely stays static. It's constantly changing, and perhaps these computer-based attempts at creating artists will be part of that change.

Thanks to Dave Tepps for copy editing this article.


Inauguration ceremony of CAAI Artificial Intelligence Research successfully held on CICAI 2022 – EurekAlert

Image: Qionghai Dai and Lei Shi launched the journal. (Photo credit: CICAI 2022)

Credit: CAAI Artificial Intelligence Research

On August 28, 2022, the CAAI International Conference on Artificial Intelligence (CICAI 2022) was held in Beijing, China. CAAI Artificial Intelligence Research, an open-access international academic journal jointly sponsored by the Chinese Association for Artificial Intelligence (CAAI) and Tsinghua University, was officially released at the conference.

The journal is one of the high-starting-point new journal projects in the Excellence Action Plan for China Science and Technology Journals, aiming to reflect state-of-the-art achievements in the field of artificial intelligence (AI) and its applications. The journal is published quarterly by Tsinghua University Press and publicly released on SciOpen, an international digital publishing platform for science and technology journals developed by Tsinghua University Press.

Prof. Qionghai Dai, Editor-in-Chief of CAAI Artificial Intelligence Research, delivered a speech at the inauguration ceremony and announced the official release of the journal together with Mr. Lei Shi, Director of Journal Publishing, Tsinghua University Press. Prof. Fuchun Sun, Executive Editor-in-Chief of the journal, presided over the ceremony and improvised a poem.

CAAI Artificial Intelligence Research aims to provide AI researchers and practitioners with an international stage to share significant findings. Original research and review articles from all over the world are welcome, with rigorous peer review and professional publishing support.

The editorial board of the journal consists of 24 top AI scientists from around the world. After a double-blind peer-review process, three original research articles and three review articles were accepted for the first issue, focusing on hotspot topics such as automatic speech recognition (ASR), noncooperative games, telecommunication AI, the metaverse, generative adversarial networks (GANs), and multi-label image classification.

The articles are fully open access on the journal homepage: https://www.sciopen.com/journal/2097-194X.

Readers are invited to submit suggestions, feedback and questions to journal-ai@tsinghua.edu.cn.


About CAAI Artificial Intelligence Research

CAAI Artificial Intelligence Research is a peer-reviewed journal jointly sponsored by the Chinese Association for Artificial Intelligence (CAAI) and Tsinghua University. The journal aims to reflect state-of-the-art achievements in the field of artificial intelligence and its applications, including knowledge intelligence, perceptual intelligence, machine learning, behavioral intelligence, brain and cognition, and AI chips and applications. Original research and review articles from all over the world are welcome, with rigorous peer review and professional publishing support.

About SciOpen

SciOpen is a professional open-access resource for the discovery of scientific and technical content published by Tsinghua University Press and its publishing partners, providing the scholarly publishing community with innovative technology and market-leading capabilities. SciOpen provides end-to-end services across manuscript submission, peer review, content hosting, analytics, identity management and expert advice to support each journal's development, offering a range of options across all functions, such as journal layout, production services, editorial services, marketing and promotion, and online functionality. By digitalizing the publishing process, SciOpen widens the reach, deepens the impact, and accelerates the exchange of ideas.

Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.


Master’s in Artificial Intelligence | Hopkins EP Online

With the expertise of the Johns Hopkins Applied Physics Lab, we've developed one of the nation's first online artificial intelligence master's programs to prepare engineers like you to take full advantage of opportunities in this field. The highly advanced curriculum is designed to deeply explore areas of AI, including computer robotics, natural language processing, image processing, and more.

We have assembled a team of top-level researchers, scientists, and engineers to guide you through our rigorous online academic courses. Because we are a hub and frontrunner in artificial intelligence, we can tailor our online master's content to include the most up-to-date practices and offer core courses that address the AI-driven technologies, techniques, and issues that power our modern world.

The online master's in Artificial Intelligence program balances theoretical concepts with practical knowledge you can apply to real-world systems and processes. Courses deeply explore areas of AI, including robotics, natural language processing, image processing, and more, fully online.

At the program's completion, you will:


Artificial Intelligence and Democratic Values: Next Steps for the United States – Council on Foreign Relations

More than fifty years after a research group at Dartmouth College launched work on a new field called artificial intelligence, the United States still lacks a national strategy on artificial intelligence (AI) policy. The growing urgency of this endeavor is made clear by the rapid progress of both U.S. allies and adversaries.

Europe is moving forward with two initiatives of far-reaching consequence. The EU Artificial Intelligence Act will establish a comprehensive, risk-based approach to the regulation of AI when it is adopted in 2023. Many anticipate that the EU AI Act will extend the "Brussels Effect" across the AI sector as the earlier European data privacy law, the General Data Protection Regulation, did for much of the tech industry.


The Council of Europe is developing the first international AI convention, aiming to protect fundamental rights, democratic institutions, and the rule of law. Like the Council of Europe Convention on Cybercrime and the Privacy Convention, the AI Convention will be open for ratification by member and non-member states. The Cybercrime Convention remains influential, as Canada, Japan, the United States, and several South American countries have signed on.


China is also moving forward with an aggressive regulatory strategy to complement its goal of being the world leader in AI by 2030. China recently matched the GDPR with the Personal Information Protection Law, along with a new regulation on recommendation algorithms with provisions similar to the EU's Digital Services Act. The Chinese regulatory model will likely influence countries in Africa and Asia that are part of the Belt and Road Initiative, and give rise to a possible "Beijing Effect."

The United States has done an admirable job maintaining a coherent policy in the executive branch over the Obama, Trump, and Biden administrations, highlighting key values and promoting an aggressive research agenda. In the 2019 Executive Order on Maintaining American Leadership in AI, the United States said it would foster public trust and confidence in AI technologies and protect civil liberties, privacy, and American values in their application. Promoting the Use of AI in the Federal Government established principles for the development and use of AI that are consistent with American values and beneficial to the public.

The United States also played a leading role at the Organization for Economic Cooperation and Development (OECD) with the development and adoption of the OECD AI Principles, the first global framework for AI policy. Those principles, which emphasize human-centric and trustworthy AI, were later adopted by the G-20 nations and are now endorsed by more than 50 countries, including Russia and China.

But the United States was out of the loop when the UN Educational, Scientific, and Cultural Organization (UNESCO) adopted the Recommendation on AI Ethics, now the most comprehensive framework for global AI policy, which addresses emerging issues such as AI and climate change and gender equity.

"Democratic values" is a key theme as the United States seeks to draw a sharp distinction between the deployment of technologies that advance open, pluralist societies and those that centralize control and enable surveillance. As Secretary Blinken explained last year, "More than anything else, our task is to put forth and carry out a compelling vision for how to use technology in a way that serves our people, protects our interests and upholds our democratic values." But absent a legislative agenda or a clear statement of principles, neither allies nor adversaries are clear about U.S. AI policy objectives.


The United States has run into similar problems with the Trade and Technology Council (TTC), an effort to align U.S. and EU tech policy around shared values. The inaugural joint statement laid a foundation for EU-U.S. cooperation on AI in the fall of 2021, but the war in Ukraine has upended transatlantic priorities, and it remains unclear at this point whether the TTC will regain focus on a common AI policy.

A similar challenge confronts EU and U.S. leaders on new rules for transatlantic data flows. After two earlier decisions from Europe's high court finding that the United States lacked adequate privacy protection for the transfer of personal data, lawmakers on both sides of the Atlantic worried that data flows could be suspended, as the Irish privacy commissioner has recently threatened. President Biden and President von der Leyen announced an agreement in principle in May, but several months later there is still no public text for review.

To restore leadership in the AI policy domain, the United States should move forward with the policy initiative launched last year by the Office of Science and Technology Policy (OSTP). The science office outlined many of the risks of AI, including embedded bias and widespread surveillance, and called for an AI Bill of Rights. OSTP said, "Our country should clarify the rights and freedoms we expect data-driven technologies to respect." The White House supported the initiative and encouraged Americans to join the effort to create a bill of rights for an automated society.

We strongly support this initiative. After an extensive review of AI policies and practices in 50 countries, we identified the AI Bill of Rights as possibly the most significant AI policy initiative in the United States. But early progress has stalled. The delay has real consequences for Americans who are subject to automated decision-making in their everyday lives, with little transparency or accountability. Foreign governments are also looking for U.S. leadership in this rapidly evolving field. Progress on the AI Bill of Rights initiative will help build trust and restore U.S. leadership.

Last year, the Office of Science and Technology Policy stated clearly, "Powerful technologies should be required to respect our democratic values and abide by the central tenet that everyone should be treated fairly." That should be the cornerstone of a U.S. national AI policy, and that policy will advance international norms for the governance of AI.

Marc Rotenberg is president of the Center for AI and Digital Policy (CAIDP), author of the forthcoming Law of Artificial Intelligence (West Academic 2023), and a life member of CFR. Merve Hickok is the research director of CAIDP and founder of AIethicist.org.


SCOPA: Intersection of artificial intelligence and telemedicine – Optometry Times

Optometry Times' Alex Delaney-Gesing speaks with Leo P. Semes, OD, FAAO, professor emeritus of optometry at the University of Alabama at Birmingham, about the highlights and key takeaways from his discussion titled "Artificial intelligence and telemedicine," presented during the 115th annual South Carolina Optometric Physicians Association (SCOPA) meeting in Hilton Head, South Carolina.

Editor's note: this transcript has been lightly edited for clarity.

Could you share a highlights version of your presentation?

Artificial intelligence (AI) is a topic that I've been following for probably 5 or so years. And as I dug into the history, it's quite interesting; it really began back in the 1930s. So it has quite a long history. It's based on algorithms, whether that algorithm is something as simple as how you do addition of big numbers or long division.

The algorithm for looking at, for example, a patient with diabetic retinopathy involves specifying the severity of the disease and then using that as a determination for treatment. And then, if the patient is treated, following that patient to see if there is stagnation, stability of the diabetic retinopathy, or regression, which is what we're hoping for.

And some of the AI paradigms now demonstrate that there is the possibility of regression of diabetic retinopathy, from a physical standpoint, of how the retina looks, and also in terms of visual performance. And that's what to me is probably the most exciting aspect of what we can do with AI; to say, "Okay, this is a patient who's got a certain level of diabetic retinopathy; the patient qualifies for treatment." Then 3 months following treatment, yes, the retina looks better, but they also have improvement in visual performance.

So visual acuity, quantitatively, the numbers look better. And as a consequence of that, patients could enjoy a better lifestyle.

Why would you say this is such an important topic of discussion?

Well, one of the reasons is that, aside from age-related macular degeneration (AMD), one of the major causes of vision loss, especially among the working-age population, is diabetic retinopathy (DR). And it's estimated that a segment of the population, perhaps as high as 25%, has pre-diabetes. So patients presenting for a vision exam, or vision irregularities, or even a periodic examination might be found to have certain changes that relate to DR. And then a diagnosis is made, and the patient can be managed systemically as well as ocularly.

What are the key takeaways you'd like attendees to learn from this?

Probably the biggest thing is going to be the new staging paradigms for DR and how those relate to when a patient is going to need treatment. And if the patient is not at high risk and not a candidate for treatment, then emphasizing to the patient the importance of maintaining systemic management strategies and regular ophthalmic exams.
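The workflow Dr. Semes describes, grade the severity, then decide between treatment referral and monitoring, can be sketched as a simple triage rule. The sketch below is purely illustrative: the grade labels loosely follow the International Clinical Diabetic Retinopathy scale, but the thresholds and the confidence cutoff are invented for the example and are not clinical guidance or any deployed system's logic.

```python
# Illustrative triage sketch: map an AI-produced severity grade plus model
# confidence to a management suggestion. All thresholds are hypothetical.
GRADES = {0: "no DR", 1: "mild NPDR", 2: "moderate NPDR",
          3: "severe NPDR", 4: "proliferative DR"}

def triage(grade, confidence, min_confidence=0.85):
    if confidence < min_confidence:
        return "refer for human grading"        # model unsure: defer to a clinician
    if grade >= 3:
        return "refer for treatment evaluation" # high-risk stages
    if grade >= 1:
        return "monitor; regular ophthalmic exams"
    return "routine screening interval"

print(triage(4, 0.97))  # high-confidence proliferative DR
print(triage(1, 0.60))  # low confidence falls back to a human grader
```

The point of the sketch is the structure, not the numbers: the staging paradigm determines the branch, and an uncertain model output is routed back to a human.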


Human-level AI is a giant risk. Why are we entrusting its development to tech CEOs? – Salon

Technology companies are racing to develop human-level artificial intelligence, whose development poses one of the greatest risks to humanity. Last week, John Carmack, a software engineer and video game developer, announced that he has raised $20 million to start Keen Technologies, a company devoted to building fully human-level AI. He is not the only one. There are currently 72 projects around the world focused on developing human-level AI, also known as AGI, meaning an AI that can do any cognitive task at least as well as humans can.

Many have raised concerns about the effects that even today's use of artificial intelligence, which is far from human-level, already has on our society. The rise of populism and the Capitol attack in the United States, the Tigray War in Ethiopia, increased violence against Kashmiri Muslims in India, and a genocide directed toward Rohingya in Myanmar, have all been linked to the use of artificial intelligence algorithms in social media. Social media sites employing these technologies showed a proclivity for showing hateful content to users because it identified such posts as popular and thus profitable for social media companies; this, in turn, caused egregious harm. This shows that even for current AI, deep concern for safety and ethics are crucial.

But the plan of cutting-edge tech entrepreneurs is now to build way more powerful human-level AI, which will have much larger effects on society. These effects could, in theory, be very positive: automating intelligence could for example release us from work that we prefer not to do. But the negative effects could be as large or even larger.

Oxford academic Toby Ord spent close to a decade trying to quantify the risks of human extinction due to various causes, and summarized the results in a book aptly titled "The Precipice." Supervolcanoes, asteroids, and other natural causes, according to this rigorous academic work, have only a slight chance of leading to complete human extinction. Nuclear war, pandemics, and climate change rank somewhat higher. But what trumps this apocalyptic ranking exercise? You guessed it: human-level artificial intelligence.

And it's not just Ord who believes that full human-level AI, as opposed to today's relatively impotent vanilla version, could have extremely dire consequences. The late Stephen Hawking, tech CEOs such as Elon Musk and Bill Gates, and AI academics such as the University of California, Berkeley's Stuart Russell have all warned publicly that human-level AI could lead to nothing short of disaster, especially if developed without extreme caution and deep consideration of safety and ethics.

And who's now going to build this extremely dangerous technology? People like John Carmack, a proponent of "hacker ethics" who previously programmed kids' video games like "Commander Keen." Is Keen Technologies now going to build human-level AI with the same regard for safety? Asked on Twitter about the company's mission, Carmack replied "AGI or bust, by way of Mad Science!"

A democratic society should not let tech CEOs determine the future of humanity without regard for ethics or safety.

Carmack's lack of concern for this kind of risk is nothing new. Before starting Keen Technologies, Carmack worked side by side with Mark Zuckerberg at Facebook, the company responsible for most of the harmful impacts of AI described earlier. Facebook applied technology to society without any regard for the consequences, fully in line with their motto "Move fast and break things." But if we are going to build human-level AI that way, the thing to be broken might be humanity.

In the interview with computer scientist Lex Fridman in which Carmack announced his new AGI company, Carmack shows outright disdain for anything that restricts the unfettered development of technology and maximization of profit. According to Carmack, "Most people with a vision are slightly less effective." Regarding the "AI ethics things," he says: "I really stay away from any of those discussions or even really thinking about it." People like Carmack and Zuckerberg might be good programmers, but are simply not wired to take the big picture into account.

If they can't, we must. A democratic society should not let tech CEOs determine the future of humanity without regard for ethics or safety. Therefore, we all have to inform ourselves about human-level AI, especially non-technologists. We have to reach a consensus on whether human-level AI indeed poses an existential threat to humanity, as most AI safety and existential risk academics say. And we have to find out what to do about it, where some form of regulation seems inevitable. The fact that we don't yet know what manner of regulation would effectively reduce risk should not be a reason for regulators not to address the issue, but rather a reason to develop effective regulation with the highest priority. Nonprofits and academics can help in this process. Not doing anything, and thus letting people like Carmack and Zuckerberg determine the future for all of us, could very well lead to disaster.



Headroom Solves Virtual Meeting Fatigue with Artificial Intelligence that Eliminates Wasted Time and Reveals Essential Highlights – Business Wire

SAN FRANCISCO--(BUSINESS WIRE)--Headroom, a meeting platform leveraging artificial intelligence to improve communications and productivity, today announced a $9 million investment led by Equal Opportunity Ventures with participation from Gradient Ventures, LDV Capital, AME Cloud Ventures and Morado Ventures. The capital brings total funding to date to $14 million and will be used to expand Headroom's team, product development and mobile offering. The company also recently added new Shareable Automatic Summaries to its suite of tools for remote and hybrid meetings, furthering its mission to support balanced, entertaining, productive and memorable meetings.

Virtual meetings have become the de facto method for gathering, connection and collaboration. According to Fortune Business Insights, the meeting collaboration market is expected to exceed $41 billion by 2029. Gartner predicts that by 2025, 75% of conversations at work will be recorded and analyzed, enabling the discovery of added organizational value or risk. Yet despite the increase in meetings, productivity and engagement rates are down. Even before the start of the pandemic, a Harvard Business Review survey revealed 65% of senior managers felt meetings kept them from completing their own work and 64% said meetings come at the expense of deep thinking. Smarter meetings may be the biggest opportunity for improved work productivity and satisfaction.

"The more meetings held, the more time wasted, with too many people spending time in redundant meetings. Headroom is leveraging AI to help companies do more with less, enabling individual workers to be more productive, choose which meetings to attend and which to watch later, or just quickly get the key pieces of information discussed," said Julian Green, CEO and co-founder of Headroom. "Particularly in this environment, where for startups every dollar and every meeting minute counts, those that can move faster and stay better connected with people wherever they are, in real time and asynchronously, will win."

Headroom is self-learning; its relevance and impact on productivity improve with use. Headroom data shows that 90% of meeting content lacks useful information. To maximize the 10% that is helpful, the company developed Shareable Automatic Summaries, which auto-generate highlight reels that capture key moments, shared notes and action items, and enable easy sharing with others. The platform also offers additional functionality that maximizes synchronous and asynchronous communication.

"Hybrid work is here to stay and virtual meetings are the norm, but they allow for a wide margin of distraction," said Roland Fryer, Founder and Managing Partner at Equal Opportunity Ventures and newly appointed Headroom Board Member. "Headroom at its core is an engagement and productivity platform, streamlining collaboration and information sharing without a heavy lift. It saves time in scheduling, reporting and collaborating."

"Simply put: meetings should be better. Unlike any other video communication and collaboration platform, Headroom is stateful. Meeting information is generated during live conversations, and can be augmented and accessed forever after. Participants are free to act naturally and engage with the information without being restricted by the actual meeting slot," said Andrew Rabinovich, CTO and Co-Founder of Headroom. "Those who didn't attend the meeting itself have all the details readily available to them. With Headroom, this is automated, and highlights go to non-attendee stakeholders who can replay key decisions. Our customers are also using it as an information resource they can search for key information later."

Headroom was co-founded by Julian Green and Andrew Rabinovich in 2020. The executive team's experience spans founding and leadership roles at GoogleX, Houzz, Magic Leap, Patreon and Square. Headroom's platform currently serves more than 5,000 customers spanning technology and online education startups, as well as marketing, design, consulting and recruiting agencies. It is free with no usage caps or storage limits, and is available on Google Chrome with no download or app required. Users have full control over sharing of meeting information. Get started at https://www.goheadroom.com/.

ABOUT HEADROOM

Headroom, founded in 2020, is improving communication in meetings by augmenting meeting intelligence. Automated virtual meetings in Headroom allow attendees to act naturally, replay key decisions, build smart summaries and search everything later. Headroom is brought to you by an experienced team that has created and managed AI products used by billions of people at tech startups and large companies including Google and Magic Leap. The founders helped create the world's leading computer vision, augmented reality and virtual reality products, started unicorns, and have won a Webby. To get started with Headroom visit https://www.goheadroom.com/.

Visit link:
Headroom Solves Virtual Meeting Fatigue with Artificial Intelligence that Eliminates Wasted Time and Reveals Essential Highlights - Business Wire

Before Python was imposed, these were the languages with which artificial intelligence was developed – Gearrice

Today, learning artificial intelligence has almost become synonymous with learning to program in Python. This programming language, created by Guido van Rossum in 1991, is by far the most used today in artificial intelligence projects, especially in the field of machine learning.

This is helped not only by Python's popularity as a general-purpose programming language (and in related fields, such as data analysis), but also by the fact that all the major AI libraries (Keras, TensorFlow, SciPy, Pandas, Scikit-learn, etc.) are designed to work with it.

Nevertheless, artificial intelligence is much older than Python, and other languages stood out in this field for decades before its arrival. Let's take a look at what they were:

The Information Processing Language (IPL) is a low-level language (almost as low as assembly) created in 1956 to show that the theorems expressed in Principia Mathematica by the mathematicians and philosophers Bertrand Russell and Alfred North Whitehead could be proved by means of computation.

IPL introduced programming features that are still fully relevant today, such as symbols, recursion and the use of lists. The latter, a data type so flexible that a list could contain another list as an element (which in turn could contain another list, and so on), proved fundamental when IPL was used to develop the first AI programs, such as Logic Theorist (1956) or the chess program NSS (1958).
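The idea IPL pioneered, lists that can contain other lists and are naturally processed by recursion, is easiest to see in a modern language. A minimal sketch in Python (illustrative only; the function name and example data are ours, not IPL's):

```python
def count_atoms(item):
    """Recursively count the non-list elements ("atoms") in an
    arbitrarily nested list, the data shape IPL made practical."""
    if not isinstance(item, list):
        return 1  # base case: a plain element
    # recursive case: a list whose elements may themselves be lists
    return sum(count_atoms(element) for element in item)

nested = [1, [2, [3, 4]], [[5]]]
print(count_atoms(nested))  # 5
```

The same pattern, a base case for atoms plus a recursive case for sublists, underlies how Logic Theorist represented and transformed logical expressions.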

Despite its importance in the history of AI, several factors (first among them the complexity of its syntax) meant that it was quickly replaced by the next language on this list.

LISP is the oldest programming language dedicated to artificial intelligence that is still in use, and it is also the second high-level programming language in history: it was created in 1958 (one year after FORTRAN and one year before COBOL) by John McCarthy, who two years earlier had coined the term artificial intelligence.

Shortly before, McCarthy had developed a language called FLPL (FORTRAN List Processing Language), an extension of FORTRAN, and he decided to gather in a single language the high-level nature of FLPL, all the innovations introduced by IPL, and the formal system known as lambda calculus. The result was named LISP (for LISt Processor).

At the same time that he was developing FLPL, McCarthy was also formulating so-called alpha-beta pruning, a search technique that reduces the number of nodes evaluated in a game tree. And, to implement it, he introduced a now-fundamental programming construct: the if-then-else structure.
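To make the technique concrete, here is a minimal sketch of alpha-beta pruning in Python (our own illustrative implementation and example tree, not McCarthy's original code): the search stops exploring a branch as soon as it can prove the branch cannot change the final decision.

```python
def alpha_beta(node, depth, alpha, beta, maximizing):
    """Return the minimax value of `node`, pruning branches that
    cannot affect the result. Leaves are numbers; internal nodes
    are lists of child nodes."""
    if depth == 0 or not isinstance(node, list):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alpha_beta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:  # beta cutoff: the minimizer will avoid this branch
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alpha_beta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:  # alpha cutoff: the maximizer already has better
                break
        return value

# A small three-ply game tree; its minimax value is 6.
tree = [[[5, 6], [7, 4, 5]], [[3]], [[6], [6, 9]]]
print(alpha_beta(tree, 3, float("-inf"), float("inf"), True))  # 6
```

Note the nested if-then-else decisions driving the cutoffs: that is precisely the control structure McCarthy needed the construct for.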

Programmers quickly fell in love with the freedom the language offered them, its flexibility, and its usefulness as a prototyping tool. Thus, for the next quarter of a century, LISP was the reference language in the field of AI. Over time, LISP fragmented into a whole family of dialects still in use in various areas of computing, such as Common Lisp, Emacs Lisp, Clojure, Scheme and Racket.

The language PROLOG (from the French programmation en logique, "programming in logic"), which we have covered on other occasions, was born at a difficult time for the development of artificial intelligence: at the gates of the first AI winter, when the initial furor over the technology's applications collided with the skepticism caused by the lack of progress, leading to public and private disinvestment in its development.

Specifically, it was created in 1972 by the French computer science professor Alain Colmerauer, with the aim of bringing Horn clauses, a restricted form of logical formula, into software development. Although it never became as widely used globally as LISP, it did become the main AI development language on its home continent (as well as in Japan).
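A Horn clause is a rule with at most one conclusion ("head") and zero or more premises ("body"); a fact is simply a rule with an empty body. A minimal sketch in Python of inference over propositional Horn clauses (illustrative names and knowledge base are ours; this uses forward chaining for brevity, whereas PROLOG itself answers queries by backward chaining with SLD resolution):

```python
def forward_chain(rules):
    """rules: list of (head, body) pairs, where body is a list of atoms
    that must all be known before head can be derived.
    Returns the set of all derivable atoms."""
    known = set()
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            # Fire the rule if every premise is already known.
            if head not in known and all(atom in known for atom in body):
                known.add(head)
                changed = True
    return known

kb = [
    ("mortal(socrates)", ["man(socrates)"]),  # mortal(X) :- man(X), grounded
    ("man(socrates)", []),                    # a fact: empty body
]
print(forward_chain(kb))  # both atoms are derived
```

The one-head restriction is what makes this loop efficient and deterministic; it is the same property that makes Horn clauses tractable as a basis for a programming language.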

Being based on the declarative programming paradigm, like LISP, its syntax is very different from that of typical imperative programming languages such as Python, Java or C++. The ease with which PROLOG handles recursion and pattern matching led IBM to implement PROLOG in IBM Watson for natural language processing tasks.

PROLOG code example in the SWI-Prolog IDE.

An earlier version of this article was published in 2021.

The rest is here:
Before Python was imposed, these were the languages with which artificial intelligence was developed - Gearrice

Researchers Using Artificial Intelligence to Assist With Early Detection of Autism Spectrum Disorder – University of Arkansas Newswire

Photo by University Relations

Khoa Luu and Han-Seok Seo

Could artificial intelligence be used to assist with the early detection of autism spectrum disorder? That's a question researchers at the University of Arkansas are trying to answer. But they're taking an unusual tack.

Han-Seok Seo, an associate professor with a joint appointment in food science and the UA System Division of Agriculture, and Khoa Luu, an assistant professor in computer science and computer engineering, will identify sensory cues from various foods in both neurotypical children and those known to be on the spectrum. Machine learning technology will then be used to analyze biometric data and behavioral responses to those smells and tastes as a way of detecting indicators of autism.

There are a number of behaviors associated with ASD, including difficulties with communication, social interaction or repetitive behaviors. People with ASD are also known to exhibit some abnormal eating behaviors, such as avoidance of some if not many foods, specific mealtime requirements and non-social eating. Food avoidance is particularly concerning, because it can lead to poor nutrition, including vitamin and mineral deficiencies. With that in mind, the duo intend to identify sensory cues from food items that trigger atypical perceptions or behaviors during ingestion. For instance, odors like peppermint, lemons and cloves are known to evoke stronger reactions from those with ASD than those without, possibly triggering increased levels of anger, surprise or disgust.

Seo is an expert in the areas of sensory science, behavioral neuroscience, biometric data and eating behavior. He is organizing and leading this project, including screening and identifying specific sensory cues that can differentiate autistic children from non-autistic children with respect to perception and behavior. Luu is an expert in artificial intelligence with specialties in biometric signal processing, machine learning, deep learning and computer vision. He will develop machine learning algorithms for detecting ASD in children based on unique patterns of perception and behavior in response to specific test samples.

The duo are in the second year of a three-year, $150,000 grant from the Arkansas Biosciences Institute.

Their ultimate goal is to create an algorithm that performs as well as or better than traditional diagnostic methods in the early detection of autism in children; those methods require evaluations by trained healthcare and psychological professionals, longer assessment durations, caregiver-submitted questionnaires and additional medical costs. Ideally, they will be able to validate a lower-cost mechanism to assist with the diagnosis of autism. While their system would not likely be the final word in a diagnosis, it could provide parents with an initial screening tool, ruling out children who are unlikely to have ASD while ensuring the most likely candidates pursue a more comprehensive screening process.

Seo said that he became interested in the possibility of using multi-sensory processing to evaluate ASD when two things happened: he began working with a graduate student, Asmita Singh, who had a background in working with autistic students, and his daughter was born. Like many first-time parents, Seo paid close attention to his newborn baby, anxious that she be healthy. When he noticed she wouldn't make eye contact, he did what most nervous parents do: turned to the internet for an explanation. He learned that avoidance of eye contact was a known characteristic of ASD.

While his child did not end up having ASD, his curiosity was piqued, particularly about the role sensitivities to smell and taste play in ASD. Further conversations with Singh led him to believe fellow anxious parents might benefit from an early detection tool, one that could inexpensively alleviate concerns at the outset. Later conversations with Luu led the pair to believe that if machine learning, developed by his graduate student Xuan-Bac Nguyen, could be used to identify typical reactions to food, it could be taught to recognize atypical responses as well.

Seo is seeking volunteers aged 5-14 to participate in the study. Both neurotypical children and children already diagnosed with ASD are needed. Participants receive a $150 eGift card, and families are encouraged to contact Seo at hanseok@uark.edu.

About the University of Arkansas: As Arkansas' flagship institution, the UofA provides an internationally competitive education in more than 200 academic programs. Founded in 1871, the UofA contributes more than $2.2 billion to Arkansas' economy through the teaching of new knowledge and skills, entrepreneurship and job development, discovery through research and creative activity while also providing training for professional disciplines. The Carnegie Foundation classifies the UofA among the few U.S. colleges and universities with the highest level of research activity. U.S. News & World Report ranks the UofA among the top public universities in the nation. See how the UofA works to build a better world at Arkansas Research News.

More here:
Researchers Using Artificial Intelligence to Assist With Early Detection of Autism Spectrum Disorder - University of Arkansas Newswire

Chips-Plus Artificial Intelligence In The CHIPS Act Of 2022 – New Technology – United States – Mondaq

26 August 2022

Akin Gump Strauss Hauer & Feld LLP


On August 9, 2022, President Biden signed the CHIPS Act of 2022 (the "Act"), legislation to fund domestic semiconductor manufacturing and boost federal scientific research and development (see our previous alert for additional background). As part of its science-backed provisions, the Act includes many of the U.S. Innovation and Competition Act's (USICA) original priorities, such as promoting standards and research and development in the field of artificial intelligence (AI) and supporting existing AI initiatives.

The Act directs the National Institute of Standards and Technology (NIST) Director to continue supporting the development of AI and data science and to carry out the National AI Initiative Act of 2020 (see our previous alert for additional background), which created a coordinated program across the federal government to accelerate AI research and application to support economic prosperity and national security, and to advance AI leadership in the United States. The Director will further the goals of the National AI Initiative Act of 2020 by:

Furthermore, the Act provides that the Director may establish testbeds, including in virtual environments, in collaboration with other federal agencies, the private sector and colleges and universities, to support the development of robust and trustworthy AI and machine learning systems.

A new National Science Foundation (NSF) Directorate for Technology, Innovation and Partnerships (the "Directorate") is established under the Act to address societal, national and geostrategic challenges for the betterment of all Americans through research and development, technology development and related solutions. Over the next five years, the new Directorate will receive $20 billion in funding. Moreover, the Directorate will focus on 10 key technology focus areas, including AI, machine learning, autonomy, related advances, robotics, automation, advanced manufacturing and quantum computing, among other areas.

Within the Department of Energy (DOE), the Act authorizes $11.2 billion for research, development and demonstration activities and to address energy-related supply chain activities in the ten key technology focus areas prioritized by the new NSF Directorate. Further, the Act authorizes $200 million for the DOE's Office of Environmental Management to conduct research, development and demonstration activities, including in the fields of AI and information technology.

The Act directs the NSF Director to submit to the relevant House and Senate congressional committees a report outlining the need, feasibility and plans for implementing a program for recruiting and training the next generation of AI professionals. The report will evaluate the feasibility of establishing a federal AI scholarship-for-service program to recruit and train the next generation of AI professionals.

The Akin Gump cross-practice AI team continues to actively monitor forthcoming congressional and administrative initiatives related to AI.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.


Go here to read the rest:
Chips-Plus Artificial Intelligence In The CHIPS Act Of 2022 - New Technology - United States - Mondaq