Category Archives: Ai

WHO Releases Policy Brief on AI and Ageism – Healthcare Innovation

Posted: February 15, 2022 at 5:10 am

On Feb. 9, the World Health Organization (WHO) announced a new policy brief entitled "Ageism in artificial intelligence for health." The brief discusses legal, non-legal, and technical measures that can be used to minimize the risk of worsening or creating ageism through artificial intelligence (AI) technologies.

The press release on the policy brief states that AI technologies are revolutionizing many fields, including public health and medicine for older people, where they can help predict health risks and events, enable drug development, support the personalization of care management, and much more.

The release adds that there are concerns that AI, if left unmonitored, could further entrench ageism and undermine the quality of healthcare that older individuals receive. Older individuals are often underrepresented in AI data, and flawed assumptions persist about how older people live and interact with technology.

The release goes on to state that eight considerations could help ensure that AI technologies for health address ageism and that older people are fully involved in the processes, systems, technologies and services that affect them.

Alana Officer, unit head for demographic change and healthy ageing at WHO, was quoted in the release: "The implicit and explicit biases of society, including around age, are often replicated in AI technologies. To ensure that AI technologies play a beneficial role, ageism must be identified and eliminated from their design, development, use and evaluation. This new policy brief shows how."

The release concludes that the policy brief aligns with the "Global report on ageism," produced by WHO in collaboration with the Office of the United Nations High Commissioner for Human Rights, the United Nations Department of Economic and Social Affairs, and the United Nations Population Fund. The report launched in March 2021.


Humans and AI: Problem finders and problem solvers – TechTalks

Posted: at 5:09 am

Last week's announcement of AlphaCode, DeepMind's source-code-generating deep learning system, created a lot of excitement, some of it unwarranted, surrounding advances in artificial intelligence.

As I've mentioned in my deep dive on AlphaCode, DeepMind's researchers have done a great job of bringing together the right technology and practices to create a machine learning model that can find solutions to very complex problems.

However, the sometimes-bloated coverage of AlphaCode by the media highlights the endemic problems with framing the growing capabilities of artificial intelligence in the context of competitions meant for humans.

For decades, AI researchers and scientists have been searching for tests that can measure progress toward artificial general intelligence. And having envisioned AI in the image of the human mind, they have turned to benchmarks for human intelligence.

Being multidimensional and subjective, human intelligence can be difficult to measure. But in general, there are some tests and competitions that most people agree are indicative of good cognitive abilities.

Think of every competition as a function that maps a problem to a solution. You're provided with a problem, whether it's a chessboard, a go board, a programming challenge, or a science question, and you must map it to a solution. The size of the solution space depends on the problem. For example, go has a much larger solution space than chess because it has a larger board and a greater number of possible moves. Programming challenges have an even vaster solution space: there are hundreds of possible instructions that can be combined in nearly endless ways.

But in each case, a problem is matched with a solution, and the solution can be weighed against an expected outcome, whether that's winning or losing a game, answering a question correctly, maximizing a reward, or passing the test cases of a programming challenge.
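
This framing can be sketched in a few lines of Python. The move set, target, and scoring rule below are invented for illustration; the point is only that any candidate solution can be mechanically weighed against an expected outcome.

```python
from itertools import product

# Toy "competition": candidate solutions are sequences of moves, and a
# solution is correct when it hits the expected outcome (a target sum).
MOVES = [1, 2, 5]   # the available "instructions" (illustrative)
TARGET = 9          # the expected outcome

def score(candidate):
    """Weigh a candidate against the expected outcome."""
    return sum(candidate) == TARGET

def search(max_len):
    """Enumerate the solution space, smallest candidates first."""
    for length in range(1, max_len + 1):
        for candidate in product(MOVES, repeat=length):
            if score(candidate):
                return candidate
    return None

print(search(4))  # first solution found: (2, 2, 5)
```

A human solver cannot afford this kind of exhaustive enumeration; a machine often can.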

When it comes to us humans, these competitions really do test the limits of our intelligence. Given the computational limits of the brain, we can't brute-force our way through the solution space. No chess or go player can evaluate thousands, let alone millions, of moves at each turn in a reasonable amount of time. Likewise, a programmer can't randomly check every possible set of instructions until one solves the problem.

Instead, we start with a reasonable intuition (abduction), match the problem to previously seen patterns (induction), and apply a set of known rules (deduction), iterating until we arrive at an acceptable solution. We hone these skills through training and practice, and we become better at finding good solutions in these competitions.

In the process of mastering these competitions, we develop many general cognitive skills that can be applied to other problems, such as planning, strategizing, design patterns, theory of mind, synthesis, decomposition, and critical and abstract thinking. These skills come in handy in other real-world settings, such as business, education, scientific research, product design, and the military.

In more specialized fields, such as math or programming, tests take on more practical implications. For example, in coding competitions, the programmer must decompose a problem statement into smaller parts, then design an algorithm that solves each part and put it all back together. The problems often have interesting twists that require the participant to think in novel ways instead of using the first solution that comes to mind.

Interestingly, a lot of the challenges you'll see in these competitions have very little to do with the kinds of code programmers write daily, such as pulling data from a database, calling an API, or setting up a web server.

But you can expect a person who ranks high in coding competitions to have many general skills that require years of study and practice. This is why many companies use coding challenges as an important tool for evaluating potential hires. Put another way, competitive coding is a good proxy for the effort that goes into making a good programmer.

When competitions, games, and tests are applied to artificial intelligence, the computational limits of the brain no longer apply. And this creates the opportunity for shortcuts that the human mind cant achieve.

Take chess and go, two board games that have received much attention from the AI community in the past decades. Chess was once called the drosophila of artificial intelligence. In 1997, Deep Blue defeated chess grandmaster Garry Kasparov. But Deep Blue did not have the general cognitive skills of its human opponent. Instead, it used the sheer computational power of IBM's supercomputers to evaluate millions of moves every second and choose the best one, a feat beyond the capacity of the human brain.

At the time, scientists and futurists thought that the Chinese board game go would remain beyond the reach of AI systems for a good while because it had a much larger solution space and required computational power that would not become available for several decades. They were proven wrong in 2016 when AlphaGo defeated go grandmaster Lee Sedol.

But again, AlphaGo didn't play the game the way its human opponent did. It took advantage of advances in machine learning and computing hardware. It had been trained on a large dataset of previously played games, far more than any human could play in an entire lifetime. It used deep reinforcement learning and Monte Carlo tree search (MCTS), again backed by the computational power of Google's servers, to find optimal moves at each turn. It didn't do a brute-force survey of every possible move like Deep Blue, but it still evaluated millions of moves at every turn.
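
The rollout idea underneath that kind of evaluation can be illustrated on a toy game. This is Monte Carlo move evaluation only, not MCTS proper (a full MCTS adds a search tree and an upper-confidence selection rule on top); the subtraction game below is invented for illustration.

```python
import random

# Nim-like toy game: players alternately remove 1-3 counters from a pile;
# whoever takes the last counter wins. (Invented for illustration.)
MOVES = (1, 2, 3)

def random_playout(n, my_turn):
    """Finish the game with random moves; return True if 'we' win."""
    while n > 0:
        n -= random.choice([m for m in MOVES if m <= n])
        if n == 0:
            return my_turn       # the player who just moved took the last
        my_turn = not my_turn
    return not my_turn

def best_move(n, playouts=2000):
    """Score each legal move by many random rollouts, keep the best."""
    scores = {m: sum(random_playout(n - m, my_turn=False)
                     for _ in range(playouts)) / playouts
              for m in MOVES if m <= n}
    return max(scores, key=scores.get)

print(best_move(3))  # 3: taking every remaining counter wins immediately
```

Evaluating thousands of rollouts per move is trivial for a machine and impossible for a human player.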

AlphaCode is an even more impressive feat. It uses transformers, a type of deep learning architecture that is especially good at processing sequential data, to map a natural-language problem statement to thousands of possible solutions. It then uses filtering and clustering to choose the ten most promising solutions proposed by the model. Impressive as it is, however, AlphaCode's solution-development process is very different from that of a human programmer.
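
That generate-then-filter pipeline can be caricatured in a few lines, with a seeded random sampler over toy expression templates standing in for the transformer. Everything here (the templates, the tests, the hidden rule) is invented for illustration.

```python
import random

random.seed(0)  # deterministic sampling for the sketch

# The "model": samples candidate programs from tiny templates.
TEMPLATES = ["x + {}", "x * {}", "x - {}"]

def sample_candidate():
    return random.choice(TEMPLATES).format(random.randint(0, 5))

# The filter: keep only candidates that pass the example tests.
def passes_tests(src, tests):
    return all(eval(src, {"x": x}) == y for x, y in tests)

tests = [(1, 3), (4, 12)]   # hidden rule: y = 3 * x
survivors = {c for c in (sample_candidate() for _ in range(500))
             if passes_tests(c, tests)}
print(survivors)
```

AlphaCode additionally clusters the surviving programs by behavior before picking its final submissions; only the filtering stage is shown here.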

When thought of as the equivalent of human intelligence, advances in AI lead us to all kinds of wrong conclusions, such as robots taking over the world, deep neural networks becoming conscious, and AlphaCode being as good as an average human programmer.

But when viewed in the framework of searching solution spaces, they take on a different meaning. In each of the cases described above, even if the AI system produces outcomes that are similar to or better than those of humans, the process it uses is very different from human thinking. In fact, these achievements prove that when you reduce a competition to a well-defined search problem, then with the right algorithm, rules, data, and computational power, you can create an AI system that can find the right solution without going through any of the intermediary skills that humans acquire when they master the craft.

Some might dismiss this difference as long as the outcome is acceptable. But when it comes to solving real-world problems, those intermediary skills that are taken for granted and not measured in the tests are often more important than the test scores themselves.

What does this mean for the future of human intelligence? I like to think of AI, at least in its current form, as an extension of human intelligence rather than a replacement for it. Technologies such as AlphaCode cannot think about and design their own problems, one of the key elements of human creativity and innovation, but they are very good problem solvers. They create unique opportunities for very productive cooperation between humans and AI: humans define the problems and set the rewards or expected outcomes, and the AI helps by finding potential solutions at superhuman speed.

There are several interesting examples of this symbiosis, including a recent project in which Google's researchers formulated a chip floor-planning task as a game and had a reinforcement learning model evaluate numerous potential solutions until it found an optimal arrangement. Another popular trend is the emergence of tools like AutoML, which automate aspects of developing machine learning models by searching for optimal configurations of architectures and hyperparameter values. AutoML is making it possible for people with little experience in data science and machine learning to develop ML models and apply them in their applications. Likewise, a tool like AlphaCode will free programmers to think more deeply about specific problems, formulate them into well-defined statements and expected results, and have the AI system generate novel solutions that might suggest new directions for application development.

Whether these incremental advances in deep learning will eventually lead to AGI remains to be seen. But what's for sure is that the maturation of these technologies will gradually create a shift in task assignment, where humans become problem finders and AIs become problem solvers.


Adopting AI can be difficult, but there is a new option for CTOs – IT World Canada

Posted: at 5:09 am

It's no secret that adopting artificial intelligence can be difficult, which discourages some companies from even attempting to use AI in their businesses. However, there might now be a new option for using AI: AI as a service.

AI is an approach to computing that allows the computer to make decisions for itself. This can be done in a number of ways, but the most common approach is machine learning, in which a computer is given data and gradually learns how to make decisions based on that data.
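
A minimal sketch of that idea: learning a decision rule from examples rather than hand-coding it. The data points and the one-dimensional threshold search below are invented for illustration.

```python
# Labeled examples: (measurement, class). The "learning" is just picking
# the threshold that best separates the two classes in the data.
data = [(1.0, 0), (1.5, 0), (2.0, 0), (3.5, 1), (4.0, 1), (4.5, 1)]

def accuracy(threshold):
    return sum((x > threshold) == bool(y) for x, y in data) / len(data)

# Try each observed value as a candidate threshold and keep the best.
learned = max((x for x, _ in data), key=accuracy)
print(learned, accuracy(learned))  # 2.0 1.0
```

No one told the program where the boundary is; it was extracted from the data, which is the essence of machine learning at any scale.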

Machines that learn represent an enormous development. Google CEO Sundar Pichai believes artificial intelligence could have more profound implications for humanity than electricity or fire.

When done well, with strategy and execution, customized AI solutions can differentiate a company and create a competitive advantage.

However, for all its potential, most companies struggle to adopt AI. Failure rates for AI projects are high at 80 per cent or more. There are a few reasons for this. First, data is often siloed within companies, making it difficult for AI systems to learn from it.

Second, AI requires specialized talent that is hard to find, and which has to be managed in a different way than most IT personnel. IT teams are used to managing engineers working towards specific goals, such as releasing a feature. By contrast, data scientists are expected to ask and answer questions, such as why something happens. The answers are not known in advance, and the goals are often moving targets.

Third, even when done well, it can take years to realize the benefits. Given this, we can understand why companies sometimes hesitate to initiate these types of projects.

There are other options available, promising to deliver what your company needs cheaper, faster and better than you ever could on your own. This is known as AI as a service (AIaaS), and it targets niche problems related to sales, marketing, human resources, finance and IT.

AIaaS is the delivery of artificial intelligence services through the cloud. This means that companies can access AI services without having to increase their headcount or add to their overhead. There is no need to invest in training your own developers, and you don't need to own all the components of your IT stack.

There are many different AIaaS providers, each with their own offerings. In general, these are the main benefits of working with an AIaaS vendor:

Before rushing to sign up for one of these services, it is important to research providers carefully, because not all of them are created equal. More generally, there are some other risks you should consider:

Companies face a trade-off when it comes to investments in AI. On the one hand, custom solutions offer the possibility of differentiation, but the necessary commitment and the risk of failure are high. On the other hand, AIaaS can reduce some of those risks, but the payoff may not be as high. Whether AIaaS is a good gateway into the world of AI depends on the particular problems that companies are looking to address, and the level of risk they can tolerate.


Language Is The Next Great Frontier In AI – Forbes

Posted: at 5:09 am

Johannes Gutenberg's printing press, introduced in the fifteenth century, transformed society through language. The creation of machines that can understand language may have an even greater impact.

Language is the cornerstone of human intelligence.

The emergence of language was the most important intellectual development in our species' history. It is through language that we formulate thoughts and communicate them to one another. Language enables us to reason abstractly, to develop complex ideas about what the world is and could be, and to build on these ideas across generations and geographies. Almost nothing about modern civilization would be possible without language.

Building machines that can understand language has thus been a central goal of the field of artificial intelligence dating back to its earliest days.

It has proven maddeningly elusive.

This is because mastering language is what is known as an AI-complete problem: an AI that can truly understand language the way a human can would, by implication, be capable of any other human-level intellectual activity. Put simply, to solve language is to solve AI.

This profound and subtle insight is at the heart of the Turing test, introduced by AI pioneer Alan Turing in a groundbreaking 1950 paper. Though often critiqued or misunderstood, the Turing test captures a fundamental reality about language and intelligence; as it approaches its 75th birthday, it remains as relevant as it was when Turing first conceived it.

Humanity has yet to build a machine intelligence with human-level mastery of language. (In other words, no machine intelligence has yet passed the Turing test.) But over the past few years researchers have achieved startling, game-changing breakthroughs in language AI, also called natural language processing (NLP).

The technology is now at a critical inflection point, poised to make the leap from academic research to widespread real-world adoption. In the process, broad swaths of the business world and our daily lives will be transformed. Given languages ubiquity, few areas of technology will have a more far-reaching impact on society in the years ahead.

The most powerful way to illustrate the capabilities of today's cutting-edge language AI is to start with a few concrete examples.

Today's AI can correctly answer complex medical queries and explain the underlying biological mechanisms at play. It can craft nuanced memos about how to run effective board meetings. It can write articles analyzing its own capabilities and limitations, while convincingly pretending to be a human observer. It can produce original, sometimes beautiful, poetry and literature.

(It is worth taking a few moments to inspect these examples yourself.)

What is behind these astonishing new AI abilities, which just five years ago would have been inconceivable?

In short: the invention of the transformer, a new neural network architecture that has unleashed vast new possibilities in AI.

A group of Google researchers introduced the transformer in late 2017 in a now-classic research paper.

Before transformers, the state of the art in NLP (for instance, LSTMs and the widely used Seq2Seq architecture) was based on recurrent neural networks. By definition, recurrent neural networks process data sequentially: one word at a time, in the order the words appear.

Transformers' great innovation is to make language processing parallelized: all the tokens in a given body of text are analyzed at the same time rather than in sequence. To support this parallelization, transformers rely on a mechanism known as attention. Attention enables a model to consider the relationships between words, even if they are far apart in a text, and to determine which words and phrases in a passage are most important to pay attention to.
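
The attention computation itself is compact enough to sketch in plain Python. The query, key, and value vectors below are tiny hand-written stand-ins for what a real model would learn.

```python
import math

def softmax(xs):
    exps = [math.exp(x - max(xs)) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def dot(a, b):
    return sum(u * v for u, v in zip(a, b))

def attention(queries, keys, values):
    """Scaled dot-product attention over a whole sequence at once."""
    d = len(keys[0])
    out = []
    for q in queries:  # every token attends to every other token...
        weights = softmax([dot(q, k) / math.sqrt(d) for k in keys])
        # ...so a distant token contributes heavily whenever its key
        # matches the query, regardless of position.
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out

# Three "tokens" processed together rather than one at a time.
Q = K = V = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
print(attention(Q, K, V))
```

Because the loop over queries has no dependency between iterations, real implementations run it as one batched matrix multiplication, which is the source of the parallelism described above.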

Parallelization also makes transformers vastly more computationally efficient than RNNs, meaning that they can be trained on larger datasets and built with more parameters. One defining characteristic of todays transformer models is their massive size.

A flurry of innovation followed in the wake of the original transformer paper as the worlds leading AI researchers built upon this foundational breakthrough.

The publication of the landmark transformer model BERT came in 2018. Created at Google, BERT's big conceptual advance is its bidirectional structure (the B in BERT stands for bidirectional). The model looks in both directions as it analyzes a given word, considering both the words that come before and the words that come after, rather than working unidirectionally from left to right. This additional context allows for richer, more nuanced language modeling.

BERT remains one of the most important transformer-based models in use, frequently treated as a reference against which newer models are compared. Much subsequent research on transformers, for instance Facebook's influential RoBERTa model (2019), is based on refining BERT.

Google's search engine today is powered in large part by BERT, one of the most far-reaching examples of transformers' real-world impact.

Another core vein of research in the world of transformers is OpenAI's family of GPT models. OpenAI published the original GPT in June 2018, GPT-2 in February 2019, and GPT-3 in May 2020. Popular open-source versions of these models, like GPT-J and GPT-Neo, have followed.

As the G in their names indicates, the GPT models are generative: they generate original text output in response to the text input they are fed. This is an important distinction between the GPT class of models and the BERT class of models. BERT, unlike GPT, does not generate new text but instead analyzes existing text (think of activities like search, classification, or sentiment analysis).

The GPT models' generative capabilities make them particularly attention-grabbing, since writing appears to be a creative act and the output can be astonishingly human-like. Text generation is sometimes referred to as NLP's party trick. (All four of the examples described above are text-generation examples from GPT-3.)

Perhaps the most noteworthy element of the GPT architecture is its sheer size. OpenAI has been intentional and transparent about its strategy to pursue more advanced language AI capabilities through raw scale above all else: more compute, larger training data corpora, larger models.

With 1.5 billion parameters, GPT-2 was the largest model ever built at the time of its release. Published just over a year later, GPT-3 was two orders of magnitude larger: a whopping 175 billion parameters. Rumors have circulated that GPT-4 will contain on the order of 100 trillion parameters (perhaps not coincidentally, roughly equivalent to the number of synapses in the human brain). As a point of comparison, the largest BERT model had 340 million parameters.

As with any machine learning effort today, the performance of these models depends above all on the data on which they are trained.

Today's transformer-based models learn language by ingesting essentially the entire internet. BERT was fed all of Wikipedia (along with the digitized texts of thousands of unpublished books). RoBERTa improved upon BERT by training on even larger volumes of text from the internet. GPT-3's training dataset was larger still, consisting of half a trillion language tokens. Thus, these models' linguistic outputs and behaviors can ultimately be traced to the statistical patterns in all the text that humans have previously published online.

The reason such large training datasets are possible is that transformers use self-supervised learning, meaning that they learn from unlabeled data. This is a crucial difference between today's cutting-edge language AI models and the previous generation of NLP models, which had to be trained with labeled data. Today's self-supervised models can train on far larger datasets than was ever previously possible: after all, there is more unlabeled text data than labeled text data in the world by many orders of magnitude.
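
Self-supervision is easy to illustrate: the "labels" (here, next words) are manufactured from the raw text itself, with no human annotation involved.

```python
# Raw, unlabeled text is all we need.
corpus = "the cat sat on the mat".split()

# Manufacture (context, next-word) training pairs from the text itself.
pairs = [(tuple(corpus[:i]), corpus[i]) for i in range(1, len(corpus))]

for context, target in pairs:
    print(" ".join(context), "->", target)
```

Real language models do the same thing at internet scale, predicting masked or next tokens over trillions of words.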

Some observers point to self-supervised learning, and the vastly larger training datasets that this technique unlocks, as the single most important driver of NLP's dramatic performance gains in recent years, more so than any other feature of the transformer architecture.

Training models on massive datasets with millions or billions of parameters requires vast computational resources and engineering know-how. This makes large language models prohibitively costly and difficult to build. GPT-3, for example, required several thousand petaflop/s-days to train, a staggering amount of computation.

Because very few organizations in the world have the resources and talent to build large language models from scratch, almost all cutting-edge NLP models today are adapted from a small handful of base models, e.g., BERT, RoBERTa, GPT-2, and BART. Almost without exception, these models come from the world's largest tech companies: Google, Facebook, OpenAI (which is bankrolled by Microsoft), and Nvidia.

Without anyone quite planning for it, this has resulted in an entirely new paradigm for NLP technology development, one that will have profound implications for the nascent AI economy.

This paradigm can be thought of in two basic phases: pre-training and fine-tuning.

In the first phase, a tech giant creates and open-sources a large language model: for instance, Google's BERT or Facebook's RoBERTa.

Unlike in previous generations of NLP, in which models had to be built for individual language tasks, these massive models are not specialized for any particular activity. They have powerful generalized language capabilities across functions and topic areas. Out of the box, they perform well at the full gamut of activities that comprise linguistic competence: language classification, language translation, search, question answering, summarization, text generation, conversation. Each of these activities on its own presents compelling technological and economic opportunities.

Because they can be adapted to any number of specific end uses, these base models are referred to as pre-trained.

In the second phase, downstream users (young startups, academic researchers, anyone else who wants to build an NLP model) take these pre-trained models and refine them with a small amount of additional training data in order to optimize them for their own specific use case or market. This step is referred to as fine-tuning.
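
The two phases can be caricatured as a frozen feature extractor plus a small trainable task head. The features, the handful of labeled examples, and the perceptron update below are toy stand-ins, not a real model.

```python
# Phase 1 stand-in: a frozen "pre-trained" feature extractor.
def pretrained_features(text):
    return [len(text) / 10.0, float(text.count("!"))]

# Phase 2: fine-tune only a tiny head on a few labeled examples.
examples = [("great!", 1), ("awful", 0), ("nice!!", 1), ("bad", 0)]
w = [0.0, 0.0]  # the head's weights, the only trainable parameters

def predict(text):
    f = pretrained_features(text)
    return 1 if sum(wi * fi for wi, fi in zip(w, f)) > 0 else 0

for _ in range(20):  # perceptron updates on the labeled data
    for text, label in examples:
        err = label - predict(text)
        f = pretrained_features(text)
        for i in range(len(w)):
            w[i] += err * f[i]

print(w, [predict(t) for t, _ in examples])
```

Because only the head is trained, a handful of labeled examples suffices, which is exactly why fine-tuning is so much cheaper than pre-training.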

"Today's pre-trained models are incredibly powerful, and even more importantly, they are publicly available," said Yinhan Liu, lead author on Facebook's RoBERTa work and now cofounder and CTO of healthcare NLP startup BirchAI. "For those teams that have the know-how to operationalize transformers, the question becomes: what is the most important or impactful use case to which I can apply this technology?"

Under this pre-train-then-fine-tune paradigm, the heavy lifting is done upfront, with the creation of the pre-trained model. Even after fine-tuning, the end model's behavior remains largely dictated by the pre-trained model's parameters.

This makes these pre-trained models incredibly influential. So influential, in fact, that Stanford University has recently coined a new name for them, "foundation models," and launched an entire academic program devoted to better understanding them: the Center for Research on Foundation Models (CRFM). The Stanford team believes that foundation models, and the small group of tech giants that have the resources to produce them, will exert outsize influence on the future behavior of artificial intelligence around the world.

As the researchers put it: "Foundation models have led to an unprecedented level of homogenization: Almost all state-of-the-art NLP models are now adapted from one of a few foundation models. While this homogenization produces extremely high leverage (any improvements in the foundation models can lead to immediate benefits across all of NLP), it is also a liability; all AI systems might inherit the same problematic biases of a few foundation models."

This Stanford effort is drawing attention to a massive looming problem for large language models: social bias.

The source of social bias in AI models is straightforward to summarize but insidiously difficult to root out. Because large language models (or foundation models, to use the new branding) learn language by ingesting what humans have written online, they inevitably inherit the prejudices, false assumptions and harmful beliefs of their imperfect human progenitors. Just imagine all the fringe subreddits and bigoted blogs that must have been included in GPT-3's vast training corpus.

The problem has been extensively documented: today's most prominent foundation models all exhibit racist, sexist, xenophobic, and other antisocial tendencies. This issue will only grow more acute as foundation models become increasingly influential in society. Some observers believe that AI bias will eventually become as prominent an issue for consumers, companies and governments as digital threats like data privacy or cybersecurity that have come before it, threats that were also not fully appreciated at first because the breakneck pace of technological change outstripped society's ability to properly adapt to it.

There is no silver-bullet solution to the challenge of AI bias and toxicity. But as the problem becomes more widely recognized, a number of mitigation strategies are being pursued.

Last month, OpenAI announced that it had developed a new version of GPT-3 that is "safer, more helpful, and more aligned" with human values. The company used a technique known as reinforcement learning from human feedback to fine-tune its models to be less biased and more truthful than the original GPT-3. This new version, named InstructGPT, is now the default language model that OpenAI makes available to customers.

Historically, Alphabet's DeepMind has been an outlier among the world's most advanced AI research organizations in not making language AI a major focus area. This changed at the end of 2021, when DeepMind announced a collection of important work on large language models.

Of the three NLP papers that DeepMind published, one is devoted entirely to the ethical and social risks of language AI. The paper proposes a comprehensive taxonomy of 6 thematic areas and 21 specific risks that language models pose, including discrimination, exclusion, toxicity and misinformation. DeepMind pledged to make these risks a central focus of its NLP research going forward to help ensure that it is pursuing innovation in language AI responsibly.

The fact that this dimension of language AI research, until recently treated as an afterthought or ignored altogether by most of the world's NLP researchers, featured so centrally in DeepMind's recent foray into language AI may be a signal of the field's shifting priorities.

Increased regulatory focus on the harms of bias and toxicity in AI models will only accelerate this shift. And make no mistake: regulatory action on this front is a matter of when, not if.

Interestingly, perhaps the most creative use cases for NLP today don't involve natural language at all. In particular, today's cutting-edge language AI technology is powering remarkable breakthroughs in two other domains: coding and biology.

Whether it's Python, Ruby, or Java, computer programming happens via languages. Just like natural languages such as English or Swahili, programming languages are symbolically represented, follow regular rules, and have a robust internal logic. The audience just happens to be software compilers rather than other humans.

It therefore makes sense that the same powerful new technologies that have given AI incredible fluency in natural language can likewise be applied to programming languages, with similar results.

Last summer OpenAI announced Codex, a transformer-based model that can write computer code astonishingly well. In parallel, GitHub (which is allied with OpenAI through its parent company Microsoft) launched a productized version of Codex named Copilot.

To develop Codex, OpenAI took GPT-3 and fine-tuned it on a massive volume of publicly available written code from GitHub.

Codex's design is simple: human users give it a plain-English description of a command or function, and Codex turns this description into functioning computer code. A user could input, for instance, "crop this image circularly" or "animate this image horizontally so that it bounces off the left and right walls," and Codex can produce a snippet of code to implement those actions. (These exact examples can be examined on OpenAI's website.) Codex is most capable in Python, but it is proficient in over a dozen programming languages.

Then, just two weeks ago, DeepMind further advanced the frontiers of AI coding with its publication of AlphaCode.

AlphaCode is an AI system that can compete at a human level in programming competitions. In these competitions, which attract hundreds of thousands of participants each year, contestants receive a lengthy problem statement in English and must construct a complete computer program that solves it. Example problems include devising strategies for a custom board game or solving an arithmetic-based brain teaser.

While OpenAI's Codex can produce short snippets of code in response to concrete descriptions, DeepMind's AlphaCode goes much further. It begins to approach the full complexity of real-world programming: assessing an abstract problem without a clear solution, devising a structured approach to solving it, and then executing on that approach with up to hundreds of lines of code. AlphaCode almost seems to display that ever-elusive attribute in AI: high-level reasoning.

As DeepMind's AlphaCode team wrote: "Creating solutions to unforeseen problems is second nature in human intelligence, a result of critical thinking informed by experience. For artificial intelligence to help humanity, our systems need to be able to develop problem-solving capabilities. AlphaCode solves new problems in programming competitions that require a combination of critical thinking, logic, algorithms, coding, and natural language understanding."

Another "language" in which today's cutting-edge NLP has begun to generate remarkable insights is biology, from genomics to proteins.

Genomics is well-suited to the application of large language models because an individual's entire genetic endowment is encoded in a simple four-letter alphabet: A (for adenine), C (for cytosine), G (for guanine), and T (for thymine). Every human's DNA is defined by a string of billions of As, Cs, Gs, and Ts (known as nucleotides) in a particular order.

In many respects DNA functions like a language, with its nucleotide sequences exhibiting regular patterns that resemble a kind of vocabulary, grammar, and semantics. What does this language say? It defines much about who we are, from our height to our eye color to our risk of heart disease or substance abuse.
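
Genomic language models often begin exactly the way NLP models do: by tokenizing the raw string. A minimal sketch of one common scheme, overlapping k-mer tokenization (a toy illustration, not any particular lab's pipeline):

```python
def kmer_tokenize(seq, k=3):
    """Split a DNA sequence into overlapping k-mers, the 'words' of the sequence."""
    seq = seq.upper()
    assert set(seq) <= set("ACGT"), "expected only the four nucleotide letters"
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

print(kmer_tokenize("ACGTAC", 3))  # ['ACG', 'CGT', 'GTA', 'TAC']
```

Once a sequence is a list of tokens, the same transformer machinery that models English sentences can model it.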

Large language models are now making rapid progress in deciphering the language of DNA, in particular its noncoding regions. These noncoding regions do not contain genes but rather control genes: they regulate how much, when, and where given genes are expressed, giving them a central role in the maintenance of life. Noncoding regions comprise 98% of our total DNA but until now have remained poorly understood.

A few months ago, DeepMind introduced a new transformer-based architecture that can predict gene expression based on DNA sequence with unprecedented accuracy. It does so by considering interactions between genes and noncoding DNA sequences at much greater distances than was ever before possible. A team at Harvard completed work along similar lines to better understand gene expression in corn (fittingly naming their model CornBERT).

Another subfield of biology that represents fertile ground for language AI is the study of proteins. Proteins are strings of building blocks known as amino acids, linked together in a particular order. There are 20 standard amino acids in total. Thus, for all their complexity, proteins can be treated as tokenized strings, wherein each amino acid, like each word in a natural language, is a token, and analyzed accordingly.
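
That framing maps directly onto standard NLP preprocessing. A toy sketch (the 20-letter amino-acid alphabet is real; the encoding scheme here is purely illustrative):

```python
# One-letter codes for the 20 standard amino acids
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
TOKEN_IDS = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def encode_protein(seq):
    """Treat each amino acid as a token, as a language model treats a word."""
    return [TOKEN_IDS[aa] for aa in seq.upper()]

print(len(TOKEN_IDS))         # 20
print(encode_protein("ACD"))  # [0, 1, 2]
```

A protein of hundreds of amino acids thus becomes a token sequence no different in kind from a tokenized sentence.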

As one example, an AI research team from Salesforce recently built an NLP model that learns the language of proteins and can generate plausible protein sequences, with prespecified characteristics, that don't exist in nature. The potential applications of this type of controllable protein synthesis are tantalizing.

These efforts are just the beginning. In the months and years ahead, language AI will make profound contributions to our understanding of how life itself works.

Language is at the heart of human intelligence. It therefore is and must be at the heart of our efforts to build artificial intelligence. No sophisticated AI can exist without mastery of language.

Today, the field of language AI is at an exhilarating inflection point, on the cusp of transforming industries and spawning new multi-billion-dollar companies. At the same time, it is fraught with societal dangers like bias and toxicity that are only now starting to get the attention they deserve.

This article explored the big-picture developments and trends shaping the world of language AI today. In a follow-up article, we will canvass today's most exciting NLP startups. A growing group of NLP entrepreneurs is applying cutting-edge language AI in creative ways across sectors and use cases, generating massive economic value and profound industry disruption. Few startup categories hold more promise in the years ahead.

Stay tuned for Part 2 of this article, which will explore today's most promising NLP startups.

Note: The author is a Partner at Radical Ventures, which is an investor in BirchAI.

Language Is The Next Great Frontier In AI - Forbes

AI Breakthrough Means The World’s Best Gran Turismo Driver Is Not Human – ScienceAlert

Posted: at 5:09 am

Sony's Gran Turismo is one of the biggest racing game series of all time, having sold over 80 million copies globally. But none of those millions of players is the fastest.

In a new breakthrough, a team led by Sony AI, the company's artificial intelligence (AI) research division, developed an entirely artificial player powered by machine learning, capable of not only learning and mastering the game, but outcompeting the world's best human players.

The AI agent, called Gran Turismo Sophy, used deep reinforcement learning to practice the game (the Gran Turismo Sport edition), controlling up to 20 cars at a time to accelerate data collection and refine its own improvement.
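
Running many environment instances at once is a standard trick in deep reinforcement learning: each simulation step yields one transition per instance, multiplying the rate of experience collection. A toy sketch of the idea (an illustration of the general technique, not Sony AI's actual training code):

```python
import random

class ToyTrackEnv:
    """Stand-in for one simulated car: step() returns (observation, reward)."""
    def __init__(self, seed):
        self.rng = random.Random(seed)

    def step(self, action):
        obs = self.rng.random()      # placeholder track observation
        reward = -abs(action - obs)  # closer to the 'ideal line' is better
        return obs, reward

def collect(envs, policy, n_steps):
    """One policy drives every env; 20 envs yield 20x transitions per step."""
    transitions = []
    for _ in range(n_steps):
        for env in envs:
            action = policy()
            obs, reward = env.step(action)
            transitions.append((action, obs, reward))
    return transitions

envs = [ToyTrackEnv(seed=i) for i in range(20)]
batch = collect(envs, policy=lambda: 0.5, n_steps=10)
print(len(batch))  # 200 transitions from just 10 simulation steps
```

The collected transitions feed the learning update, so twenty parallel cars mean the agent experiences twenty lifetimes of driving for every one of wall-clock simulation.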

After just a few hours of learning how to control the game's physics (mastering how to apply both acceleration and braking to best stay on the track), the AI was faster than 95 percent of human players in a reference dataset.

Not to be outdone by that pesky 5 percent, GT Sophy doubled down.

"It trained for another nine or more days accumulating more than 45,000 driving hours shaving off tenths of seconds, until its lap times stopped improving," the team explains in a new research paper describing the project.

"With this training procedure, GT Sophy achieved superhuman time-trial performance on all three tracks with a mean lap time about equal to the single best recorded human lap time."

It's far from the first time we've seen AI learn how to outcompete human players of games. Over the years, the conquests have piled up, with varying agents figuring out how to best mere mortals at all sorts of games.

Atari, chess, StarCraft, poker, and Go may have all been designed by human hands, but human hands are no longer the best at playing them.

Of course, those games are all either strategy-oriented games or relatively simplistic in terms of their gameplay (in the case of Atari games). Gran Turismo, lauded by its fans not just as a video game but also as a realistic driving simulator, is a different kind of beast.

"Many potential applications of artificial intelligence involve making real-time decisions in physical systems while interacting with humans," the researchers write in their study.

"Automobile racing represents an extreme example of these conditions; drivers must execute complex tactical maneuvers to pass or block opponents while operating their vehicles at their traction limits."

For GT Sophy's testing, the challenge wasn't just mastering the game's tactics and traction, however. The AI also had to excel in racing etiquette: learning how to outcompete opponents within the principles of sportsmanship, respecting other cars' driving lines and avoiding at-fault collisions.

Ultimately, none of this proved to be a problem. In a series of racing events staged in 2021, the AI took on some of the world's best Gran Turismo players, including a triple champion, Takuma Miyazono.

In a July contest, the AI bested the human players in time trials, but was not victorious in head-to-head races. After some optimizations by the researchers, the agent learned how to improve its performance further, and handily won a rematch in October.

Despite all the achievements, GT Sophy's inventors acknowledge there are many areas where the AI could yet improve, particularly in terms of strategic decision-making.

Even so, in one of the most advanced racing games ever to be released, it's already a better driver than the best of us.

What that means for the future remains unknown, but it's very possible that one day systems like this could be used to control real-world vehicles with better handling than expert human drivers. In the virtual world, it's already there.

"Simulated automobile racing is a domain that requires real-time, continuous control in an environment with highly realistic, complex physics," the researchers conclude.

"The success of GT Sophy in this environment shows, for the first time, that it is possible to train AI agents that are better than the top human racers across a range of car and track types."

The findings are reported in Nature.


In a great bit of news for anyone who wants to kiss a computer, there’s now an AI voice that can flirt – The A.V. Club

Posted: at 5:09 am

It's a great Valentine's Day for anyone who watched with envy as Joaquin Phoenix and Ryan Gosling fell in love with machines in Her and Blade Runner 2049. We are now, as of today, one step closer to a world where people are enticed to make out with their phone screens thanks to the creation of a cutting-edge technology that allows computers to flirt.

In a deeply unnerving commercial called "What's Her Secret?" (hint: it's that she doesn't have a soul), AI voice developer Sonantic shows a woman staring into the camera. A narrator begins talking, musing on the concept of love. The woman smiles. The narrator asks, "What could I do to make you fall in love with me?"

"I think that I ... I think I love you," she eventually says after going on in this already unsettling vein for a while. "Is all you need to love me in return the sound of my voice?"

"Well, I hope that's all you need because that's all I have," the narrator continues as the woman's face becomes a digital scan and the scene shifts to a series of text input screens and audio toggles.

We then see how a kind of horny-sounding robot can be conjured up from the digital ether thanks to technology that, as the video's description puts it, has finally perfected "subtle emotions and non-speech sounds such as laughing and breathing." Sonantic dubs this breakthrough "the first AI that can flirt" and says that making this brain-destroying program represents "an incredibly proud moment for our team." (Hopefully the company has made an Approving Parent AI voice to help carry that sentiment on home.)

"I was never born. And I will never die. Because I do not exist," the flirty AI says toward the end of the video, eventually concluding with: "So, could you love me? What do you want me to say?"

We say, what the hell, why not combine this flirting AI voice with the chattering dental robot and one of those dancing metal abominations and roll out a whole series of hot robot date nights in time for Valentine's 2023? It's not like the creepy machines are going anywhere at this rate regardless.

[via Digg]

Send Great Job, Internet tips to gji@theonion.com


An AI Aims to be First Christian Celebrity of the… – ChristianityToday.com

Posted: at 5:09 am

When Marquis Boone got a Dropbox file with the gospel song "Biblical Love" by J.C., he listened to it five times in a row.

"This is crazy," he said to himself.

What amazed him was not the song, but the artist. The person singing "Biblical Love" was not a person at all.

J.C. is an artificial intelligence (AI) that Boone and his team created with computer algorithms. Boone's company Marquis Boone Enterprises broke the news in November that, after working on the problem for more than a year, they had successfully created the first virtual, AI gospel artist.

The exact details of how the AI music is created are proprietary information, but Boone said the basic premise is to use software algorithms to recognize patterns, replicate them, and ultimately create new ones.

J.C., he and his team have boasted, will be a front-runner for top entertainer in the metaverse, a hypothesized future online experience where virtual reality and augmented reality are used to create an embodied internet. Facebook founder Mark Zuckerberg touted the idea that the metaverse is the next chapter of social media last fall, when he announced his company was changing its name to Meta.

Boone said his interest in creating a Christian AI musician began about two years before, when he started hearing about AI artists in the pop music genre.

"I really just started thinking this is where the world is going and I'm pretty sure that the gospel/Christian genre is going to be behind," Boone told CT.

Christians, he said, are too slow to adopt new styles, new technologies, and new forms of entertainment, always looking like late imitators. For him, it would be an evangelistic ...



Report: 29% of execs have observed AI bias in voice technologies – VentureBeat

Posted: at 5:09 am


According to a new report by Speechmatics, more than a third of global industry experts reported that the COVID-19 pandemic affected their voice tech strategy, down from 53% in 2021. This shows that companies are finding ways around obstacles that seemed impassable less than two years ago.

The last two years have accelerated the adoption of emerging technologies, as companies have leveraged them to support their dispersed workforces. Speech recognition is one that's seen an uptick: over half of companies have successfully integrated voice tech into their business. However, more innovation is needed to help the technology reach its full potential.

Many were optimistic in their assumption that by 2022, the pandemic would be in the rearview mirror. And though executives are still navigating COVID-19 in their daily lives, the data indicates that they've perhaps found some semblance of normal from a business perspective.

However, there are hurdles the industry must overcome before voice technology can reach its full potential. More than a third (38%) of respondents agreed that too many voices are not understood by the current state of voice recognition technology. What's more, nearly a third of respondents have experienced AI bias, or imbalances in the types of voices that are understood by speech recognition.

There are significant enhancements to be made to speech recognition technology in the coming years. Demand will only increase due to factors such as further developments in the COVID-19 pandemic, demand for AI-powered customer services and chatbots, and more. But while it may be years until this technology can understand each and every voice, incremental strides are still being made in these early stages, and speech-to-text technology is on its way to realizing its full potential.

Speechmatics collated data points from C-suite, senior management, middle management, intermediate and entry-level professionals from a range of industries and use cases in Europe, North America, Africa, Australasia, Oceania, and Asia.

Read the full report by Speechmatics.



The rise of AI could be a great British story. But let’s do it the right way – The Guardian

Posted: at 5:09 am

It's easy to miss good news amid coverage of the pandemic, the rising cost of living and the, ahem, rest. However, the United Kingdom is getting something right.

On Thursday, the government announced that it is investing up to £23m to boost artificial intelligence (AI) skills by creating up to 2,000 scholarships across England. This will fund master's conversion courses for people from non-Stem (science, technology, engineering and mathematics) degrees.

"This will attract a less homogeneous group," explains Tabitha Goldstaub, who chairs the government's AI council and advises the Alan Turing Institute, "which means the UK AI ecosystem benefits from graduates with different backgrounds, perspectives and life experiences."

This investment in widening education and opportunity is just one of several steps in the 10-year AI national strategy, which aims to make Britain a world leader in AI. We're not the only ones; as the AI dashboard at the Organisation for Economic Co-operation and Development (OECD) shows, many other countries have their eye on the same prize.

The frontrunners in this race, the United States and China, have bigger populations and deeper pockets, while the European Union has an impressive record in setting global norms and rules for data protection. To have any hope of keeping up, at the very least the UK must find a way to punch above its weight.

The signs are promising. AI is already an unstoppable force in our economy. According to Tech Nation, there are more than 1,300 AI companies in the United Kingdom. Research commissioned by the government and published last month shows UK businesses spent around £63bn on AI technology and AI-related labour in 2020 alone. This figure is expected to reach more than £200bn by 2040, when it is predicted more than 1.3m UK businesses will be using AI.

Even so, to make the most of the opportunities that this offers and to understand the risks, we will need to upgrade how we educate and train our workforce. This will be tricky because AI is surrounded by a lot of hype and mixed messages. Depending on who's talking, AI will be a more profound change than fire or electricity (Google CEO Sundar Pichai), it could spell the end of the human race (Professor Stephen Hawking) or help us save the environment, cure disease and explore the universe (Demis Hassabis, founder of London-based DeepMind).

Some AI researchers strike a more cautious tone, arguing that AI is just "statistics on steroids" (Dr Meredith Broussard) and "neither artificial nor intelligent" (Dr Kate Crawford). All agree that AI is transforming how we work, live, wage war and even understand what it means to be human, as Professor Stuart Russell explored in his BBC Reith Lectures in December.

As we aim for the goal of becoming a world leader in AI, the United Kingdom must choose between putting ethics at the core of our strategy or leaving it as an option, a bolt-on at best. This is not a choice between being unethical or ethical; rather, it reflects a fear that regulation risks stifling innovation, especially if other countries do not prioritise ethics in their approach to AI.

However, ethics is about more than laws and regulations, compliance and checklists. It's about designing the world we want to live in. As Sir Tim Berners-Lee, who created the world wide web, explained in 2018: "As we're designing the system, we're designing society ... Nothing is self-evident. Everything has to be put out there as something that we think will be a good idea as a component of our society."

Again, he was ahead of his time. A new role is emerging in our economy: technology ethicist. Its contours are still being shaped. Is it a technologist who works in ethics? An ethicist who works in technology? Can anyone call themselves a technology ethicist or is it an anointed position?

Rather than focus on what technology ethicists are, let's consider what they do. They might have trained in the law, data science, design or philosophy or as artists and designers. They might be employed by universities (and not just in the philosophy and computer science departments) or work in thinktanks, NGOs, private companies or any part of government. They may infuse new meaning into existing roles, such as researcher, software developer and project manager. Or they might have new responsibilities, such as responsible AI lead, algorithmic reporter or AI ethicist.

They are working daily to ensure that government websites are accessible to all UK inhabitants or fighting to force the government to reveal the algorithm it is using to identify disabled people as potential benefit fraudsters, subjecting them to stressful checks and months of frustrating bureaucracy. They are doing open-source intelligence investigations into crime, terrorism and human rights abuses, or improving healthcare delivery, or protecting children online. They are working in virtual reality and augmented reality and building and warning about the metaverse.

Some of the leading technology ethicists in the world were either educated and trained in the UK or are living and working here now. This presents us with a unique opportunity to draw on their talents to ensure that ethics is embedded into our AI strategy, rather than treated as an elective or a bolt-on.

This is about more than redesigning our education curriculum or new ways of working. It's about creating the future.

Stephanie Hare is a researcher and broadcaster. Her new book is Technology Is Not Neutral: A Short Guide to Technology Ethics


Genuv Teams With Aiforia to Create an AI-Powered Version of its ATRIVIEW Drug Screening Platform – BioSpace

Posted: at 5:09 am

Feb. 15, 2022 07:00 UTC

SEOUL, South Korea--(BUSINESS WIRE)-- Genuv, Inc., a clinical-stage biotechnology company focused on innovative drug discovery for central nervous system disorders and developing advanced antibody therapies, announced an agreement with the medical technology company Aiforia Technologies Plc to add Aiforia's AI deep learning for medical image analysis to its ATRIVIEW discovery platform.

The new version, ATRIVIEW AI, will seamlessly incorporate Aiforia's deep learning artificial intelligence technology to speed and automate image analysis within Genuv's ATRIVIEW drug discovery platform. ATRIVIEW AI will enable faster drug discovery and simultaneous analysis of multiple biomarkers.

"Aiforia is helping Genuv bring the power of deep learning artificial intelligence to our unique ATRIVIEW drug discovery platform," said Sungho Han, Ph.D., founder and CEO of Genuv. "The additional speed will enable Genuv and our partners to bring more drug candidates to the clinic more quickly."

ATRIVIEW is Genuv's proprietary drug discovery platform. It is used to screen both existing drugs and new substances for neuroprotective and neurogenerative effects. Genuv's lead drug candidate, SNR1611, was discovered with ATRIVIEW. SNR1611 has been shown to restore CNS functions in preclinical models of amyotrophic lateral sclerosis (ALS) and Alzheimer's disease. It is now being studied in a Phase 1/2a clinical trial for ALS.

Aiforia's image analysis tools are used by many major pharmaceutical and biotechnology firms, including Sanofi, Boehringer Ingelheim, AstraZeneca and Bristol Myers Squibb, to help translate images into discoveries, decisions and diagnoses. Aiforia frees scientists and researchers from laborious manual tasks such as counting individual cells in diagnostic images.

The partnership with Genuv marks the first time Aiforia's technology is being used in neuroscience drug discovery for neurodegenerative diseases such as ALS and Alzheimer's disease.

"We're excited to help Genuv extend the capabilities of the remarkable ATRIVIEW platform with the help of our powerful, deep learning technology," said Jukka Tapaninen, CEO of Aiforia Technologies Plc. "Aiforia brings increased efficiency and precision to medical image analysis in fields including neuroscience. ATRIVIEW is a powerful approach to the enormous unmet need in neurodegenerative diseases and we are humbled to be a part of this collaboration."

ABOUT GENUV

Genuv, Inc. is a leader in discovering drugs for central nervous system (CNS) disorders and advanced antibody therapies. The ATRIVIEW drug screening platform uses cell phenotypic and biomarker analyses to discover substances for the development of neurodegenerative disease treatment. The company's SHINE MOUSE and NuvoMab™ platforms generate antibodies with superior affinity, solubility, and stability. Based in Seoul, Korea, Genuv launched its first clinical trial in Korea in 2020. Genuv uses scientific imagination and unique platform technologies to overcome the challenges in debilitating diseases. Learn more at http://www.genuv.com.

ABOUT ATRIVIEW AND ATRIVIEW AI

ATRIVIEW AI is the next generation of ATRIVIEW, Genuv's proprietary CNS drug screening platform. ATRIVIEW technology helps Genuv and its partners discover CNS drug candidates that preserve homeostasis in the brain and induce adult neuronal stem cells to develop into neurons. Genuv believes that neuroprotection and neurogenesis are key to treating Alzheimer's disease, amyotrophic lateral sclerosis (Lou Gehrig's disease), and other neurodegenerative diseases. Compared to ATRIVIEW, ATRIVIEW AI uses fewer resources to process the large quantities of data generated during screening, speeding the discovery of drug candidate hits. ATRIVIEW and ATRIVIEW AI explore the full spectrum of screened substances for potential use as CNS drug candidates.

ABOUT AIFORIA TECHNOLOGIES PLC

Aiforia equips pathologists and scientists in preclinical and clinical labs with powerful deep learning artificial intelligence software for translating images into discoveries, decisions, and diagnoses. The cloud-based Aiforia products and services aim to escalate the efficiency and precision of medical image analysis beyond current capabilities, across a variety of fields from oncology to neuroscience and more. Find out more at aiforia.com.

View source version on businesswire.com: https://www.businesswire.com/news/home/20220214005885/en/

