
Category Archives: Ai

This AI art app is a glimpse at the future of synthetic media – The Verge

Posted: December 10, 2021 at 6:50 pm

If you've been hanging out on Twitter lately, then you've probably noticed a profusion of AI-generated images sprouting all over your timeline like weird, algorithmic visions. These pictures have been generated using a new app called Dream, which lets anyone create AI-powered paintings by simply typing a brief description of what they want to see. It's odd, often uncanny stuff, and extremely fun.

The resulting artwork has its own particular aesthetic, defined by swirling shapes and incoherent objects. The real magic, though, is that no matter what you type, the app will generate something that is visually compelling (at least until we get too used to these toys) and that matches your prompt in often surprisingly apposite ways.

Consider, for example, the image below: "Galactic Archaeology With Metal-Poor Stars." Not only has the app created a picture that captures the mind-boggling galactic scale of a nebula, but the star-like highlights dotted around the space are mostly blue, a tint that is scientifically accurate for metal-poor stars (as metallicity affects their color).

A few quick searches on Twitter reveal plenty more examples, but really, you should have a play with the app yourself to understand it better. (If nothing else, the images it generates are exactly the right size to create a personalized wallpaper for your phone.)

This sort of AI-generated artwork is not new, but it is becoming higher quality and more accessible. Past examples of these sorts of text-to-image models have included research-orientated programs like DALL-E and VQGAN+CLIP, as well as more specialized commercial projects like Artbreeder (which is particularly good at creating portraits of fictional beings and people). With tools such as these, the AI art scene has exploded in recent years, with practitioners creating everything from lifelike Roman emperors to infinite waifus.

The Dream app takes things a step further with its speed, quality, and accessibility. It's available on iOS, Android, and the web and is the work of a Canadian startup named Wombo. The company previously made that AI-powered app that lets you feed in static images to create lip-synced renditions of memeable songs. What exactly powers Dream isn't clear (we've contacted Wombo to find out), but a lot of AI art tech is open-source, which means the firm has likely built on past work to create the app.

Generally, programs like these are trained on vision datasets: huge libraries of images that are tagged based on the objects and scenery they contain. The programs pick out consistent patterns and themes in these images and then use this information to try to generate something that matches the user's prompt. We don't know what dataset Dream's algorithms were trained on, but based on its output, it's safe to say it includes a wide range of imagery, since the app is able to generate pictures that correspond to anime characters and video games.
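
To make the idea of a tagged vision dataset a little more concrete, here is a deliberately tiny Python sketch. It is not how Dream works internally (Wombo has not said what powers it); it only illustrates the basic association such datasets encode, matching the words of a prompt against image tags. Every file name and tag below is hypothetical.

```python
# Purely illustrative: real text-to-image models learn continuous embeddings
# from millions of captioned images; this toy stand-in just matches prompt
# words against hand-written tags. All names and tags are invented.

TAGGED_IMAGES = {
    "img_001.png": {"nebula", "stars", "blue", "space"},
    "img_002.png": {"castle", "sunset", "painting"},
    "img_003.png": {"anime", "character", "city", "neon"},
}

def score(prompt_words, tags):
    """Crude relevance score: fraction of prompt words that appear as tags."""
    return len(prompt_words & tags) / max(len(prompt_words), 1)

def best_matches(prompt, k=2):
    """Rank the tagged library against the prompt and return the top k images."""
    words = set(prompt.lower().split())
    ranked = sorted(TAGGED_IMAGES,
                    key=lambda name: score(words, TAGGED_IMAGES[name]),
                    reverse=True)
    return ranked[:k]

print(best_matches("blue stars in space"))  # ['img_001.png', 'img_002.png']
```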

The accessibility of Dream means it's being put to novel uses, too. It's been used for viral games (like inputting your PhD thesis title and sharing the result) and for more directed projects as well. In one amazing Twitter thread, the writer and illustrator Ursula Vernon (who publishes under the name T. Kingfisher) shared a short comic they'd made using Dream. The comic's characters are drawn by hand, but the backgrounds are AI-generated, with the surreal, shifting quality of the images explained as a result of the setting: a dream library overseen by the Egyptian god of writing, Thoth.

Vernon tweeted about her experience, noting that she had to do a not-insignificant amount of work to prepare the images and that the inability of the program to create scenery from within a space with consistent architecture created its own challenges.

"In Conclusion - does it work visually? I think the answer is sort of," tweeted Vernon. "I'm very aware of the weirdnesses as an artist, obviously. As a dream sequence, the messed up architecture kinda works, but how long can you get away with it? Sooner or later, the reader is probably gonna notice that nothing takes place in the same scene from a different angle."

Despite its obvious limitations, Dream shows us a glimpse of the future of synthetic or AI-generated media. For evangelists in this space, the promise of the technology is one of infinite variety. In the future, they say, games, comics, films, and books will all be generated on the fly in response to our every prompt and whim. And although we're a long, long way from such media matching the quality of human output, limited, hybrid applications will be coming sooner than you think, appearing like something first glimpsed in a dream.


Why we need a new agency to regulate advanced artificial intelligence: Lessons on AI control from the Facebook Files – Brookings Institution

Posted: at 6:50 pm

With the development of ever more advanced artificial intelligence (AI) systems, some of the world's leading scientists, AI engineers, and businesspeople have expressed concerns that humanity may lose control over its creations, giving rise to what has come to be called the AI Control Problem. The underlying premise is that our human intelligence may be outmatched by artificial intelligence at some point and that we may not be able to maintain meaningful control over such systems. If we fail to do so, they may act contrary to human interests, with consequences that become increasingly severe as the sophistication of AI systems rises. Indeed, recent revelations in the so-called Facebook Files provide a range of examples of one of the most advanced AI systems on our planet acting in opposition to our society's interests.

In this article, I lay out what we can learn about the AI Control Problem from the Facebook Files. I observe that the challenges we are facing fall into two categories: the technical problem of direct control of AI, i.e., ensuring that an advanced AI system does what the company operating it wants it to do, and the governance problem of social control of AI, i.e., ensuring that the objectives that companies program into advanced AI systems are consistent with society's objectives. I analyze the scope for our existing regulatory system to address the problem of social control in the context of Facebook but observe that it suffers from two shortcomings. First, it leaves regulatory gaps; second, it focuses excessively on after-the-fact solutions. To pursue a broader and more pre-emptive approach, I argue the case for a new regulatory body, an AI Control Council, that has the power both to dedicate resources to research on the direct AI control problem and to address the social AI control problem by proactively overseeing, auditing, and regulating advanced AI systems.

A fundamental insight from control theory is that if you are not careful about specifying your objectives in their full breadth, you risk generating unintended side effects. For example, if you optimize on just a single objective, it comes at the expense of all the other objectives that you may care about. The general principle has been known for eons. It is reflected, for example, in the legend of King Midas, who was granted a wish by a Greek god and, in his greed, specified a single objective: that everything he touched turn into gold. He realized too late that he had failed to specify the objectives that he cared about in their full breadth when his food and his daughter turned into gold upon his touch.

The same principle applies to advanced AI systems that pursue the objectives that we program into them. And as we let our AI systems determine a growing range of decisions and actions and as they become more and more effective at optimizing their objectives, the risk and magnitude of potential side effects grow.

The revelations from the Facebook Files are a case in point: Facebook, which recently changed its name to Meta, operates two of the world's largest social networks, the eponymous Facebook as well as Instagram. The company employs an advanced AI system, a Deep Learning Recommendation Model (DLRM), to decide which posts to present in the news feeds of Facebook and Instagram. This recommendation model aims to predict which posts a user is most likely to engage with, based on thousands of data points that the company has collected about each of its billions of individual users and trillions of posts.

Facebook's AI system is very effective in maximizing user engagement, but at the expense of other objectives that our society values. As revealed by whistleblower Frances Haugen via a series of articles in the Wall Street Journal in September 2021, the company repeatedly prioritized user engagement over everything else. For example, according to Haugen, the company knew from internal research that the use of Instagram was associated with serious increases in mental health problems related to body image among female teenagers but did not adequately address them. The company attempted to boost meaningful social interaction on its platform in 2018 but instead exacerbated the promotion of outrage, which contributed to the rise of echo chambers that risk undermining the health of our democracy. Many of the platform's problems are even starker outside of the U.S., where drug cartels and human traffickers employed Facebook to do their business, and Facebook's attempts to thwart them were insufficient. These examples illustrate how detrimental it can be to our society when we program an advanced AI system that affects many different areas of our lives to pursue a single objective at the expense of all others.
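
A toy ranking sketch makes the single-objective point concrete. This is emphatically not Facebook's DLRM; it is a hypothetical feed ranker with invented posts and scores, showing how optimizing engagement alone can surface different content than a score that also weighs a second, assumed "well-being" objective.

```python
# Hypothetical toy feed ranker, not Facebook's DLRM. All posts, scores,
# and the "wellbeing" signal below are invented for illustration.

posts = [
    {"id": "calm_news",    "predicted_engagement": 0.40, "wellbeing": 0.90},
    {"id": "outrage_bait", "predicted_engagement": 0.95, "wellbeing": 0.10},
    {"id": "family_photo", "predicted_engagement": 0.60, "wellbeing": 0.80},
]

def rank_single_objective(feed):
    # Optimize on engagement alone: the single-objective case discussed above.
    return sorted(feed, key=lambda p: p["predicted_engagement"], reverse=True)

def rank_multi_objective(feed, weight=0.5):
    # Blend engagement with a second objective; the weight encodes the trade-off.
    return sorted(feed,
                  key=lambda p: weight * p["predicted_engagement"]
                  + (1 - weight) * p["wellbeing"],
                  reverse=True)

print([p["id"] for p in rank_single_objective(posts)])  # outrage_bait ranked first
print([p["id"] for p in rank_multi_objective(posts)])   # family_photo ranked first
```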

The Facebook Files are also instructive for another reason: They demonstrate the growing difficulty of exerting control over advanced AI systems. Facebook's recommendation model is powered by an artificial neural network with some 12 trillion parameters, which currently makes it the largest artificial neural network in the world. The system accomplishes the job of predicting which posts a user is most likely to engage with better than a team of human experts ever could. It therefore joins a growing list of AI systems that can accomplish tasks that were previously reserved for humans at super-human levels. Some researchers refer to such systems as domain-specific, or narrow, superintelligences, i.e., AI systems that outperform humans within a narrow domain of application. Humans still lead when it comes to general intelligence, the ability to solve a wide range of problems in many different domains. However, the club of narrow superintelligences has been growing rapidly in recent years. It includes AlphaGo and AlphaFold, creations of Google subsidiary DeepMind that can play Go and predict how proteins fold at super-human levels, as well as speech recognition and image classification systems that can perform their tasks better than humans. As these systems acquire super-human capabilities, their complexity makes it increasingly difficult for humans to understand how they arrive at solutions. As a result, an AI's creator may lose control of the AI's output.

There are two dimensions of AI control that are useful to distinguish because they call for different solutions. The direct control problem captures the difficulty the company or entity operating an AI system faces in exerting sufficient control, i.e., in making sure the system does what the operator wants it to do. The social control problem reflects the difficulty of ensuring that an AI system acts in accordance with social norms.

Direct AI control is a technical challenge that companies operating advanced AI systems face. All the big tech companies have experienced failures of direct control over their AI systems: for example, Amazon employed a resume-screening system that was biased against women; Google developed a photo categorization system that labeled black men as gorillas; Microsoft operated a chatbot that quickly began to post inflammatory and offensive tweets. At Facebook, Mark Zuckerberg launched a campaign to promote COVID-19 vaccines in March 2021, but one of the articles in the Facebook Files documents that Facebook instead turned into a source of rampant misinformation, concluding that "[e]ven when he set a goal, the chief executive couldn't steer the platform as he wanted."

One of the fundamental problems of advanced AI systems is that the underlying algorithms are, at some level, black boxes. Their complexity makes them opaque and their workings difficult for humans to fully understand. Although there have been some advances in making deep neural networks explainable, these are innately limited by the architecture of such networks. For example, with sufficient effort, it is possible to explain how one particular decision was made (called local interpretability), but it is impossible to foresee all possible decisions and their implications. This exacerbates the difficulty of controlling what our AI systems do.
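
As a rough illustration of what "local interpretability" means in practice, here is a minimal sketch, assuming a trivial stand-in scoring function rather than a real neural network: it explains one particular prediction by nudging each input and measuring how the output moves, which says nothing about the model's behavior on inputs it has not seen.

```python
# Minimal local-interpretability sketch. The "model" is a hypothetical
# stand-in for an opaque network; the feature names are invented.

def model(features):
    # Opaque scoring function (here secretly linear, so the result is exact).
    return 0.7 * features["engagement_history"] + 0.3 * features["recency"]

def local_explanation(features, eps=0.01):
    """Estimate each feature's local influence on this one prediction."""
    baseline = model(features)
    influence = {}
    for name in features:
        perturbed = dict(features, **{name: features[name] + eps})
        influence[name] = (model(perturbed) - baseline) / eps
    return influence

example = {"engagement_history": 0.8, "recency": 0.2}
print(local_explanation(example))  # roughly {'engagement_history': 0.7, 'recency': 0.3}
```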

Frequently, we only detect AI control problems after they have occurredas was the case in all the examples from big tech discussed above. However, this is a risky path with potentially catastrophic outcomes. As AI systems acquire greater capabilities and we delegate more decisions to them, relying on after-the-fact course corrections exposes our society to large potential costs. For example, if a social networking site contributes to encouraging riots and deaths, a course correction cannot undo the loss of life. The problem is of even greater relevance in AI systems for military use. This creates an urgent case for proactive work on the direct control problem and public policy measures to support and mandate such work, which I will discuss shortly below.

In contrast to the technical challenge of the direct control problem, the social AI control problem is a governance challenge. It is about ensuring that AI systems, including those that do precisely what their operators want them to do, are not imposing externalities on the rest of society. Most of the problems identified in the Facebook Files are examples of this, as Zuckerberg seems to have prioritized user engagement, and by extension the profits and market share of his company, over the common good.

The problem of social control of AI systems that are operated by corporations is exacerbated by market forces. It is frequently observed that unfettered market forces may provide corporations with incentives to pursue a singular objective, profit maximization, at the expense of all other objectives that humanity may care about. As we already discussed in the context of AI systems, pursuing a single objective in a multi-faceted world is bound to lead to harmful side effects on some or all members of society. Our society has created a rich set of norms and regulations in which markets are embedded so that we can reap the benefits of market forces while curtailing their downsides.

Advanced AI systems have led to a shift in the balance of power between corporations and society: they have given corporations the ability to pursue single-minded objectives like user engagement in hyper-efficient ways that were impossible before such technologies were available. The resulting potential harms for society are therefore larger and call for more proactive and targeted regulatory solutions.

Throughout our history, whenever we developed new technologies that posed new hazards for society, our nation has made it a habit to establish new regulatory bodies and independent agencies endowed with world-class expertise to oversee and investigate the new technologies. For example, the National Transportation Safety Board (NTSB) and the Federal Aviation Administration (FAA) were established at the onset of the age of aviation, and the Nuclear Regulatory Commission (NRC) at the onset of the nuclear age. By many measures, advanced artificial intelligence has the potential to be an even more powerful technology that may impose new types of hazards on society, as exemplified by the Facebook Files.

Given the rise of artificial intelligence, it is now time to establish a federal agency to oversee advanced artificial intelligence: an AI Control Council that is explicitly designed to address the AI Control Problem, i.e., to ensure that the ever more powerful AI systems we are creating act in society's interest. To be effective in meeting this objective, such a council would need the ability (i) to pursue solutions to the direct AI control problem and (ii) to oversee and, when necessary, regulate the way AI is used across the U.S. economy to address the social control problem, all while ensuring that it does not handicap advances in AI. (See also here for a complementary proposal by Ryan Calo for a federal agency to oversee advances in robotics.) In what follows, I first propose the role and duties of an AI Control Council and then discuss some of the tradeoffs and design issues inherent in the creation of a new federal agency.

First, there are many difficult technical questions related to direct AI control, and even some philosophical questions, that require significant fundamental research. Such work has broad public benefits but is hampered by the fact that the most powerful computing infrastructure, the most advanced AI systems, and increasingly the vast majority of AI researchers are located within private corporations, which do not have sufficient incentive to invest in broader public goods. The AI Control Council should have the ability to direct resources to addressing these questions. Since the U.S. is one of the leading AI superpowers, this would have the potential to steer AI advancement in a more desirable direction at a worldwide level.

Second, to be truly effective, the council would need to have a range of powers to oversee AI development by private and public actors in order to meet the challenge of social control of AI.

Since talent shortages in the AI sector are severe, the Council needs to be designed with an eye towards making it attractive for the world's top experts on AI and AI control to join. Many of the leading experts on AI recognize the high stakes involved in AI control. If the design of the Council carries the promise to make progress in addressing the AI control problem, highly talented individuals may be eager to serve and contribute to meeting one of the greatest technological challenges of our time.

One of the questions that the Council will need to address is how to ensure that its actions steer advances in AI in a desirable direction without holding back technological progress and U.S. leadership in the field. The Council's work on the direct control problem, as well as the lessons learned from impact assessments, will benefit AI advancement broadly because they will allow private sector actors to build on the findings of the Council and of other AI researchers. Moreover, if well-designed, even the oversight and regulation required to address the social control problem can in fact spur technological progress by providing certainty about the regulatory environment and by forestalling a race to the bottom by competing companies.

Another important question in designing the Council is how to resolve domain issues when AI systems are deployed in areas that are already regulated by an existing agency. In that case, it would be most useful for the Council to play an advisory role and assist with expertise as needed. For example, car accidents caused by autonomous vehicles would fall squarely into the domain of the National Highway Traffic Safety Administration (NHTSA), but the new AI Control Council could assist with its expertise on advanced AI.

By contrast, when an advanced AI system gives rise to (i) effects in a new domain or (ii) emergent effects that cut across domains covered by individual agencies, it would fall within the powers of the AI Control Council to intervene. For example, the mental health effects of the recommendation models of social networks would be a new domain that is not covered by existing regulations and that calls for impact assessments, transparency, and potentially for regulation. And if, for example, a social network targets stockbrokers with downbeat content to affect their mood, and by extension stock markets, in order to benefit financially in a way that is not covered by existing regulations on market manipulation, that would be a cross-domain case that the council should investigate alongside the Securities and Exchange Commission (SEC).

From a longer-term perspective, the problems revealed in the Facebook Files are only the beginning of humanity's struggle to control our ever more advanced AI systems. As the amount of computing power available to the leading AI systems and the human and financial resources invested in AI development grow exponentially, the capabilities of AI systems are rising alongside. If we cannot successfully address the AI control problems we face now, how can we hope to do so in the future when the powers of our AI systems have advanced by another order of magnitude? Creating the right institutions to address the AI control problem is therefore one of the most urgent challenges of our time. We need a carefully crafted federal AI Control Council to meet the challenge.

The Brookings Institution is financed through the support of a diverse array of foundations, corporations, governments, individuals, as well as an endowment. A list of donors can be found in our annual reports published online here. The findings, interpretations, and conclusions in this report are solely those of its author(s) and are not influenced by any donation.


For truly ethical AI, its research must be independent from big tech – The Guardian

Posted: at 6:50 pm

A year ago I found out, from one of my direct reports, that I had apparently resigned. I had just been fired from Google in one of the most disrespectful ways I could imagine.

Thanks to organizing done by former and current Google employees and many others, Google did not succeed in smearing my work or reputation, although they tried. My firing made headlines because of the worker organizing that has been building up in the tech world, often due to the labor of people who are already marginalized, many of whose names we do not know. Since I was fired last December, there have been many developments in tech worker organizing and whistleblowing. The most publicized of these was Frances Haugen's testimony in Congress; echoing what Sophie Zhang, a data scientist fired from Facebook, had previously said, Haugen argued that the company prioritizes growth over all else, even when it knows the deadly consequences of doing so.

I've seen this happen firsthand. On 3 November 2020, a war broke out in Ethiopia, the country I was born and raised in. The immediate effects of unchecked misinformation, hate speech and alternative facts on social media have been devastating. On 30 October of this year, I and many others reported a clear genocidal call in Amharic to Facebook. The company responded by saying that the post did not violate its policies. Only after many reporters asked the company why this clear call to genocide didn't violate Facebook's policies, and only after the post had already been shared, liked and commented on by many, did the company remove it.

Other platforms like YouTube have not received the scrutiny they warrant, despite studies and articles showing examples of how they are used by various groups, including regimes, to harass citizens. Twitter and especially TikTok, Telegram and Clubhouse have the same issues but are discussed much less. When I wrote a paper outlining the harms posed by models trained using data from these platforms, I was fired by Google.

When people ask what regulations need to be in place to safeguard us from the unsafe uses of AI we've been seeing, I always start with labor protections and antitrust measures. I can tell that some people find that answer disappointing, perhaps because they expect me to mention regulations specific to the technology itself. While those are important, the #1 thing that would safeguard us from unsafe uses of AI is curbing the power of the companies who develop it and increasing the power of those who speak up against the harms of AI and these companies' practices. Thanks to the hard work of Ifeoma Ozoma and her collaborators, California recently passed the Silenced No More Act, making it illegal to silence workers from speaking out about racism, harassment and other forms of abuse in the workplace. This needs to be universal. In addition, we need much stronger punishment of companies that break already existing laws, such as the aggressive union busting by Amazon. When workers have power, it creates a layer of checks and balances on the tech billionaires whose whim-driven decisions increasingly affect the entire world.

I see this monopoly outside big tech as well. I recently launched an AI research institute that hopes to operate under incentives that are different from those of big tech companies and the elite academic institutions that feed them. During this endeavor, I noticed that the same big tech leaders who push out people like me are also the leaders who control big philanthropy and the government's agenda for the future of AI research. If I speak up and antagonize a potential funder, it is not only my job on the line, but the jobs of others at the institute. And although there are some, albeit inadequate, laws that attempt to protect worker organizing, there is no such thing in the fundraising world.

So what is the way forward? In order to truly have checks and balances, we should not have the same people setting the agendas of big tech, research, government and the non-profit sector. We need alternatives. We need governments around the world to invest in communities building technology that genuinely benefits them, rather than pursuing an agenda that is set by big tech or the military. Contrary to big tech executives' cold-war-style rhetoric about an arms race, what truly stifles innovation is the current arrangement where a few people build harmful technology and others constantly work to prevent harm, unable to find the time, space or resources to implement their own vision of the future.

We need an independent source of government funding to nourish independent AI research institutes that can be alternatives to the hugely concentrated power of a few large tech companies and the elite universities closely intertwined with them. Only when we change the incentive structure will we see technology that prioritizes the wellbeing of citizens rather than a continued race to figure out how to kill more people more efficiently, or make the most amount of money for a handful of corporations around the world.


A unified, agnostic software environment spanning HPC and AI won’t be achieved – The Register

Posted: at 6:50 pm

Register debate It is entirely human, and in some senses desirable, to eliminate as much complexity from the IT stack as makes sense. It is also entirely human to accept the need for choice and to acknowledge the necessary frustration of complexity.

There is a tension between these two, as many Reg readers pointed out in our latest debate: Can a unified, agnostic software environment that spans both HPC and AI applications be achieved?

Readers were a little bit more sceptical about this than they were optimistic, and the naysayers edged out the yeasayers, with 53 per cent of the vote against and 47 per cent of the vote for the motion.


The debate opened with Nicole Hemsoth, co-editor of our sister publication, The Next Platform, arguing for the motion and stipulating that it was perhaps time to have the smallest number of tools supporting the widest array of high-performance and high-scale applications at HPC and AI centres.

"Creating a unified HPC and AI software stack that is both open and agnostic sounds like common sense to us, and the reason it sounds like that is because it is," Hemsoth argued.

"What is preventing us from bringing all minds to bear on solving problems instead of endlessly untangling a matrix of tools and code? Egos and near-religious adherence to preferred platforms, the not-invented-here syndrome, and lack of cooperation is the root of this particular evil and the outcome is this vicious cycle reinventing the wheel. Over and over."

After making this logical and hopeful argument for the convergence of the HPC and AI development tools and runtimes, Hemsoth conceded that having one stack is probably not going to happen, even if it does make sense in the abstract, and that the best we can hope for is a bunch of different stacks that can generate their own native code but also be converted to other platforms, much as AMD's ROCm platform can run CUDA code or kick out CUDA code that allows it to run on Nvidia GPU accelerators. Maybe Nvidia will return the favour in kind, and so might Intel with its oneAPI effort?

Then Rob Farber, who has worked at Los Alamos National Laboratory, Lawrence Berkeley National Laboratory, and Pacific Northwest National Laboratory in his long career, and who is now chief executive officer at TechEnablement, blew our minds a little bit with an intricate and technical argument, espousing the idea that a unified, agnostic software environment is an admirable goal, but one that is difficult to achieve at the source code level because no one, and no single machine architecture, current or yet to be designed, can be left out.

Interestingly, Farber suggests that the key insight is that any unification might not happen at the source code level, but within a compute graph generated by compilers, such as those based on LLVM: a data structure, produced by the compiler regardless of the source language, that describes how data flows and is crunched by the hardware.

"These graphs constitute the 'software environment' that can leverage all the hardware density and parallelism that modern semiconductor manufacturing can pack on a chip," Farber explained. "Performance leverages the decades of work by compiler writers to optimize their compute graphs to maximize use of the hardware compute capabilities and minimize performance limiting external memory accesses. Parallelism can be achieved by pipelining work through the compute graph and instantiating multiple compute graphs to process data in parallel."

Dan Olds, chief research officer at Intersect360, argued pretty vehemently against the motion.

"There is no way in hell this will happen," Olds argued. "Why? Because this is a world of human beings who are working in the interests of themselves and their organizations. APIs are sources of competitive advantage for many companies and, as such, not something that those suppliers should want to completely standardize particularly when that standard is being driven by the largest and most influential supplier in the industry."

We would add that it will be tough to get agreement when there are three major suppliers of compute engines in the data centre (Intel, AMD, and Nvidia), and agree that self-serving standards will not survive in the long run, as Olds pointed out. But the long run can be a very long time. Like decades.

We finished off the debate with me, the other co-editor at The Next Platform, arguing that a single unified HPC and AI development and runtime environment might be less desirable than we might think at first blush.

"In the simulation and modeling and machine learning sectors of the broader high performance computing sector, perhaps one day there will be a unified field, like quantum mechanics and relativity, and perhaps there will be a single programming environment that can span it all," I said.

"But for now, in a post-Moore's Law world where every transistor counts, every bit moved and processed counts, and every joule of energy counts, there is no room for any inefficiency in the hardware and software stack. And that means there is going to be complexity in the programming stack. It is an unavoidable trade-off between application performance and application portability, which we have seen play out over more than five decades of commercial and HPC computing."

History has shown that it is far easier to get standards for the plumbing in the hardware stack (interconnects and protocols and such) than it is to get them higher up in the programming environments. And without knowing what we were all writing, I agreed with my partner at The Next Platform that maybe the best that we could hope for was a level of emulation or conversion like that which is happening between AMD's ROCm and Nvidia's CUDA.

But in the end, as we face a post-Moore's Law world where it keeps getting harder to get more work done in the same thermals and on the same budget (software is getting more complex faster than hardware can keep up, which is why it costs $500m to build an exascale supercomputer instead of the $50m it took to build a terascale one several decades ago), every single piece of code in both the HPC and AI stacks is going to have to be highly tuned to drive efficiency up and thermals and costs down. And that means having a much broader diversity of hardware, and consequently more compilers, more frameworks, and more libraries.

One of the many Anonymous Cowards summed up many of the comments that came in for this debate thus:

"It's a nice idea - write things once and run them anywhere.

Trouble is:

- The vendors need their lock-in. AWS don't want customers to migrate to Google, neither want them to migrate to Azure. There's not many places to go after that.

You don't want to build your exciting new product with its competitive features at the behest of a competitor, let alone some other self-assigned arbiter. It would have an incredible chilling effect.

It's a nice pipe dream, but won't work in reality.

That's not to say it's impossible in a limited extent - HTML was universal enough to give us the World Wide Web, for example."

And reader ScottTx seconded this idea about vendor lock-in being the real barrier:

"Vendor lock-in. Exactly right. None of the stated reasons why this hasn't happened yet are valid. The single obstacle to this sky-pie is that it won't make anybody any money. There would be no incentive to spend the time and resources to build and maintain such a system."

And that is the real rub as far as all four of us are concerned, and many of you.


Passing the Turing Test: AI creates human-like text – Big Think

Posted: at 6:50 pm

In September, I wrote an article that began like this:

The baseball legend Yogi Berra once had a manager tell him to think more when he was up at bat. Berra responded, "How can a guy hit and think at the same time?" It was a fair question. After all, when a pitcher throws a fastball, the batter has about 400 milliseconds to see the pitch, judge its direction, and swing the bat.

The human eye takes about 80 milliseconds to react to a stimulus. That's why Berra was asked to think more, it was thought that his thoughts were taking too long to hit the ball. But Berra was right; thinking less sometimes helps us make decisions.

But the truth is that I actually only wrote the first paragraph. Every word in the second paragraph was generated almost instantly by the AI writing tool Sudowrite, which used only the content of the first paragraph for context. With tools like Sudowrite, you can generate well-structured, human-like, and often coherent (or at least, semi-coherent) writing simply by feeding it a few words or sentences.

These so-called natural language processing (NLP) tools have grown increasingly sophisticated over the past decade. Today, it is possible to use NLP tools to generate essays, emails, fictional stories, and much more. As these technologies continue to evolve, they may soon create a world where the bulk of written human communication, from casual emails to journalistic writing, is generated, or at least augmented, by AI.

In November, the artificial intelligence company OpenAI significantly expanded public access to GPT-3, one of the world's most advanced NLP models, and the one on which Sudowrite is based. Now, anyone with an internet connection can use GPT-3 to do things like carry on reasonably human-like conversations with a chatbot, build AI-based customer support systems, create a spreadsheet that fills itself out, and translate dense writing into text a second-grader could understand. In some cases, GPT-3 can perform complex tasks that it was never specifically trained to perform.

How does it work? In basic terms, GPT-3, which stands for Generative Pre-trained Transformer 3, is an AI that takes a string of text and aims to predict which word should (or is most likely to) come next. To obtain that ability, OpenAI had GPT-3 look at billions of words across the internet, within news articles, forum posts, websites, etc.

The AI learns that some words are more likely to follow a given word than others. Over time, the model fine-tunes itself by tweaking its parameters, which are essentially the parts that learn as the model consumes data, somewhat similar to synapses in the human brain. GPT-3 features about 175 billion trainable parameters.

As the third model version of the tool, GPT-3 has far more parameters than its predecessors: GPT-1 had 117 million while GPT-2 had 1.5 billion. In general, the more parameters in a model, the more sophisticated the model. But while scaling up parameters is effective and relatively straightforward, it is not cheap: Training GPT-3 on vast amounts of unsupervised data cost an estimated $12 million.
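
To make the "predict the next word" objective concrete, here is a toy sketch in Python. It assumes nothing about OpenAI's actual implementation: a simple bigram counter stands in for the model, and its counts play the role the article ascribes to trainable parameters, at a scale of a dozen or so rather than 175 billion.

```python
# Toy next-word predictor. GPT-3 is vastly more complex, but the training
# objective sketched above -- learn which word tends to come next -- looks
# like this in miniature. The tiny corpus is invented for illustration.

from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the cat ate the fish .".split()

# "Training": count which word follows which.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the next word seen most often after `word` in training, if any."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # 'cat' -- seen most often after 'the'
print(predict_next("cat"))  # 'sat' (tied with 'ate'; the first seen wins)
```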

The scaled-up GPT-3 can perform some pretty impressive feats. I experimented with asking the chatbot to generate stories, nearly all of which started in a castle with a king, prince, or princess, unless I instructed the AI to do otherwise. Here's an example:

Human: Tell me a story that does not involve kingdoms or princes or wizards.

AI: Well, the caterpillar sat on the edge of a toadstool one day and thought about all the adventures he could go on. After a while, he decided that he wanted to eat mushrooms. He soon became friends with an AI who lived in the realm of imagination. She provided the caterpillar with lots of delicious stories to read. The caterpillar soon became full of ideas and wanted to create his own reality. So the caterpillar sent out a plea for other caterpillars to help him discover the true meaning of life.

Human: Continue the story and include a part where the caterpillar learns to fly.

AI: Well, the caterpillar used his mind to create a key that was capable of opening a wormhole in space and the caterpillar went on an adventure through the wormhole. When he arrived on the other side, he looked like a butterfly. All the caterpillars who saw him thought that he was awesome and soon they knew that they too could fly. They all wanted to be like the butterfly caterpillar and they all became butterflies just as wise as the butterfly caterpillar.

Human: Give the story a sad ending.

AI: Well, one day, the butterfly began to feel lonely. He wanted to be alone, but he knew how selfish it was for him to be alone because his friends were the ones who made him happy. So he decided to let his friends go so that everybody could find how to be happy for themselves. In the end he didn't come home for a long time, but his friends did and they lived with the happiness that each of them had created.

Human: On a scale of 1 to 10, with 10 being the best, how would you rate your story about the caterpillar?

AI: 3. Besides the ending, it seemed hastily made.

Human: You're pretty proud of the ending of your caterpillar story, huh?

AI: It is certainly one of the best endings I have ever made.

Human: What might make it better?

AI: If the caterpillar took an insect with him on his quest instead of going alone.

Human: Why would that ending be better?

AI: Because the wormhole would be able to go through both the caterpillar and his insect friend.

That last bit highlights one of the shortcomings of current NLP models: While they can generate human-like sentences, they often fail to reflect human-like sentiments, which in this case might have been the virtue of companionship or something like that (though, sure, a wormhole traveling through a couple of insects is still pretty cool).

GPT-3 is not the only game in town when it comes to NLP. There is also Microsoft and NVIDIA's Megatron, with 530 billion parameters, and Alibaba's M6, with 10 trillion, to name a couple of prominent models. DeepMind, the AI company owned by Google parent Alphabet, announced this week the development of its new language model called Gopher, which features 280 billion parameters. Despite the "bigger is better" rule of thumb mentioned earlier, reports suggest that Gopher performed better in certain tasks compared to some of its larger counterparts.

No matter the model, some in the AI community are concerned about the potential for sophisticated NLP tools to be abused. A 2020 paper written by some of the developers behind GPT-3 noted:

Any socially harmful activity that relies on generating text could be augmented by powerful language models. Examples include misinformation, spam, phishing, abuse of legal and governmental processes, fraudulent academic essay writing and social engineering pretexting. Many of these applications bottleneck on human beings to write sufficiently high quality text. Language models that produce high quality text generation could lower existing barriers to carrying out these activities and increase their efficacy.

In addition to potential abuses of these tools, some are concerned that, in the course of training themselves on vast amounts of online text, the models might have picked up biased or hateful language, including racism and sexism. Tests released by OpenAI showed that GPT-3 sometimes associated people of certain races with animals, and the company also reported that some users had apparently been using the model to generate stories involving sexual encounters with children.

The company said it is experimenting with targeted filters to minimize such content.

"To help developers ensure their applications are used for their intended purpose, prevent potential misuse, and adhere to our content guidelines, we offer developers a free content filter. We are currently testing targeted filters for specific content categories with some customers.

"We are also prohibiting certain types of content on our API, like adult content, where our system is not currently able to reliably discern harmful from acceptable use. We are continually working to make our content filters more robust and we intend to allow acceptable use within some categories as our system improves."

But beyond abuses and hateful, illegal, or undesirable content, the more subtle consequence of these tools will likely be an online world where it is plausible that anything you read could have been written by AI, where you can never quite tell whether the people you are speaking with online are actually good communicators or merely leaning on their nonhuman editors.

In short, writing emails will be much easier, but reading them might feel much stranger.


Clearview AI is closer to getting a US patent for its facial recognition technology – The Verge

Posted: at 6:50 pm

Clearview AI is on track to receive a US patent for its facial recognition technology, according to a report from Politico. The company was reportedly sent a notice of allowance by the US Patent and Trademark Office, which means that once it pays the required administration fees, its patent will be officially approved.

Clearview AI builds its facial recognition database using images of people that it scrapes across social media (and the internet in general), a practice that has the company steeped in controversy. The company's patent application details its use of a web crawler to acquire images, even noting that online photos associated with a person's account may help to create additional records of facial recognition data points, which its machine learning algorithm can then use to find and identify matches.

Critics argue that Clearview AI's facial recognition technology is a violation of privacy and that it may negatively impact minority communities. The technology is allegedly less accurate when identifying people of color and women, potentially leading to false arrests when used by law enforcement agencies.

Last year, the company said that its technology was used by over 2,400 police agencies, including the FBI and Department of Homeland Security, to identify suspects. In the aftermath of the Capitol riots this January, Clearview AI said the use of its technology by law enforcement sharply increased as detectives worked to identify those associated with the incident.

The American Civil Liberties Union sued the company last year for violating the Illinois Biometric Information Privacy Act, resulting in Clearview stopping the sale of its technology to private companies and non-law enforcement entities. In November, the Australian government ordered the company to clear its database of all of its citizens' data, and earlier this year, a number of European agencies filed legal complaints against Clearview AI. In addition, a Canadian privacy commissioner called the company's technology illegal mass surveillance.

Clearview AI hasn't even been able to get on Big Tech's good side. Last year, Facebook, LinkedIn, Twitter, and YouTube all sent cease-and-desist letters demanding that the company stop scraping images and videos from their platforms, as the practice is in violation of each site's policies.


The therapists using AI to make therapy better – MIT Technology Review

Posted: at 6:50 pm

Since 2013, Ieso has focused on depression and generalized anxiety disorder, and used data-driven techniques, of which NLP is a core part, to boost recovery rates for those conditions dramatically. According to Ieso, its recovery rate in 2021 for depression is 62% (compared to a national average of 50%) and 73% for generalized anxiety disorder (compared to a national average of 58%).

Ieso says it has focused on anxiety and depression partly because they are two of the most common conditions. But they also respond better to CBT than others, such as obsessive-compulsive disorder. It's not yet clear how far the clinic can extend its success, but it plans to start focusing on more conditions.

In theory, using AI to monitor quality frees up clinicians to see more clients because better therapy means fewer unproductive sessions, although Ieso has not yet studied the direct impact of NLP on the efficiency of care.

"Right now, with 1,000 hours of therapy time, we can treat somewhere between 80 and 90 clients, says Freer. Were trying to move that needle and ask: Can you treat 200, 300, even 400 clients with the same amount of therapy hours?

Unlike Ieso, Lyssn does not offer therapy itself. Instead, it provides its software to other clinics and universities, in the UK and the US, for quality control and training.

In the US, Lyssn's clients include a telehealth opioid treatment program in California that wants to monitor the quality of care being given by its providers. The company is also working with the University of Pennsylvania to set up CBT therapists across Philadelphia with its technology.

In the UK, Lyssn is working with three organizations, including Trent Psychological Therapies Service, an independent clinic, which, like Ieso, is commissioned by the NHS to provide mental-health care. Trent PTS is still trialing the software. Because the NLP model was built in the US, the clinic had to work with Lyssn to make it recognize British regional accents.

Dean Repper, Trent PTS's clinical services director, believes that the software could help therapists standardize best practices. "You'd think therapists who have been doing it for years would get the best outcomes," he says. "But they don't, necessarily." Repper compares it to driving: "When you learn to drive a car, you get taught to do a number of safe things," he says. "But after a while you stop doing some of those safe things and maybe pick up speeding fines."

The point of the AI is to improve human care, not replace it. The lack of quality mental-health care is not going to be resolved by short-term quick fixes. Addressing that problem will also require reducing stigma, increasing funding, and improving education. Blackwell, in particular, dismisses many of the claims being made for AI. "There is a dangerous amount of hype," he says.

For example, there's been a lot of buzz about things like chatbot therapists and round-the-clock monitoring by apps, often billed as Fitbits for the mind. But most of this tech falls somewhere between "years away" and "never going to happen."

"It's not about well-being apps and stuff like that," says Blackwell. "Putting an app in someone's hand that says it's going to treat their depression probably serves only to inoculate them against seeking help."

One problem with making psychotherapy more evidence-based, though, is that it means asking therapists and clients to open up their private conversations. Will therapists object to having their professional performance monitored in this way?

Repper anticipates some reluctance. "This technology represents a challenge for therapists," he says. "It's as if they've got someone else in the room for the first time, transcribing everything they say." To start with, Trent PTS is using Lyssn's software only with trainees, who expect to be monitored. When those therapists qualify, Repper thinks, they may accept the monitoring because they are used to it. More experienced therapists may need to be convinced of its benefits.

The point is not to use the technology as a stick but as support, says Imel, who used to be a therapist himself. He thinks many will welcome the extra information. "It's hard to be on your own with your clients," he says. "When all you do is sit in a private room with another person for 20 or 30 hours a week, without getting feedback from colleagues, it can be really tough to improve."

Freer agrees. At Ieso, therapists discuss the AI-generated feedback with their supervisors. The idea is to let therapists take control of their professional development, showing them what they're good at (things that other therapists can learn from) and not so good at (things that they might want to work on).

Ieso and Lyssn are just starting down this path, but there's clear potential for learning things about therapy that are revealed only by mining sufficiently large data sets. Atkins mentions a meta-analysis published in 2018 that pulled together around 1,000 hours' worth of therapy without the help of AI. "Lyssn processes that in a day," he says. New studies published by both Ieso and Lyssn analyze tens of thousands of sessions.

For example, in a paper published in JAMA Psychiatry in 2019, Ieso researchers described a deep-learning NLP model that was trained to categorize utterances from therapists in more than 90,000 hours of CBT sessions with around 14,000 clients. The algorithm learned to discern whether different phrases and short sections of conversation were instances of specific types of CBT-based conversation, such as checking the client's mood, setting and reviewing homework (where clients practice skills learned in a session), discussing methods of change, planning for the future, and so on, or talk not related to CBT, such as general chat.
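
For a rough sense of what such utterance categorization involves, here is a minimal sketch using scikit-learn's TF-IDF features and logistic regression, a deliberately simple stand-in for Ieso's deep-learning model. The labelled utterances and category names are invented, and with so few examples the toy classifier is only indicative.

```python
# Illustrative utterance classifier, not Ieso's model. Labels and examples
# below are invented; a real system would train on many thousands of
# annotated utterances.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

utterances = [
    ("How has your mood been since we last spoke?", "mood_check"),
    ("Did you manage to complete the thought diary we set?", "homework_review"),
    ("Let's plan what you could try before our next session.", "planning"),
    ("Terrible weather out there today, isn't it?", "general_chat"),
]
texts, labels = zip(*utterances)

# TF-IDF turns each utterance into word weights; logistic regression
# learns which weighted words point to which category.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
classifier.fit(texts, labels)

# With such tiny training data the prediction is only suggestive.
print(classifier.predict(["How are you feeling this week?"]))
```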


Use the power of AI to advance healthcare at Northeastern University Engineering – Study International News

Posted: at 6:50 pm

Human beings are some of the most complex systems in the world, and responses to illness, disease, and impairments manifest in countless different ways. Healthcare professionals, with their deep well of knowledge, combined with the power of artificial intelligence (including millions of data points, global expert opinions, and objective conclusions from machine learning), offer unprecedented support to address healthcare challenges.

At Northeastern University College of Engineering, a top-ranked, R1 research institution, engineering researchers in fields such as Electrical and Computer Engineering, Mechanical and Industrial Engineering, and Bioengineering, are using artificial intelligence and machine learning in a variety of ways to work toward a future that is brighter for human health and more.


In a research effort funded by the National Institutes of Health, Professor Deniz Erdogmus, Associate Chair for Research in electrical and computer engineering (ECE), is partnering with ophthalmologists to better identify and treat retinal disease in premature infants. Retinopathy of prematurity (ROP) is a disease in which the retina is not fully or properly formed yet.

If detected and assessed properly, ROP can be treated to prevent vision loss, but outside of urban areas the technology to do this isn't widely available, he explains.

With a network of healthcare providers in rural North America and in countries such as India and Nepal, Erdogmus is working with retinal images of preemies from specialized cameras, and using a neural network to identify and classify problem signs in the images to create a mobile platform that can be used anywhere to get a better, faster diagnosis of ROP.

In another project on early development, ECE Assistant Professor Sarah Ostadabbas is using artificial intelligence to examine the interplay between pacifier use and sudden infant death syndrome (SIDS).

"Disruption of motor development in infancy is a risk indicator for a host of disabilities and has a cascading effect on multiple domains: social, cognitive, memory, and verbal and non-verbal communication," says Ostadabbas.

Automation of home-based screening tests can increase the percentage of children who are looked at for early delays and atypical development. Ostadabbas is using her expertise in computer vision and advanced artificial intelligence to create algorithms that can search for and identify sleep poses, facial expressions, and pacifier-use behavior, poses that can potentially signify developmental delays in infants.

On the other side of the spectrum, mechanical and industrial engineering Assistant Professor Chun-An (Joe) Chou is using machine learning techniques to help provide objective feedback to medical professionals on sensory or motor impairments in patients after they have suffered strokes to improve clinical assessment and rehabilitation.

"By using automation, we can provide personalized guidance for the medical provider in identifying real-time responses in clinical practice, and eventually in remote care," says Chou. From a delay in noticing a pedestrian crossing the street to not seeing a stop sign, the patient's cognitive responses can be categorized using artificial intelligence to offer providers a window into their impairments and recovery.

Patient monitoring is another application for artificial intelligence. ECE Professor Raymond Fu is using his experience in low-cost and portable sensors to work in digital healthcare by monitoring patient behavior during rehabilitation and exercise to increase effectiveness and decrease injury.

By analyzing data from sources such as muscle sensors, 3D motion-capture sensors, and cameras, Fu aims to teach the AI system to generate alerts about anomalies or to suggest additional services, supporting researchers, therapists, and doctors in their practice.

"By teaching artificial intelligence to perform recognition, classification, tracking, and analysis, all remotely, quickly, and highly efficiently, we can help health practitioners offer instantaneous feedback and support," explains Fu.
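A minimal sketch of the alerting idea, assuming summary features have already been computed for short windows of sensor data: an unsupervised anomaly detector scores each new window, and unusually scored windows trigger an alert for a therapist to review. IsolationForest is one common choice for this kind of task; the article does not say which methods Fu's group actually uses, and the feature layout here is made up.

```python
# Minimal anomaly-alert sketch over per-window sensor features (synthetic data).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)
# Each row: assumed features from a short window of muscle/motion sensor data
# (e.g. mean EMG amplitude, joint-angle range, movement jerk).
normal_windows = rng.normal(0.0, 1.0, size=(500, 6))
new_windows = np.vstack([rng.normal(0.0, 1.0, size=(10, 6)),
                         rng.normal(4.0, 1.0, size=(2, 6))])  # 2 unusual windows

detector = IsolationForest(contamination=0.05, random_state=0).fit(normal_windows)
flags = detector.predict(new_windows)          # -1 = anomaly, +1 = normal
for i, flag in enumerate(flags):
    if flag == -1:
        print(f"alert: window {i} looks anomalous, review exercise form")
```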


Bioengineering Associate Professor and Allen Distinguished Investigator Nikolai Slavov is using artificial intelligence and machine learning to predict molecular mechanisms of disease through protein analysis of thousands of single cells. He is examining the roles of human macrophages, a type of immune cell, in diseases such as cancers and autoimmune disorders.

Slavov and his team use automation to identify, label, and categorize cellular proteins, enabling supervised machine learning that can discern on its own what is significant. Slavov says, "With artificial intelligence, the hope is that we can better understand the physical interactions between molecules, which can be broadly applicable to any disease and early human development."
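A toy version of the supervised step, assuming the lab's pipeline has already produced a cells-by-proteins abundance matrix with labels (for example, macrophage state): a classifier is trained on the labeled cells, and its weights hint at which proteins are most informative. The matrix, labels, and model choice below are illustrative, not Slavov's actual analysis.

```python
# Toy sketch: classify single cells from protein abundances and inspect
# which proteins carry the most weight. Data and labels are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
n_cells, n_proteins = 400, 50
X = rng.normal(size=(n_cells, n_proteins))          # cells x protein abundances
y = (X[:, 0] - X[:, 3] + rng.normal(scale=0.5, size=n_cells) > 0).astype(int)
# y: 0 = macrophage state A, 1 = state B (assumed labels)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X, y)
weights = model.named_steps["logisticregression"].coef_[0]
top = np.argsort(np.abs(weights))[::-1][:5]
print("most informative protein indices:", top)
```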


Artificial intelligence is also addressing the growing concern of opioid abuse. Machine learning techniques are being used to analyze large-scale healthcare data and identify determinants of opioid overdose. Working from large-scale healthcare claims data, mechanical and industrial engineering Assistant Professor Md Noor E Alam and his research team have identified several prominent features that may help flag the risk of overdose in future patients.

"Of the many people dying in the US because of opioid overdose, more than 27% have been from prescribed medication," says Alam. "So, our first step is to make sure that physicians are appropriately identifying patients who have potential to misuse the drug."
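A simplified sketch of how prominent features can be surfaced from claims-style data: fit a logistic regression on binary indicators from (synthetic) patient records and rank features by their odds ratios. The column names, data, and coefficients below are invented for illustration and are not Alam's actual variables or findings.

```python
# Illustrative only: rank claims-data features by association with overdose.
# Column names and data are synthetic, not actual study variables or results.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 2000
df = pd.DataFrame({
    "long_acting_opioid": rng.integers(0, 2, n),
    "concurrent_benzodiazepine": rng.integers(0, 2, n),
    "high_daily_dose": rng.integers(0, 2, n),
    "prior_substance_use_dx": rng.integers(0, 2, n),
})
# Synthetic outcome generated so two features genuinely matter.
logit = -3 + 1.2 * df["concurrent_benzodiazepine"] + 1.5 * df["high_daily_dose"]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(df, y)
odds_ratios = pd.Series(np.exp(model.coef_[0]), index=df.columns).sort_values(ascending=False)
print(odds_ratios)  # features with the largest odds ratios are the most prominent
```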

These are just some of the many ways that accomplished faculty in a variety of engineering disciplines at Northeastern University College of Engineering are innovating using the power of artificial intelligence and machine learning to make a transformative impact on society.

With Northeastern's rigorous academics, top-ranked co-op program, top-tier research activity and facilities, and government, industry, and interdisciplinary collaborations in Boston, Massachusetts, and a global network of campuses in innovation hubs, you too can make an impact.

View all of Northeastern University's engineering Master of Science and Doctor of Philosophy degrees and apply to get started.

Follow Northeastern University College of Engineering on Facebook, Twitter, Instagram, LinkedIn or YouTube

See the article here:

Use the power of AI to advance healthcare at Northeastern University Engineering - Study International News


Jackson Institute announces the Schmidt Program on Artificial Intelligence, Emerging Technologies and National Power – Yale Daily News

Posted: at 6:50 pm


The Jackson Institute for Global Affairs announced on Tuesday that it will establish the Schmidt Program on Artificial Intelligence, Emerging Technologies and National Power, fostering interdisciplinary AI and cyber-focused research and teaching.

The Schmidt Program is a new initiative of International Security Studies at Yale, made possible by a $15.3 million gift from the Schwab Charitable Fund, donated by Eric and Wendy Schmidt on the recommendation of their foundation, Schmidt Futures. The program will serve as a hub for scholars and practitioners to work across disciplines on research and teaching about the potential implications of emerging artificial intelligence.

"It's an interdisciplinary program that will give the policy students better exposure to some of the technical concepts, and in the reverse, it'll help us interact with members of the STEM community across the University, and think about how to bring those students into the mix for some of the current courses that we're offering at Jackson," said Ted Wittenstein, executive director of ISS. "It'll be an opportunity to collaborate and connect across the University, which I think makes all of us excited about the opportunities ahead."

The program was born out of tremendous interest among Jackson Institute students in better understanding the technical knowledge behind emerging threats to international security, and out of conversations about how trends in cybersecurity and artificial intelligence are changing the nature of world affairs, Wittenstein said. He told the News that the institute's current class offerings in these areas are already in incredibly high demand.

The program will invite technologists and practitioners to Yale as Schmidt Program senior fellows, as well as offer postdoctoral fellowships to Schmidt visiting scholars. Some of the program's initial research areas will include cyberwarfare, AI governance and competition in U.S.-China relations. The program will also establish new course offerings in AI and technology, in addition to hosting symposia, workshops and conferences.

"The Schmidt Program will help us build even more bridges across the University, expanding the scope of our collaboration to the transformational threats and opportunities associated with artificial intelligence," Jackson Institute Director Jim Levinsohn said. "AI is reshaping our world, and it's critical that future leaders we're training here at Jackson understand its many implications."

The program is made possible by the significant endowment from Eric and Wendy Schmidt.

Eric Schmidt served as chief executive officer of Google from 2001 to 2011 and as its executive chairman from 2011 to 2018. He is currently Google's technical advisor. Wendy Schmidt is the president of the Schmidt Family Foundation, which holds over $1 billion in philanthropic assets. Together, they co-founded Schmidt Futures in 2017.

The chief executive officer of Schmidt Futures, Eric Braverman '97 LAW '02, is a lecturer at the Yale School of Management and has been a member of the Jackson Advisory Board since 2019.

"We are excited to establish The Schmidt Program," Eric Schmidt wrote in an email to the News. "Understanding the transition to an AI-driven world, and developing a guiding ethic for it, will require commitment and insight from many elements of society: scientists and strategists, statesmen and philosophers, clerics and CEOs. Yale University brings together the best and brightest in all of these fields, forming the interdisciplinary approach that understanding AI will require."

Eric Schmidt has also co-authored four books, including three related to the field of technology. His most recent book, The Age of AI: Our Human Future, was written with former Secretary of State Henry Kissinger and Daniel Huttenlocher.

As part of the program, Wittenstein will be teaching a new yearlong course titled Artificial Intelligence, Emerging Technologies, and National Power along with other faculty from the University.

"There's a wide range of concepts here that are changing the nature of international relations, and those who aspire to policy careers need a technical fluency to try to grapple with these challenges," Wittenstein said. "I think at the Jackson Institute, we instinctively think about how AI interacts with global affairs, security issues and geopolitical tensions in the U.S.-China relationship, and while those are important, there are technical, legal, ethical dimensions of these technologies that we're eager to explore and that our students are really interested in learning more about."

Wittenstein currently teaches undergraduate, graduate and law courses on intelligence, cybersecurity and national security decision making. He previously held positions in the U.S. Department of Defense, Commission on the Intelligence Capabilities of the United States Regarding Weapons of Mass Destruction, Office of the Director of National Intelligence and the Department of State.

Professor of history and ISS Director Arne Westad described Wittenstein as a significant player in developing plans for the program.

"We wanted [the Schmidt Program] to be in the forefront of what people are researching and thinking about with regard to [AI and emerging technology]," Westad told the News. "We also did that with a view to try to think about this in ways that don't just benefit one country: this is not a program that is designed or set up simply to look at these things from a U.S. perspective. We need to look at this from a global perspective and figure out what kind of challenges there are in a global sense."

Westad underscored the importance of interdisciplinary study when researching and teaching emerging technologies.

"I'm constantly reminded from where I sit that intellectually and philosophically much of what we are looking at here has a background and history, which can actually tell us something about where we are today and where we might be going," Westad said.

Westad added that people who study these developments historically can play an important role in understanding past challenges. He also stressed that it is hugely important for social scientists and humanists to understand more about the technological aspects of these issues.

As part of the program, a new AI symposium with visiting experts will be held starting in spring 2022. The symposium will involve public talks and classroom visits that are open to the wider Yale community.

The program will also co-sponsor the 2022 Yale Cyber Leadership Forum, a partnership between the Jackson Institute and Yale Law School's Center for Global Legal Challenges. The forum, directed by law professor Oona Hathaway LAW '97, will invite attorneys, entrepreneurs, policymakers and academics to explore the national security implications of AI development.

"The Schmidt program will give us the resources we need to invest in scholarship and teaching on emerging technologies including artificial intelligence and cyber," Hathaway wrote in an email to the News. "Jackson is a particularly good home for this program, because it serves as a kind of cross-roads for the University, bringing together scholars and students from a range of fields, from Law to Political Science to Engineering to Computer Science to Data Sciences."

Professor of computer science Joan Feigenbaum emphasized the significance of this program to the study of technology at Yale.

According to Feigenbaum, the interplay between AI and international relations, and more broadly between computer science and politics, has become increasingly important on the global stage.

"There is a great deal of student interest in the topic here at Yale, and the Schmidt program will provide resources and opportunities for the development of new courses and new interdisciplinary research projects," Feigenbaum wrote in an email to the News.

Eric Schmidt echoed Feigenbaum's sentiments and said that the upcoming opening of the Jackson School of Global Affairs inspired the opportunity to design the curriculum for the 21st century rather than the 20th.

The Jackson School of Global Affairs, opening in fall 2022, will be the first new professional school at Yale in over 40 years.

"Encouraging greater research and studies around the impact of AI across all segments of society is one of the greatest investments we could make for our future," Eric Schmidt wrote. "Wendy and I are confident that The Schmidt Program will foster innovation to uncover the complexities of AI technologies and how they will profoundly transform the way human beings experience the world."

In March, Eric and Wendy Schmidt donated $150 million to the Broad Institute for advanced biology research using data science and artificial intelligence to battle diseases. On Tuesday, Mark Zuckerberg and Priscilla Chan pledged $500 million to Harvard to study natural and artificial intelligence.

Miranda Jeyaretnam is the beat reporter covering the Jackson Institute of Global Affairs and developments at the National University of Singapore and Yale-NUS for the YDN's University desk. She was formerly the opinion editor for the Yale Daily News under the YDN Board of 2022 and wrote as a staff columnist for her opinion column 'Crossing the Aisle' in Spring 2020. From Singapore, she is a sophomore in Pierson College, majoring in English.

Go here to see the original:

Jackson Institute announces the Schmidt Program on Artificial Intelligence, Emerging Technologies and National Power - Yale Daily News


AI Is Discovering Patterns in Pure Mathematics That Have Never Been Seen Before – ScienceAlert

Posted: December 5, 2021 at 12:02 pm

We can add suggesting and proving mathematical theorems to the long list of what artificial intelligence is capable of: Mathematicians and AI experts have teamed up to demonstrate how machine learning can open up new avenues to explore in the field.

While mathematicians have been using computers to discover patterns for decades, the increasing power of machine learning means that these networks can work through huge swathes of data and identify patterns that haven't been spotted before.

In a newly published study, a research team used artificial intelligence systems developed by DeepMind, the same company that has been deploying AI to solve tricky biology problems and improve the accuracy of weather forecasts, to unknot some long-standing math problems.

"Problems in mathematics are widely regarded as some of the most intellectually challenging problems out there," says mathematician Geordie Williamsonfrom the University of Sydney in Australia.

"While mathematicians have used machine learning to assist in the analysis of complex data sets, this is the first time we have used computers to help us formulate conjectures or suggest possible lines of attack for unproven ideas in mathematics."

The team showed the AI helping to advance a proof concerning Kazhdan-Lusztig polynomials, a problem involving the symmetry of higher-dimensional algebra that has remained unsolved for 40 years.

The research also demonstrated how a supervised learning model was able to spot a previously undiscovered relationship between two different types of invariants of mathematical knots, leading to an entirely new theorem.

Knot theory in math plays into various other challenging fields of science as well, including genetics, fluid dynamics, and even the behavior of the Sun's corona. The discoveries that AI makes can therefore lead to advances in other areas of research.

"We have demonstrated that, when guided by mathematical intuition, machine learning provides a powerful framework that can uncover interesting and provable conjectures in areas where a large amount of data is available, or where the objects are too large to study with classical methods," says mathematician Andrs Juhszfrom the University of Oxford in the UK.

One of the benefits of machine learning systems is the way that they can look for patterns and scenarios that programmers didn't specifically code them to look out for: they take their training data and apply the same principles to new situations.

The research shows that this sort of high-speed, ultra-reliable, large-scale data processing can act as an extra tool working with mathematicians' natural intuition. When you're dealing with complex, lengthy equations, that can make a significant difference.
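A stripped-down sketch of that workflow, under heavy assumptions: train a model to predict one knot invariant from several others (synthetic data stands in for a real table of invariants), then use permutation importance to ask which inputs the model actually relies on; a lopsided ranking is the kind of signal that can point a mathematician toward a conjecture worth proving. This is not DeepMind's code, dataset, or attribution method.

```python
# Sketch of "train a model, then ask which inputs matter" on synthetic
# knot-invariant data. Invariant names and the hidden relationship are made up.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(5)
n_knots = 2000
features = {
    "volume": rng.uniform(2, 20, n_knots),
    "meridian_length": rng.uniform(0.5, 2.0, n_knots),
    "longitude_length": rng.uniform(5, 40, n_knots),
    "crossing_number": rng.integers(3, 50, n_knots).astype(float),
}
X = np.column_stack(list(features.values()))
# Hidden synthetic relationship: the target depends mostly on two inputs.
y = 0.5 * features["volume"] / features["meridian_length"] + rng.normal(0, 0.1, n_knots)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
for name, score in sorted(zip(features, result.importances_mean), key=lambda p: -p[1]):
    print(f"{name:18s} importance {score:.3f}")  # high scores hint at a real relationship
```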

The researchers hope that their work leads to many further partnerships between academics in the fields of mathematics and artificial intelligence, opening up the opportunity for findings that would otherwise be undiscovered.

"AI is an extraordinary tool," says Williamson. "This work is one of the first times it has demonstrated its usefulness for pure mathematicians, like me."

"Intuition can take us a long way, but AI can help us find connections the human mind might not always easily spot."

The research has been published in Nature.

View original post here:

AI Is Discovering Patterns in Pure Mathematics That Have Never Been Seen Before - ScienceAlert

