New AGI hardware in progress for artificial general intelligence – Information Age

The partnership between SingularityNET and Simuli.ai aims to speed up artificial general intelligence advancement, and will focus on the creation of a Metagraph Pattern Matching Chip (MPMC).

This new chip will host the two knowledge graph search algorithms, Breadth-First Search (BFS) and Depth-First Search (DFS).

Combining these building blocks for AI systems into one chip can enable more intuitive knowledge representation, reasoning and decision-making.
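
The article does not say how the MPMC implements these searches in silicon, but the two algorithms themselves are standard. As a rough software illustration only (the toy graph and Python code below are the editor's, not SingularityNET's or Simuli's), breadth-first search explores a knowledge graph level by level, while depth-first search follows one chain of relations as far as it goes before backtracking:

```python
from collections import deque

# Toy knowledge graph as an adjacency list (illustrative data only).
GRAPH = {
    "dog": ["mammal", "pet"],
    "mammal": ["animal"],
    "pet": ["animal"],
    "animal": ["living_thing"],
    "living_thing": [],
}

def bfs(graph, start):
    """Visit nodes level by level, closest relations first."""
    seen, order, queue = {start}, [], deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return order

def dfs(graph, start, seen=None):
    """Follow one chain of relations as deep as possible before backtracking."""
    if seen is None:
        seen = set()
    seen.add(start)
    order = [start]
    for nxt in graph.get(start, []):
        if nxt not in seen:
            order.extend(dfs(graph, nxt, seen))
    return order

print(bfs(GRAPH, "dog"))  # ['dog', 'mammal', 'pet', 'animal', 'living_thing']
print(dfs(GRAPH, "dog"))  # ['dog', 'mammal', 'animal', 'living_thing', 'pet']
```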

Once created, the MPMC will be integrated with Simuli's pre-existing Hypervector Chip, which processes data patterns with fewer processors than traditional hardware, to create an AGI board that aims to accelerate the realisation of artificial general intelligence capabilities.

The new hardware is set to be utilised by SingularityNET's spin-off project TrueAGI to offer AGI-as-a-service to enterprise organisations.

Together, SingularityNET and Simuli.ai aim to mitigate common hardware constraints faced by AI developers, such as being limited to graphics processing units (GPUs), which help devices handle graphics, effects and videos.

In addition, the project looks to lower the cost of AI training and inference by allowing both to be achieved with less hardware.

"The Simuli AGI board has strong potential to catalyse the emergence of a new era of AI techniques and functions," said Dr. Ben Goertzel, CEO of SingularityNET.

"The core of what we need to progress from narrow AI to AGI is of course the right cognitive architectures and learning and reasoning algorithms, but without the right hardware, even the best mathematics and software can't run efficiently enough to have practical impact.

"So many AI methods we've been working on for decades are finally going to be able to show their stuff in a practical sense when given the right hardware to run on."

Rachel St. Clair, CEO of Simuli.ai, commented: "The power of optimising large-scale AGI models to run faster by leveraging Simuli's hardware platform is multifold.

"First, these AGI frameworks get rapid development that wasn't exactly possible without large compute cost prior. Then, such a device as the AGI motherboard can expand the types of code that can be run scalably and efficiently in a single instance of the AGI model, for example Hyperon.

"Also, scalable computing is better for the longevity of the planet and the technology itself, so optimising on both the software and hardware sides is key. This will likely result in AGI that's better for everyone. We're excited to be playing a role in tipping the scale from AI to AGI."

Artificial general intelligence, a forward-looking term referring to machine intelligence that can solve problems and complete tasks to the same standard as a human, has been cited as a next step in AI development, with generative AI being the most prominent current innovation trend in the space.



AI At The Crossroads: Navigating Job Displacement, Ethical Concerns, And The Future Of Work – Forbes

Artificial intelligence (AI) is gaining more attention as its role in the future of work becomes increasingly apparent.

Last week, the Writers Guild of America (WGA) went on strike over the proposed use of AI, specifically ChatGPT, in television and film writing. The guild argued that the use of AI would replace jobs, increase compensation disparities and lead to greater job insecurity for writers, Time reported. While this was happening, Geoffrey Hinton, the 75-year-old scientist widely seen as the godfather of AI, announced his resignation from Google, warning of the growing dangers in the field.

The BBC reported that Hinton, whose research on neural networks and deep learning paved the way for AI systems like ChatGPT (which, according to the Wall Street Journal, is causing a stock-market ruckus), expressed regret over his work and raised concerns about bad actors' potential misuse of AI. Hinton's departure comes at a time when AI advancements are accelerating at an unprecedented pace. For example, KPMG announced last week that it would make generative AI available to all employees, including partners, for both client-facing and internal work.

Meanwhile, during an interview with the Wall Street Journal, DeepMind CEO Demis Hassabis expressed his belief that a form of Artificial General Intelligence (AGI) could be developed within a few years. Elsewhere, implications for medical leaders are becoming apparent. According to Erwin Loh, who explained in BMJ Leader, "new technologies like ChatGPT and generative AI have the potential to transform the way we practice medicine, and revolutionize the healthcare system." Loh's article provided a great explanation of AI technologies in the context of healthcare and also offered insights into how they could be used to improve delivery.

So, it's clear there is enormous potential to revolutionize the world of work. The question now is: how do we make sure that AI works for us rather than against us? After all, the opportunities are vast and growing. For example, research published by MIT Sloan Management Review concluded that "data can help companies better understand and improve the employee experience, leading to a more productive workforce." But it must be remembered that job displacement is a genuine concern. Insider reported that CEOs are getting closer to finally saying it: AI will wipe out more jobs than they can count.

One study conducted by researchers from OpenAI, OpenResearch, and the University of Pennsylvania revealed that around 80% of the US workforce could see at least 10% of their tasks affected by the introduction of GPTs (Generative Pre-trained Transformers), with around 19% of workers experiencing at least 50% of their tasks impacted. Having reviewed the study, Natalia Weisz, a professor at Argentina's IAE Business School, concluded in an interview that, unlike previous technological revolutions, higher-paying occupations with more education requirements, such as degrees and even doctorates, are more exposed than those that do not require a profession. "We are moving into a phase in which traditional professions may very well be disrupted," said Weisz.

"We are living in a time of rapid technological change. We must be mindful to ensure that these advances do not lead to job losses or create an unequal playing field," said Shrenik Rao, editor-in-chief of Madras Courier, in an interview. Rao predicted that "bots could replace journalists and columnists. Illustrators, cartoonists and artists could lose their jobs, too. Instead of telling stories in the public interest, stories will be produced based on what will garner views or clicks."

Rao, who is also a columnist at Haaretz, went on to probe the ethical implications of AI-driven news production. "What will happen with journalistic ethics? Will the news be produced to serve certain political agendas? Will there be an objective filter for news and images?" He concluded that a lack of transparency over how AI is used in journalism could lead to further mistrust in the media.

Governments, industries, and individuals need to engage in a collaborative effort to navigate this brave new world. By fostering open conversations, creating robust regulatory frameworks, and prioritizing education and adaptation, we can ensure that artificial intelligence serves as a force for good, empowering humanity to overcome challenges and reach new heights. Leadership is, therefore, required to ensure that AI is used responsibly and ethically: it is time for all to come together and propel AI forward in a way that works for everyone.

Disclaimer: The author of this article is an Associate Editor at BMJ Leader. This role is independent and distinct from his role as the author of this article. It should be noted that despite his position at BMJ Leader, he had no participation in the review, production, or publication of the academic paper referenced in this article, specifically the work by Erwin Loh on the potential of AI technologies in healthcare.



I created a billion-pound start-up business – Elon Musk & Jeff Bezos asked to meet me – here's the secret to… – The Sun

A DAD who created a billion-pound start-up business has revealed the secret to his success.

Emad Mostaque, 40, is the founder and CEO of artificial intelligence giant Stability AI and has recently been in talks with the likes of Elon Musk and Jeff Bezos.

But the London dad-of-two has worked hard to get where he is today - and doesn't plan on stopping any time soon.

Emad has gone from developing AI at home to help his autistic son, to employing 150 people across the globe for his billion-pound empire.

The 40-year-old usually calls Notting Hill home, but has started travelling to San Francisco for work.

On his most recent trip, Emad met with Bezos, the founder and CEO of Amazon, and made a deal with Musk, the CEO of Twitter.

He says the secret to his success in the AI world is using it to help humans, not overtake them.

Emad told The Times: "I have a different approach to everyone else in this space, because I'm building narrow models to augment humans, whereas almost everyone else is trying to build an AGI [artificial general intelligence] to pretty much replace humans and look over them."

Emad is from Bangladesh, but his parents moved to the UK when he was a boy and settled the family in London's Walthamstow.

The dad said he was always good at numbers in school but struggled socially as he has Asperger's and ADHD.

The 40-year-old studied computer science and maths at Oxford, then became a hedge fund manager.

But when Emad's son was diagnosed with autism he quit to develop something to help the youngster.

Emad recalled: "We built an AI to look at all the literature and then extract what could be the case, and then the drug repurposing."

He says that homemade AI allowed his family to create an approach that took his son to a better, more cheerful place.

And, as a result, Emad inspired himself.

He started a charity that aims to give tablets loaded with AI tutors to one billion children.

He added: "Can you imagine if every child had their own AI looking out for them, a personalised system that teaches them and learns from them?

"In 10 to 20 years, when they grow up, those kids will change the world."

Emad also founded the billion-pound start-up Stability AI in recent years, and it's one of the companies behind Stable Diffusion.

The tool has taken the world by storm in recent months with its ability to create images that could pass as photos from a mere text prompt.

Today, Emad is continuing to develop AI - and he says it is one of the most important inventions in history.

He explained it as somewhere between fire and the internal combustion engine.


Opinion | We Need a Manhattan Project for AI Safety – POLITICO

At the heart of the threat is what's called the alignment problem: the idea that a powerful computer brain might no longer be aligned with the best interests of human beings. Unlike fairness, or job loss, there aren't obvious policy solutions to alignment. It's a highly technical problem that some experts fear may never be solvable. But the government does have a role to play in confronting massive, uncertain problems like this. In fact, it may be the most important role it can play on AI: to fund a research project on the scale it deserves.

There's a successful precedent for this: The Manhattan Project was one of the most ambitious technological undertakings of the 20th century. At its peak, 129,000 people worked on the project at sites across the United States and Canada. They were trying to solve a problem that was critical to national security, and which nobody was sure could be solved: how to harness nuclear power to build a weapon.

Some eight decades later, the need has arisen for a government research project that matches the original Manhattan Project's scale and urgency. In some ways the goal is exactly the opposite of the first Manhattan Project, which opened the door to previously unimaginable destruction. This time, the goal must be to prevent unimaginable destruction, as well as merely difficult-to-anticipate destruction.

Don't just take it from me. Expert opinion only differs over whether the risks from AI are unprecedentedly large or literally existential.

Even the scientists who laid the groundwork for today's AI models are sounding the alarm. Most recently, the "Godfather of AI" himself, Geoffrey Hinton, quit his post at Google to call attention to the risks AI poses to humanity.

That may sound like science fiction, but it's a reality that is rushing toward us faster than almost anyone anticipated. Today, progress in AI is measured in days and weeks, not months and years.

As little as two years ago, the forecasting platform Metaculus put the likely arrival of weak artificial general intelligence (a unified system that can compete with the typical college-educated human on most tasks) sometime around the year 2040.

Now forecasters anticipate AGI will arrive in 2026. Strong AGIs with robotic capabilities that match or surpass most humans are forecasted to emerge just five years later. With the ability to automate AI research itself, the next milestone would be a superintelligence with unfathomable power.

Don't count on the normal channels of government to save us from that.

Policymakers cannot afford a drawn-out interagency process or notice-and-comment period to prepare for what's coming. On the contrary, making the most of AI's tremendous upside while heading off catastrophe will require our government to stop taking a backseat role and act with a nimbleness not seen in generations. Hence the need for a new Manhattan Project.

A "Manhattan Project for X" is one of those clichés of American politics that seldom merits the hype. AI is the rare exception. Ensuring AGI develops safely and for the betterment of humanity will require public investment into focused research, high levels of public and private coordination and a leader with the tenacity of General Leslie Groves, the project's infamous overseer, whose aggressive, top-down leadership style mirrored that of a modern tech CEO.

Ensuring AGI develops safely and for the betterment of humanity will require a leader with the tenacity of General Leslie Groves, Hammond writes. | AP Photo

I'm not the only person to suggest it: AI thinker Gary Marcus and the legendary computer scientist Judea Pearl recently endorsed the idea as well, at least informally. But what exactly would that look like in practice?

Fortunately, we already know quite a bit about the problem and can sketch out the tools we need to tackle it.

One issue is that large neural networks like GPT-4 (the generative AIs that are causing the most concern right now) are mostly a black box, with reasoning processes we can't yet fully understand or control. But with the right setup, researchers can in principle run experiments that uncover particular circuits hidden within the billions of connections. This is known as mechanistic interpretability research, and it's the closest thing we have to neuroscience for artificial brains.
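
For readers who want a feel for what such an experiment looks like, here is a deliberately tiny, self-contained illustration of the general idea: record a network's hidden activations on controlled inputs and ask which internal unit tracks the feature you varied. The two-layer toy network and probe data below are invented for this sketch and bear no relation to GPT-4's actual internals or to how interpretability labs work at scale.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny, randomly initialised two-layer network standing in for the "black box".
W1, W2 = rng.normal(size=(3, 8)), rng.normal(size=(8, 1))

def forward(x):
    """Return the network output and its hidden-layer activations."""
    hidden = np.maximum(0, x @ W1)  # ReLU hidden layer
    return hidden @ W2, hidden

# Controlled experiment: vary only the first input feature, hold the others fixed.
feature_values = np.linspace(-1.0, 1.0, 21)
probe_inputs = [np.array([v, 0.0, 0.0]) for v in feature_values]

# Record the hidden activations produced by each probe input.
activations = np.stack([forward(x)[1] for x in probe_inputs])

# Which hidden unit's activation tracks the feature we varied most closely?
correlations = [abs(np.corrcoef(feature_values, activations[:, i])[0, 1])
                for i in range(activations.shape[1])]
print("hidden unit most sensitive to feature 0:", int(np.argmax(correlations)))
```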

Unfortunately, the field is still young, and far behind in its understanding of how current models do what they do. The ability to run experiments on large, unrestricted models is mostly reserved for researchers within the major AI companies. The dearth of opportunities in mechanistic interpretability and alignment research is a classic public goods problem. Training large AI models costs millions of dollars in cloud computing services, especially if one iterates through different configurations. The private AI labs are thus hesitant to burn capital on training models with no commercial purpose. Government-funded data centers, in contrast, would be under no obligation to return value to shareholders, and could provide free computing resources to thousands of potential researchers with ideas to contribute.

The government could also ensure research proceeds in relative safety and provide a central connection for experts to share their knowledge.

With all that in mind, a Manhattan Project for AI safety should have at least five core functions:

1. It would serve a coordination role, pulling together the leadership of the top AI companies (OpenAI and its chief competitors, Anthropic and Google DeepMind) to disclose their plans in confidence, develop shared safety protocols and forestall the present arms-race dynamic.

2. It would draw on their talent and expertise to accelerate the construction of government-owned data centers managed under the highest security, including an "air gap," a deliberate disconnection from outside networks, ensuring that future, more powerful AIs are unable to escape onto the open internet. Such facilities would likely be overseen by the Department of Energy's Artificial Intelligence and Technology Office, given its existing mission to accelerate the demonstration of trustworthy AI.

3. It would compel the participating companies to collaborate on safety and alignment research, and require models that pose safety risks to be trained and extensively tested in secure facilities.

4. It would provide public testbeds for academic researchers and other external scientists to study the innards of large models like GPT-4, greatly building on existing initiatives like the National AI Research Resource and helping to grow the nascent field of AI interpretability.

5. And it would provide a cloud platform for training advanced AI models for within-government needs, ensuring the privacy of sensitive government data and serving as a hedge against runaway corporate power.

The alternative to a massive public effort like this, attempting to kick the can on the AI problem, won't cut it.

The only other serious proposal right now is a pause on new AI development, and even many tech skeptics see that as unrealistic. It may even be counterproductive. Our understanding of how powerful AI systems could go rogue is immature at best, but stands to improve greatly through continued testing, especially of larger models. Air-gapped data centers will thus be essential for experimenting with AI failure modes in a secured setting. This includes pushing models to their limits to explore potentially dangerous emergent behaviors, such as deceptiveness or power-seeking.

The Manhattan Project analogy is not perfect, but it helps to draw a contrast with those who argue that AI safety requires pausing research into more powerful models altogether. The project didn't seek to decelerate the construction of atomic weaponry, but to master it.

Even if AGIs end up being farther off than most experts expect, a Manhattan Project for AI safety is unlikely to go to waste. Indeed, many less-than-existential AI risks are already upon us, crying out for aggressive research into mitigation and adaptation strategies. So what are we waiting for?


Artificial Intelligence Will Take Away Jobs and Disrupt Society, says Zerodha CEO Nithin Kamath – DATAQUEST

The emergence of artificial general intelligence (AGI) brings both positive and negative implications. On the positive side, AGI has the potential to significantly enhance the productivity and effectiveness of professionals in various fields. By leveraging its capabilities, experts can achieve higher levels of efficiency and accomplish tasks more effectively than ever before. However, alongside these advancements, the rise of AGI also raises valid concerns. One major worry is the potential loss of jobs due to automation.

Along the same lines, Nithin Kamath, founder and CEO of Zerodha, tweeted that while the company would never fire any of its employees over a piece of technology, concerns about AI taking away jobs and disrupting society as a whole were real. "We've just created an internal AI policy to give clarity to the team, given the AI/job loss anxiety. This is our stance: we will not fire anyone on the team just because we have implemented a new piece of technology that makes an earlier job redundant. In 2021, we'd said that we hadn't found AI use cases when everyone was claiming to be powered by AI without any AI. With recent breakthroughs in AI, we finally think AI will take away jobs and can disrupt society," he said.

As AGI becomes more sophisticated, there is a risk that certain professions might be replaced by intelligent machines, leading to unemployment and economic disruption. This calls for thoughtful consideration of strategies to address the impact on the workforce and ensure a smooth transition to the era of AGI. Kamath, quoting an internal chat, said: "AI on its own won't wake up and kill us all (for a while, at least!). The current capitalistic and economic systems will rapidly adopt AI, accelerating inequality and loss of human agency. That's the immediate risk."

Another concern is the ethical and safety implications associated with AGI development. AGI systems possess immense computational power and may exhibit behaviors and decision-making processes that are difficult to predict or control. Ensuring that AGI systems align with human values, ethics, and safety standards becomes paramount to prevent unintended consequences or misuse of this powerful technology.

"In today's capitalism, businesses prioritize shareholder value creation above stakeholders like employees, customers, vendors, the country, and the planet. Markets incentivize business leaders to prioritize profits over everything else; if not, shareholders vote them out. Many companies will likely let go of employees and blame it on AI. In the process, companies will earn more and make their shareholders wealthier, worsening wealth inequality. This isn't a good outcome for humanity," opined Kamath.

Moreover, there are broader societal and philosophical concerns regarding AGI's impact on human existence. Questions about the potential loss of human uniqueness, the boundaries of consciousness, and the moral responsibility associated with creating highly intelligent machines raise profound ethical dilemmas that require careful reflection and regulation. "While the hope is for governments worldwide to put some guardrails, it may be unlikely given the deglobalization rhetoric. No country would want to sit idle while another becomes more powerful on the back of AI," cautioned Kamath.

In summary, while the advent of artificial general intelligence offers significant benefits, such as improved professional efficiency, it also introduces legitimate concerns. It is crucial to address the potential socioeconomic impacts, ethical considerations, and philosophical questions associated with AGI to harness its potential for the betterment of humanity.


China’s State-Sponsored AI Claims it Will Surpass ChatGPT by End … – Tom’s Hardware

Chinese company iFlytek yesterday threw itself at OpenAI's bread and butter by announcing a product that's aimed at competing with ChatGPT. The company's "Spark Desk" was described by the company's founder and president Liu Qingfeng as a "cognitive big model" and even as the "dawn of artificial general intelligence." Beyond those buzzwords was also a promise, however: that Spark Desk would surpass OpenAI's ChatGPT by the end of the year.

We should be happy that we can chalk some of the above up to corporate marketing buzzwords. I can assure you my mind will be elsewhere if/when I have to write an article announcing that Artificial General Intelligence (AGI) is here. Perhaps even more so if that AGI were Chinese, as I'm unsure I can trust an AGI that thinks social scoring systems are the bread and butter of its "cognitive big model."

All that aside, however, there are a number of interesting elements to this release. Every day we hear of another ChatGPT spawn, whether officially or unofficially linked to the work of OpenAI. With the tech's impact being what it is (even if that impact is still cloudy and mostly unrealized), it was only natural that every player with enough money and expertise would pursue its own models, adapted to its own public and stakeholders.

Of course, the question is whether iFlytek and Spark Desk can actually deliver on their claims, specifically that of one-upping OpenAI at its own game. The answer will likely depend on multiple factors and how you view the subject.

ChatGPT wasn't made for the Eastern public. There's a training data, linguistic and cultural chasm that separates ChatGPT's impact in the East compared to the Western world. And by that definition, it's entirely possible that "Spark Desk" will offer Eastern users a much improved (and more relevant) user experience compared to ChatGPT, given enough maturation time. Perhaps that could even happen before the end of the year. It certainly already offers a better experience for Chinese users in particular, as the country pre-emptively banned ChatGPT from passing beyond its Great Firewall (except in Hong Kong).

The decision to ban ChatGPT likely stifled innovation that it would have otherwise triggered. We need only look to our own news outlets to see the number of industries being impacted by the tech. That's something no country can willingly give up on at a whim; it really was simply a matter of time before a competent competitor was announced.

We'll have to wait for year's end to see whether iFlytek's claims materialize or evaporate. It'll be hard enough to quantitatively compare the two LLMs, especially when their target users are so culturally different. One thing is for sure: OpenAI won't simply rest on its laurels and wait for other industry players to catch up, especially not when there's a target date for that to happen.

The ChatGPT version iFlytek's Spark Model will have to contend with won't be the same GPT we know today. Perhaps OpenAI's expertise and time-to-market advantages will keep it ahead in the race (and that's what we'd expect); but we also have to remember there are multiple ways to achieve a wanted result. It's been shown that the U.S.'s technological sanctions against China have had less of an effect than hoped for, and that the country is willing to shoulder the burden (and costs) of training cutting-edge technology on outdated, superseded hardware, millions of dollars and hundreds of extra training hours be damned.

A few extra billions could be just enough to bridge the gap. That's China's bet, at least.


Towards Artificial General Intelligence, ChatGPT 5 is on Track – Analytics Insight

Towards artificial general intelligence, ChatGPT 5 is on track and has already begun to trend

Towards artificial general intelligence, ChatGPT 5 is on track and has already begun to trend on Twitter, with many people speculating about the ChatGPT platform's future evolution. These ChatGPT-5 Twitter debates have drawn even more interest than those over ChatGPT-4. Many users have great hopes for the future edition, believing that it would feature immaculate visuals, something that ChatGPT-4 has yet to achieve.

OpenAI is actively developing new features for ChatGPT and intends to release GPT-5 later this winter. According to studies of GPT-5's capabilities, OpenAI may be on the verge of reaching Artificial General Intelligence (AGI) and becoming practically indistinguishable from a person in its capacity to create natural language answers.

While ChatGPT may become indistinguishable from a person in terms of natural language answers, it will still outperform the human brain in terms of data processing and content creation. ChatGPT already gained considerable new features as a result of the recent upgrade to GPT-4, enhancing the chatbot's utility as a tool. ChatGPT now supports multimodal input, allowing it to receive data via text and graphics and produce replies in many languages.

Some intriguing tweets indicate that ChatGPT-4 has spontaneously addressed ChatGPT-5, even when not prompted to do so. This raises the question of what ChatGPT-4 is aware of regarding the next version that we are not. Furthermore, GPT-4 has shown exam-taking abilities that outperform those of its predecessor. One of the developers, Siqi Chen, stated on Twitter that GPT-5 will complete its training by December, with OpenAI expecting it to attain AGI.

Whether or not GPT-5 achieves AGI, it should offer major enhancements over GPT-4, which was already a huge advance for ChatGPT. It's impossible to foresee the entire scope of these enhancements, but the chatbot might allow different input modalities and produce faster and more accurate answers. While the possibility of ChatGPT improving and achieving AGI is intriguing, we must equally examine the potential negative implications.

This revelation is expected to ignite a heated argument about whether GPT-5 has genuinely reached AGI, and given the nature of such disputes, it is quite likely that it will be considered to have gained AGI. It implies that with the aid of GPT-5, Generative AI might achieve human-like indistinguishability. Chen further emphasized on Twitter that, while reaching AGI with GPT-5 is not a majority opinion within OpenAI, some people feel it is doable. If Artificial Intelligence achieves AGI, it will have intellectual and task comprehension abilities equivalent to humans.

It is impossible to forecast what these negative consequences would be, just as it is difficult to envision the good consequences of ChatGPT attaining AGI. Despite this ambiguity, there is no need to be concerned about a sci-fi movie scenario in which AI takes over. However, the growth of AI has already prompted worries at Europol, since criminals are exploiting the capabilities of non-AGI versions of ChatGPT for illicit purposes. Before the introduction of GPT-5, we may see an interim version of ChatGPT.


AI Singapore and the Digital and Intelligence Service Sign … – MINDEF Singapore

Senior Minister of State for Defence Mr Heng Chee How officiated the inaugural AI Student Developer Conference at the Lifelong Learning Institute today. Organised by AI Singapore (AISG) and attended by more than 300 participants, the conference allowed attendees to gain insights into Artificial Intelligence (AI) and the AI industry through panel discussions, interactive booths and workshops, as well as explore career opportunities with industry partners. As part of the conference, a Memorandum of Understanding (MOU) between AISG and the Singapore Armed Forces (SAF)'s Digital and Intelligence Service (DIS) was signed.

Delivering the opening address at the conference, Mr Heng said that, "Outside of the defence-specific sector partners, DIS is also enlarging its engagement with the wider technology ecosystem, including engagement with the commercial sector and academia … This MOU is another example of DIS's pursuit in this direction of engagement, augmenting our ongoing efforts to build and sustain a strong and capable workforce and talent pipeline to strengthen and sharpen the SAF's digital cutting edge."

The MOU between AISG and the DIS was signed by Head of LearnAI at AISG, Mr Koo Sengmeng and DIS Chief Digitalisation Officer Military Expert 7 (ME7) Guo Jinghua. Senior Director of AI Governance at AISG, Prof Simon Chesterman and Chief of Digital and Intelligence Service/Director Military Intelligence, Brigadier-General Lee Yi-Jin witnessed the signing of the MOU, which formalises the collaboration in deepening national AI expertise for Singapore's digital defence.

The MOU will further collaboration and strengthen the DIS's capability development in Data Science and AI (DSAI). The DIS will need to keep pace with, and agilely harness, the rapid pace of AI innovation in academia and industry, to complement the strong AI capabilities of the Defence Technology Community. This is crucial for the DIS to better exploit the vast and growing volume of data in the digital domain, and effectively detect and respond to the increasing digital threats facing Singapore and Singaporeans. The DIS will leverage AISG's industry and talent development programmes, including the 100 Experiments (100E) and AI Apprenticeship Programme (AIAP), to expand the DIS's capacity to deploy advanced AI techniques, such as the use of Large Language Models and Reinforcement Learning, and integrate them into operations of the DIS and the SAF.

The DIS will also work with AISG to develop and expand its workforce. Through the introduction of AISG's LearnAI courses, the DIS will expand its course offerings for DIS personnel's professional upskilling. The DIS will also leverage AISG's existing networks of students to sustain the DSAI talent pipeline, while supporting AISG's mandate of growing and developing a national digital workforce. The DIS will enable national talents in AISG's AIAP, who are undergoing AI deep-skilling, to contribute to national defence via their involvement in the various projects supporting the DIS. The DIS will also offer employment opportunities to these talents where suitable. In addition, AISG will share about National Service (NS) and career opportunities in the DIS, such as the Digital Work-Learn Scheme[1], with students from the AISG Student Outreach Programme.

Highlighting the importance of the MOU for Singapore's digital defence, Mr Koo said, "Our partnership with the DIS will ensure that Singapore has a robust and resilient pipeline of AI talents that have knowledge of issues related to national defence and possess the relevant expertise to protect our digital borders and safeguard Singapore. We look forward to working closely with the DIS to collectively deepen the core competencies of our next-generation Singapore Armed Forces to stay ahead of the threats of tomorrow."

ME7 Guo said, "The DIS and AISG are working towards our common goal of strengthening digital capabilities to safeguard Singapore. The effective use of AI is crucial for the SAF's mission success. We need to better reap the dynamic AI innovations in academia and industry, and integrate them into SAF operations. Our partnership with AISG is therefore an important part of our approach to leverage cutting-edge AI innovations. Beyond AI capability development, our partnership with AISG will help grow the DIS digital fighting force to defend Singapore in the digital domain, and contribute to the national AI talent pipeline through various schemes such as the Digital Work-Learn Scheme."

[1] Servicemen under the WLS will serve for four years as Digital Specialists in the SAF, in a combination of full-time National Service and Regular service, developing data science, software development and AI skills through vocational, on-the-job and academic training.

About AI Singapore

AI Singapore (AISG) is a national AI programme launched by the National Research Foundation (NRF), Singapore to anchor deep national capabilities in artificial intelligence (AI) to create social and economic impacts through AI, grow the local talent, build an AI ecosystem, and put Singapore on the world map.

AISG brings together Singapore-based research institutions and the vibrant ecosystem of AI start-ups and companies developing AI products to perform applications-inspired research, grow the knowledge, create the tools, and develop the talent to power Singapore's AI efforts.

AISG is driven by a government-wide partnership comprising NRF, the Smart Nation and Digital Government Office (SNDGO), Economic Development Board (EDB), Infocomm Media Development Authority (IMDA), SGInnovate, and the Integrated Health Information Systems (IHiS).

Details of some of its programmes can be found below:

-100 Experiments (100E)

-AI Apprenticeship Programme (AIAP)

-LearnAI

For more information on AISG and its programmes, please visit: http://www.aisingapore.org

AI Singapore's Social Media Channels:

Facebook: https://www.facebook.com/groups/aisingapore

Instagram: @ai_singapore

LinkedIn: https://www.linkedin.com/company/aisingapore/

Twitter: https://twitter.com/AISingapore

About The DIS

As part of the transformation of the Next Generation SAF, the Digital and Intelligence Service, the fourth Service of the Singapore Armed Forces (SAF) was established in 2022. The DIS sees the consolidation and integration of existing Command, Control, Communications, Computers and Intelligence (C4I) as well as cyber capabilities of the SAF. As a dedicated Service, the DIS will raise, train and sustain digital forces and capabilities to fulfil its mission to defend the peace and security of Singapore from the evolving and increasingly complex threats in the digital domain.

The mission of the DIS is to defend and dominate in the digital domain. As part of an integrated SAF, the DIS will enhance Singapore's security, from peace to war. The DIS plays a critical role in defending Singapore from threats in the digital domain, and allows the SAF to operate better as a networked and integrated force to deal with a wider spectrum of external threats to enhance and safeguard Singapore's peace and sovereignty. The DIS collaborates with partners across the MINDEF, SAF, Whole-of-Government agencies and like-minded partners in academia and industry in defending our nation against threats in the digital domain.

Building a highly-skilled digital workforce is key to the digital defence strategy of the SAF. The DIS continually attracts and develops both military and non-uniformed digital experts to grow the SAF's digital workforce.

The DIS leverages our National Servicemen to develop its digital workforce. Operationally Ready National Servicemen (ORNS) with matching talents and relevant civilian expertise may also express interest to serve in the DIS through the Enhanced Expert Deployment Scheme (EEDS). Full-time National Servicemen (NSFs) with suitable skills are offered the opportunity to participate in DIS-related Work-Learn Schemes (WLS), where they will be able to undergo military training and serve NS while attaining academic credits which will contribute to the eventual completion of a relevant university degree. There are currently two DIS WLS, namely the Digital WLS and Cyber WLS.

For more information on the DIS and its careers, please visit: http://www.mindef.gov.sg/dis

The Digital and Intelligence Service's Social Media Channels:

Facebook: https://www.facebook.com/thesingaporeDIS

Instagram: @thesingaporedis

LinkedIn: https://www.linkedin.com/company/digital-and-intelligence-service

Twitter: @thesingaporeDIS


GPT-4 Passes the Bar Exam: What That Means for Artificial … – Stanford Law School

Codex, the Stanford Center for Legal Informatics, and the legal technology company Casetext recently announced what they called a watershed moment. Research collaborators had deployed GPT-4, the latest-generation Large Language Model (LLM), to take, and pass, the Uniform Bar Exam (UBE). GPT-4 didn't just squeak by. It passed the multiple-choice portion of the exam and both components of the written portion, exceeding not only all prior LLMs' scores, but also the average score of real-life bar exam takers, scoring in the 90th percentile.

Casetext's Chief Innovation Officer and co-founder Pablo Arredondo, JD '05, who is a Codex fellow, collaborated with Codex-affiliated faculty Daniel Katz and Michael Bommarito to study GPT-4's performance on the UBE. In earlier work, Katz and Bommarito found that an LLM released in late 2022 was unable to pass the multiple-choice portion of the UBE. Their recently published paper, "GPT-4 Passes the Bar Exam," quickly caught national attention. Even The Late Show with Stephen Colbert had a bit of comedic fun with the notion of robo-lawyers running late-night TV ads looking for slip-and-fall clients.

However, for Arredondo and his collaborators, this is serious business. While GPT-4 alone isn't sufficient for professional use by lawyers, he says, it is the first large language model smart enough to power professional-grade AI products.

Here Arredondo discusses what this breakthrough in AI means for the legal profession and for the evolution of products like the ones Casetext is developing.

What technological strides account for the huge leap forward from GPT-3 to GPT-4 with regard to its ability to interpret text and its facility with the bar exam?

If you take a broad view, the technological strides behind this new generation of AI began 80 years ago when the first computational models of neurons were created (the McCulloch-Pitts neuron). Recent advances, including GPT-4, have been powered by neural nets, a type of AI that is loosely based on neurons and includes natural language processing. I would be remiss not to point you to the fantastic article by Stanford Professor Chris Manning, director of the Stanford Artificial Intelligence Laboratory. The first few pages provide a fantastic history leading up to the current models.

You say that computational technologies have struggled with natural language processing and complex or domain-specific tasks like those in the law, but with the advancing capabilities of large language models, and GPT-4, you sought to demonstrate the potential in law. Can you talk about language models and how they have improved, specifically for law? If it's a learning model, does that mean that the more this technology is used in the legal profession (or the more it takes the bar exam) the better it becomes and the more useful it is to the legal profession?

Large language models are advancing at a breathtaking rate. One vivid illustration is the result of the study I worked on with law professors and Stanford CodeX fellows Dan Katz and Michael Bommarito. We found that while GPT-3.5 failed the bar, scoring roughly in the bottom 10th percentile, GPT-4 not only passed but approached the 90th percentile. These gains are driven by the scale of the underlying models more than any fine-tuning for law. That is, our experience has been that GPT-4 outperforms smaller models that have been fine-tuned on law. It is also critical from a security standpoint that the general model doesn't retain, much less learn from, the activity and information of attorneys.

What technologies are next and how will they impact the practice of law?

The rate of progress in this area is remarkable. Every day I see or hear about a new version or application. One of the most exciting areas is something called Agentic AI, where the LLMs (large language models) are set up so that they can themselves strategize about how to carry out a task, and then execute on that strategy, evaluating things along the way. For example, you could ask an agent to arrange transportation for a conference and, without any specific prompting or engineering, it would handle getting a flight (checking multiple airlines if need be) and renting a car. You can imagine applying this to substantive legal tasks (i.e., first I will gather supporting testimony from a deposition, then look through the discovery responses to find further support, etc.).
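
To make that loop concrete, here is a minimal sketch of the plan-act-evaluate structure behind "agentic" setups. This is an illustration of the general pattern only, not Casetext's or any vendor's actual system; the call_llm function is stubbed out so the snippet runs without any API, and the single "search" tool is hypothetical.

```python
from typing import Callable, Dict, List

def call_llm(prompt: str) -> str:
    """Stand-in for a real language-model call (stubbed so this sketch runs offline)."""
    return f"[model response to: {prompt[:40]}...]"

def run_agent(goal: str, tools: Dict[str, Callable[[str], str]], max_steps: int = 3) -> List[str]:
    """Minimal plan -> act -> evaluate loop in the spirit of 'agentic AI'."""
    notes: List[str] = []
    plan = call_llm(f"Break this goal into steps: {goal}")
    for step in range(max_steps):
        # Ask the model what to do next, given the plan and what has been learned so far.
        action = call_llm(f"Given plan {plan!r} and notes {notes}, what is step {step + 1}?")
        # Carry out the step with a tool; this sketch only has a single 'search' tool.
        observation = tools["search"](action)
        notes.append(observation)
        # Let the model judge whether the goal has been met before continuing.
        verdict = call_llm(f"Goal: {goal}. Notes so far: {notes}. Is the goal met?")
        if "yes" in verdict.lower():
            break
    return notes

# Hypothetical tool: in practice this might query a deposition database or a flight API.
result = run_agent("find supporting testimony for claim X",
                   tools={"search": lambda q: f"documents matching {q!r}"})
print(result)
```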

Another area of growth is multi-modal, where you go beyond text and fold in things like vision. This should enable things like an AI that can comprehend/describe patent figures or compare written testimony with video evidence.

Big law firms have certain advantages and I expect that they would want to maintain those advantages with this sort of evolutionary/learning technology. Do you expect AI to level the field?

Technology like this will definitely level the playing field; indeed, it already is. I expect this technology to at once level and elevate the profession.

So, AI-powered technology such as LLMs can help to close the access to justice gap?

Absolutely. In fact, this might be the most important thing LLMs do in the field of law. The first rule of the Federal Rules of Civil Procedure exhorts the "just, speedy and inexpensive" resolution of matters. But if you asked most people what three words come to mind when they think about the legal system, "speedy" and "inexpensive" are unlikely to be the most common responses. By making attorneys much more efficient, LLMs can help attorneys increase access to justice by empowering them to serve more clients.

We've read about AI's double-edged sword. Do you have any big concerns? Are we getting close to a Robocop moment?

My view, and the view of Casetext, is that this technology, as powerful as it is, still requires attorney oversight. It is not a robot lawyer, but rather a very powerful tool that enables lawyers to better represent their clients. I think it is important to distinguish between the near term and the long term questions in debates about AI.

The most dramatic commentary you hear (e.g., AI will lead to utopia, AI will lead to human extinction) is about artificial general intelligence (AGI), which most believe to be decades away and not achievable simply by scaling up existing methods. The near term discussion, about how to use the current technology responsibly, is generally more measured and where I think the legal profession should be focused right now.

At a recent workshop we held at CodeX's FutureLaw conference, Professor Larry Lessig raised several near-term concerns around issues like control and access. Law firm managing partners have asked us what this means for associate training; how do you shape the next generation of attorneys in a world where a lot of attorney work can be delegated to AI? These kinds of questions, more than the apocalyptic prophecies, are what occupy my thinking. That said, I am glad we have some folks focused on the longer-term implications.

Pablo Arredondo is a Fellow at Codex, the Stanford Center for Legal Informatics, and the co-founder of Casetext, a legal AI company. Casetext's CoCounsel platform, powered by GPT-4, assists attorneys in document review, legal research memos, deposition preparation, and contract analysis, among other tasks. Arredondo's work at Codex focuses on civil litigation, with an emphasis on how litigators access and assemble the law. He is a graduate of Stanford Law School, JD '05, and of the University of California at Berkeley.


Is artificial intelligence approaching science fiction? – The Trail – The Puget Sound Trail

By Veronica Brinkley

As AI models have advanced, it has become increasingly evident that they will play an important role in the future of humankind. Current models have a range of capabilities that are supposedly designed to aid humans. For instance, photo-generating models like MidJourney and DALL-E have the ability to create images in a multitude of styles, based on the user's prompt. These programs' outputs have become so accurate that, to the average viewer, they are often indiscernible from real photographs.

The language model ChatGPT is garnering the most media attention. ChatGPT is advancing rapidly; it's already on its fourth version. The lab behind it, OpenAI, has stated that the company is building towards an ambitious goal: artificial general intelligence (AGI), their term for an AI that is as smart as, if not smarter than, the average human.

These developments have raised alarms in the tech community. Recently, over 1,000 signatories, including major tech executives such as Elon Musk, professors, and scientists, signed an open letter directed toward OpenAI, requesting an immediate six-month pause in artificial intelligence development. The main concern posited by the letter is that "AI systems with human-competitive intelligence can pose profound risks to society and humanity," and therefore necessitate governmental regulation. The letter went on to say that AI development labs are "locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict, or reliably control."

OpenAI, for its part, says its technology will improve society. According to their website, advancements in AI could help us "elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility." To me, this just sounds like a lot of buzzwords, and it doesn't really say much about what they intend to do with their product.

Now, if you're like me, you're probably thinking, "I swear I've seen this in a movie, and it did not end well." It's scary to see the beginning of the march to machine intelligence. Science fiction centered around AI previously felt abstract, but now seems potentially accurate. My personal favorite example is the dystopian sci-fi video game Detroit: Become Human.

Quantic Dream's Detroit: Become Human is set in the not-so-distant future of 2038, in an America where highly developed androids have become commonplace. The economy is completely dependent on them as the means of production. However, the androids begin to gain sentience and deviate from their programming. This sparks a civil rights movement of deviant androids. A power struggle ensues, as humans refuse to accept androids as autonomous beings. While current AI is far from this reality, it is a chilling projection of what could be in store. The game itself directly comments on this reality in its opening lines: "Remember, this isn't just a game, it's our future."

Drawing parallels between the story of Detroit: Become Human and our current social trajectory is hardly difficult. CyberLife, the AI research and development firm in the game's setting, represents a potential future for OpenAI. In the game, CyberLife has become the standard for androids and therefore holds an immensely disproportionate amount of power over the economy and ruling bodies. Perhaps we aren't as far away from this reality as we think. In order to prevent such a future, industries need to change to accommodate AI, a technology that is only growing faster, smarter and more powerful. This is where the government must step in. It must regulate the creation of AI very consciously.

In a perfect world, state leaders would consciously and unerringly regulate the creation of AI, acting free from considerations of profit and power. However, as we know, the government doesn't have a great history of neutrality or altruism. And few in Washington even understand technology; just watch the Congressional hearings on Facebook or TikTok for examples. Washington has already failed to stay ahead of tech decisions that affect millions. OpenAI has been seemingly honest about these concerns, stating, "we hope for a global conversation about three key questions: how to govern these systems, how to fairly distribute the benefits they generate, and how to fairly share access." While it's great that they hope for this outcome, I do not believe that hoping is enough. Improper handling and regulation could have catastrophic effects on society, as seen in the manipulation and misuse of current social media. Detroit: Become Human may not be far off.

As college students who are just beginning to enter the job market, these advancements could easily affect us in the near future. Entry-level positions might be displaced by AI and it may become increasingly difficult for us to find meaningful work. In terms of possible issues with the use of this technology, this is just the tip of the iceberg.

So many of us grew up watching events like this happen in the movies and on TV, and it is hard to believe what was once science fiction is beginning to exist. It's also alarming to watch it unfold, knowing it could impact our futures. But we are not helpless. We can stay informed about technological advancements. We can slow their deployment until the ramifications are understood. We can apply the lessons learned from fictional media and the real-life corruption of social media. And we must consider enacting accompanying regulations on tech industries. We should be mindful that technology is not always a wonder, especially in the hands of mere mortals.


Unpacking AI: "an exponential disruption" with Kate Crawford: podcast and transcript – MSNBC


You might be feeling that artificial intelligence is starting to seem a bit like magic. Our guest this week points out that AI, once the subject of science fiction, has seen the biggest rise of any consumer technology in history and has outpaced the uptake of TikTok, Instagram and Facebook. As we see AI becoming more of an everyday tool, students are even using chatbots like ChatGPT to write papers. While automating certain tasks can help with productivity, we're starting to see more examples of the dark side of the technology. How close are we to genuine external intelligence? Kate Crawford is an AI expert, research professor at USC Annenberg, honorary professor at the University of Sydney and senior principal researcher at Microsoft Research Lab in New York City. She's also the author of "Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence." Crawford joins WITHpod to discuss the social and political implications of AI, the exploited labor behind its growth, why she says it's neither artificial nor intelligent, climate change concerns, the need for regulation and more.

Note: This is a rough transcript; please excuse any typos.

Kate Crawford: We could turn to OpenAI's own prediction here, which is that they say 80 percent of jobs are going to be automated in some way by these systems. That is a staggering prediction.

Goldman Sachs just released a report this month saying 300 jobs in the U.S. are looking at, you know, very serious forms of automation impacting what they do from day-to-day. So, I mean, it's staggering when you start to look at these numbers, right?

So, the thing that I think is interesting is to think about this historically, right? We could think about the Industrial Revolution. It takes a while to build factory machinery and train people on how things work.

We could think about the transformations that happened in the sort of early days of the personal computer. Again, a slow and gradual rollout as people began to incorporate this technology. The opposite is happening here.

Chris Hayes: Hello and welcome to "Why Is This Happening?" with me, your host, Chris Hayes. There's a famous Arthur C. Clarke quote that I think about all the time. He was a science fiction writer and futurist and he wrote a book called "Profiles of the Future: An Inquiry into the Limits of the Possible" and this quote, which you probably have caught at one point or another, is that, "Any sufficiently advanced technology is indistinguishable from magic."

And there's something profound about that. I remember the first time that, like, I saw Steve Jobs do the iPhone presentation. And then, the first one I held in my hand, it really did feel like magic. It felt like a thing that formerly wasn't possible, that I knew what the sort of laws of physics and technology were and this thing came along and it seemed to break them, so it felt like magic.

I remember feeling that way the first time that I really started to get on the graphical version of the internet. Even before that when I got on the first version of the internet. Like, oh, I have a question about a thing. You know, this baseball player Rod Carew, what did he hit in his rookie season? Right away, right? Magic. Magically, it appears in front of me.

And I think a lot of people have been having the feeling about AI recently. There's a bunch of new, sort of public-facing, machine learning, large language model pieces of software. One is ChatGPT, which I've been messing around with.

There's others for images. One called Midjourney and a whole bunch of others. And you've probably seen the coverage of this because it seems like in the last two months it's just gone from, you know, nowhere and people talk about AI and the algorithm machine learning tool, like, holy smokes.

And I got to say, like, we're going to get into the ins and outs of this today. But at the sort of does it feel like magic level, like, it definitely feels like magic to me.

I went to ChatGPT. I was messing around with it. I told it to write a standup comedy routine in the first person of Ulysses S. Grant about the Siege of Vicksburg using, like, specific details from the battle and it came back with, like, you know, I had to hide my soldiers the way I hide the whiskey from my wife, which is like, you know, he notoriously had a drinking problem, although he tended not to drink around his wife. So, it was, like, slightly off that way.

But it was like a perfectly good standup routine about the Siege of Vicksburg in the first person of Ulysses S. Grant, and it was done in five seconds. Obviously, we're going to get into all sorts of, you know, I don't think it's going to be like taking over for us, but the reason it felt like magic to me is I know enough about computers and the way they work that I can think through like when my iPhone's doing something, when I'm swiping, I can model what's happening.

Like, there's a bunch of sensors in the actual phone. Those sensors have a set of programming instructions to receive the information of a swipe and then compare it against a set of actions and figure out which one is closest to and then do whatever the command is.

And, you know, I've programmed before, and I can reason out what it's doing. I can reason out what, like, my car is doing. I understand basically how an internal combustion engine works and, you know, the pistons. And I just have no idea what the hell is happening inside this thing that when I told it to do this, it came back with something that seemed like the product of human intelligence. I know it's not. We're going to get into all of it, but it's like it does seem to me like a real step change.

You know, a lot of people feel that way. Now, it so happens that this is something that I studied as an undergraduate and thought a lot about. And there's a long literature about artificial intelligence and human intelligence and we're going to get into all that today.

But because this is so front-of-mind, because this is such an area of interest for me, I'm really delighted to have on today's program Kate Crawford. This is Kate Crawford's life's work. She's an artificial intelligence expert. She studies the social and political implications of AI.

She's a Research Professor at USC Annenberg, Honorary Professor at University of Sydney, Senior Principal Researcher at Microsoft Research Lab in New York City.

She's the author of "Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence." A lot of the things that I think have exploded onto public consciousness in the last few months have been the subject of work that she's been thinking about and doing for a very long time.

So, Kate, it's great to have you in the program.

Kate Crawford: Thanks for having me, Chris.

Chris Hayes: Does it feel like magic to you?

Kate Crawford: I'll be honest. There is definitely the patina of magic. There's that feeling of how is this happening. And to some degree, you know, I've been taken aback at the speed by which we've gotten here. I think anybody who's been working in this field for a long time will tell you the same thing.

Chris Hayes: Oh, really? This feels like a step change to you --

Kate Crawford: Oh, yeah.

Chris Hayes: -- like we're in a new --

Kate Crawford: Yeah. This feels like an inflection point, I would say, even bigger than a step function change. We're looking at --

Chris Hayes: Right.

Kate Crawford: -- a shift that I think is pretty profound and, you know, a lot of people use the iPhone example or the internet example. I like to go even further back. I like to think about the invention of artificial perspective, so we can go back into the 1400s where you had Alberti outline a completely different way of visualizing space, which completely transformed art and architecture and how we understood the world that we lived in.

You know, it's been described as a technology that shifted the mental and material worlds of what it is to be alive. And this is one of those moments where it feels like a perspectival shift that can feel magic. But I can assure you, it is not magic and that's --

Chris Hayes: No, I know --

Kate Crawford: -- where it gets interesting.

Chris Hayes: OK. I know it's not. I'm just being clear. Obviously, I know it's not magic. And also, I actually think the Arthur C. Clarke quote is interesting because there's two different meanings, right?

So, it feels like magic in the sense of, like, things that are genuine magic, right, that in a fantastical universe, they're miracles, right? Or it feels like magic in that, like, when you're around an incredible magician, you know that the laws of physics haven't been suspended but it sure as heck feels like it, right?

Kate Crawford: Oh, yeah.

Chris Hayes: And that's how this feels to me. Like, I understand that this is just, you know, a probabilistic large language model, and we'll get into how this is working. So, I get that.

But it sure as heck on the outcome line, you know, feels like something new. The perspectival shift is a really interesting idea. Why does that analogy draw you?

Kate Crawford: Well, let's think about these moments of seeming magic, right? So, there is just decades of examples of this experience. And in fact, we could go all the way back to the man who invented the first chatbot. This is Joseph Weizenbaum. And in the 1960s when he's at MIT, he creates a system called ELIZA. And if you're a person of a certain age, you may remember when ELIZA came out. It's really simple, kind of almost set of scripts that will ask you questions and elicit responses and essentially have a conversation with you.

So, writing in (ph) the 1970s, Weizenbaum was shocked that people were so easily taken in by this system. In fact, he uses a fantastic phrase around this idea that there is this powerful delusional thinking that is induced in otherwise normal people the minute you put them in front of a chatbot.

We assume that this is a form of intelligence. We assume that the system knows more than it does. And, you know, the fact that he captured that in this fantastic book called "Computer Power and Human Reason" back in 1976, I think, shows that the phenomenon hasn't changed: when we open up ChatGPT, you really can get that sense of, OK, this is a system that really feels like I'm talking to, if not a person, at least a highly-evolved form of computational intelligence.

And I think what's interesting about this perspectival shift is that, honestly, this is a set of technologies that have been pretty well known and understood for some time. The moment of change was the minute that OpenAI put it into a chat box and said, hey, you can have a conversation with a large language model.

That's the moment people started to say this could change every workplace, particularly white-collar workplaces. This could change the whole way that we get information. This could change the way we understand the world because this system is giving you confident answers that can feel extremely plausible even when they make mistakes, which they--

Chris Hayes: Yes.

Kate Crawford: -- frequently do.

Chris Hayes: So, I mean, part of that, too, is like, you know, humans see faces in all kinds of places where there aren't faces, right? We project inner lives onto our pets. You know, we have this drive to mentally model other consciousnesses, partly because of the intensely inescapable social means by which we evolved.

So, part of it is in the same way that magicians take advantage of certain parts of our perceptual apparatus, right, like we're easily distracted by, like, loud motions, right? It's doing that here with our desire to impute consciousness in the same way that, like, we have a whole story about what's going on in a dog's mind when it gets out into the park.

Kate Crawford: Exactly.

Chris Hayes: But, like, I'm not sure it's correct.

Kate Crawford: That is it. And I actually think the magician's trick analogy is the right one here because it operates on two levels. First, we're contributing half of the magic by bringing those, you know, anthropomorphic assumptions into the room and by playing along.

We are literally training the AI model with our responses. So, when it says something and we say, oh, that's great. Thanks. Could I have some more? That's a signal to the system this was the correct answer.

If you say, oh, that doesn't seem to match up, then it takes that as a negative --

Chris Hayes: Right.

Kate Crawford: -- signal. So, we are literally training these systems with our own intelligence. But there's another way we could think about this magician's trick because while this is happening and while our focus is on, oh, exciting LLMs, there's a whole other set of political and social questions that I think we need to be asking that often get deemphasized.

Chris Hayes: There's a few things here. There's the tech, there's the kind of philosophy, and then there's the, like, political and social implication.

So, just start on the tech. Let's go back to the chatbot you're talking about before, ELIZA. So, there's a bunch of things happening here in a chatbot like ChatGPT that are worth breaking down.

The first is just understanding natural language and, you know, I did computer science as an undergraduate and philosophy and philosophy of mind and some linguistics when I was an undergraduate 25 years ago. And at that time, like, natural language processing was a huge unsolved problem.

You know, we all watched "Star Trek". Computer, give me this. And it's like getting that computer understand a simple sentence is actually, like, wildly complex as a computational problem. We all take it for granted, but it seems like even before you get into what it's giving you back, I mean, now, it's embedded in our lives, Siri, all this stuff.

Like how did we crack that? Is there a layperson's way to explain how we cracked natural language processing?

Kate Crawford: I love the story of the history of how we got here because it gives you a real sense of how that problem has been, if not cracked, certainly seriously advanced. So, we could go back to the sort of prehistory of AI. So, I think sort of 1950s, 1960s.

The idea of artificial intelligence then was something called knowledge-based AI or an expert systems approach. The idea of that was that to get a computer to understand language, you had to teach it to understand linguistic principles, high-level concepts to effectively understand English like the way you might teach a child to understand English by thinking about the principles and thinking about, you know, here's why we use this sort of phrasing, et cetera.

Then something happens in around the 1970s and early 1980s, a new lab is created at IBM, the continuous-speech recognition lab, the CSR lab. And this lab is fascinating because a lot of key figures in AI are there, including Robert Mercer who would later become famous as the, shall we say, very backroom-operator billionaire who funded people like Bannon and the Trump campaign.

Chris Hayes: Yup.

Kate Crawford: Yes, and certainly, the Brexit campaign.

Chris Hayes: Yup.

Kate Crawford: So, he was one of the members of this lab that was headed by Professor Jelinek, and they had this idea. They said instead of teaching computers to understand, let's just teach them to do pattern recognition at scale.

Essentially, we could think about this as the statistical turn, the moment where it was less about principles and more about patterns. So, how do you do it? To teach that kind of probabilistic pattern recognition, you just need data. You need lots and lots and lots of linguistic data, just examples.

And back then, even in the, you know, 1980s, it was hard to get a corpus of data big enough to train a model. They tried everything. They tried patents. They tried, you know, IBM technical manuals, which, funnily enough, didn't sound like human speech. They tried children's books.

And they didn't get a corpus that was big enough until IBM was actually taken to court. This was like a big antitrust case where it went for years. They had, like, a thousand witnesses called. And in this case, this produces the corpus that they used to train their model. Like honestly, you couldn't make this stuff up. It's wild (ph).

Chris Hayes: Is that right?

Kate Crawford: Oh, absolutely. So, they have a breakthrough which is that it is all about scale. And so interestingly --

Chris Hayes: Right.

Kate Crawford: -- Mercer has this line, you know, which is fantastic. There's a historian of science, Tsao-Cheng Lee (ph) who's written about this moment. But, you know, Mercer says, it was one of the rare moments of government being useful despite itself. That was how --

Chris Hayes: Boo.

Kate Crawford: -- he justified this case, right?

So, we see this changed towards basically it's all about data. So, then we have the years of the internet. Think about, you know, the early 2000s. Everyone's doing blogs, social media appears, and this is just grist to the mill. You can scrape and scrape and scrape and create larger and larger training data sets.

So, that's basically what they call these foundational data sets, which are used to see these patterns. So, effectively, LLMs are advanced pattern recognizers that do not understand language, but they are looking for, essentially, patterns and relationships between the text that they've been trained on, and they use this to essentially predict the next word in a sentence. So, that's what they're aimed to do.

Chris Hayes: This statistical turn is such an important conceptual point. I just want to stay on it because I think this, like, really helped. And this turn happened before I was sort of interested in natural language processing. But when we were talking about natural language processing, we're still talking in this old model, right?

Well, you teach kids these rules, right, and you teach them or if you learn a second language, like, you learn verb conjugation, right? And you're running them through these rules, like, OK, that's a first person. There's this category called first person. There's a category called verb then a conjugate. There's category of conjugation. One plus one plus one equals three. That gives me, you know, yo voy (ph). OK.

So, that's this sort of principled, rule-based way of sort of understanding language and natural language processing. So, the statistical turn says throw all that out. Let's just say if someone says thanks what's likely to be the next word?

And you see this in the Gmail auto complete.

Kate Crawford: Yup.

Chris Hayes: When you say thanks and it will light up so much. It's just that thanks so much goes together a lot. So, when you put in thanks, it's like pretty good chance it's going to be so much.

And that general principle of if you run enough data and you get enough probabilistic connections between this and that word at scale, is how you get Ulysses S. Grant doing a joke about Vicksburg and hiding his troops the way he hides whiskey from his wife.
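To make the "thanks so much" intuition concrete, here is a minimal, purely illustrative sketch of a count-based next-word model. The toy corpus and the resulting probabilities are invented for this example; real large language models learn from vastly larger data and far richer patterns than simple word-pair counts.

```python
from collections import Counter, defaultdict

# A tiny toy corpus standing in for the web-scale text real models train on.
corpus = [
    "thanks so much for your help",
    "thanks so much",
    "thanks a lot for coming",
    "thanks again for everything",
]

# Count which word follows each word (a bigram model).
following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        following[current_word][next_word] += 1

# Given "thanks", the most frequent continuation wins.
candidates = following["thanks"]
total = sum(candidates.values())
for word, count in candidates.most_common():
    print(f"P({word!r} | 'thanks') = {count / total:.2f}")
# With this corpus, 'so' is the most probable next word, the same
# statistical effect behind Gmail suggesting "so much".
```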

Kate Crawford: Exactly. And you could think about all of the words in that joke as being in a kind of big vector space or word cloud where you'd have Ulysses S. Grant, you'd have whiskey, you'd have soldiers, and you can kind of think about the ways in which they would be related.

And the funny thing is trying to write jokes with GPT, some of the time, it's really good and some of the time, it's just not funny at all because it's not --

Chris Hayes: Right. Sure.

Kate Crawford: -- coming from a basis of understanding humor or language.

Chris Hayes: No.

Kate Crawford: It's essentially doing this very large word association game.
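A hedged sketch of that vector-space idea follows. The three-dimensional vectors below are invented purely for illustration (real word embeddings are learned from data and have hundreds of dimensions), but cosine similarity is a standard way to measure how related two word vectors are.

```python
import math

# Hypothetical 3-dimensional "embeddings"; real models learn hundreds of
# dimensions from data rather than having them hand-picked like this.
vectors = {
    "grant":    [0.9, 0.8, 0.1],
    "whiskey":  [0.7, 0.9, 0.2],
    "soldiers": [0.8, 0.6, 0.3],
    "banana":   [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Words that show up in similar contexts end up close together, so
# "grant" and "whiskey" score higher here than "grant" and "banana".
print(cosine_similarity(vectors["grant"], vectors["whiskey"]))
print(cosine_similarity(vectors["grant"], vectors["banana"]))
```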

Chris Hayes: Right. OK. So, I understand this principle. Like I get it. It's a probabilistic model that is trained on a ton of data and because it's trained on so much data and because it's using a staggering amount of processing power.

Kate Crawford: Oh, yes.

Chris Hayes: Like a genuinely crazy and, like, expensive and carbon intensive. So like, it's like running a car like a huge Mack truck, right?

Kate Crawford: Oh, yeah.

Chris Hayes: It's working its butt off to give me this, my dumb little Vicksburg joke. So, like, I get that intuitively, but maybe, like, if we could just go to the philosophy place, it's like, OK, it doesn't understand. But then we're at this question of, like, all right, well what does understanding mean, right?

Kate Crawford: Right.

Chris Hayes: And this is where we start to get into this sort of philosophical AI question. And there's a long line here. There's Alan Turing's Turing test, which means we should explain that to folks who don't know it. There's John Searle's Chinese room example, which we should also probably take a second on.

But basically, for a long time, this question of, like, what does understanding mean? And if you encountered an intelligence that acted as if it were intelligent, at what point would you get to say it's intelligent without peering into what it's doing on the inside to produce the thing that makes it seem intelligent.

And the Turing test is Alan Turing, the brilliant British mathematician, basically saying: if you can interact with a chatbot that fools you, that's intelligence. And it just feels like, OK, well, ChatGPT, I think, is passing it. It feels like it passes the Turing test at least in some circumstances, yes?

More:

Unpacking AI: "an exponential disruption" with Kate Crawford: podcast and transcript - MSNBC

Is ‘Generative’ AI the Way of the Future? – Northeastern University

Ever since the 20th century's earliest theories of artificial intelligence set the world on an apparently irreversible track toward the technology, the great promise of AI, one that's been used to justify that march forward, is that it can help usher in social transformation and lead to human betterment.

With the arrival of so-called generative AI, such as OpenAI's endlessly amusing and problem-riddled ChatGPT, the decades-long slow roll of AI advancement has felt more like a quantum leap forward. That perceived jump has some experts worried about the consequences of moving too quickly toward a world in which machine intelligence, they say, could become an all-powerful, humanity-destroying force à la The Terminator.

But Northeastern experts, including Usama Fayyad, executive director for the Institute for Experiential Artificial Intelligence, maintain that those concerns don't reflect reality. That, in fact, AI is being integrated in ways that promote and necessitate human involvement, what experts have coined "human-in-the-loop."

On Tuesday, April 25, Northeastern will host a symposium of AI experts to discuss a range of topics related to the pace of AI development, and how progress is reshaping the workplace, education, health care and many other sectors. Northeastern Global News sat down with Fayyad to learn more about what next week's conference will take up; the upside of generative AI; as well as broader developments in the space. The conversation has been edited for brevity and clarity.

Generative AI refers to the kind of AI that can, quite simply, generate outputs. Those outputs could be in the form of text like you see in what we call the large language models, such as ChatGPT (a chatbot on top of a large language model), or images, etc. If you are training [the AI] on text, text is what you will get out of it. If you are training it on images, you get images, or modifications of images, out of it. If you are training it on sounds or music, you get music out of it. If you train it on programming code, you get programs out, and so on.

It's also called generative AI because the algorithms have the ability to generate examples on their own. It's part of their training. Researchers would do things like have the algorithm challenge itself through generative adversarial networks, or algorithms that generate adversarial examples that could confuse the system to help strengthen its training. But since their development, researchers quickly realized that they needed human intervention. So most of these systems, including ChatGPT, actually use and require human intervention. Human beings facilitate a lot of these challenges as part of the training through something called reinforcement learning, a machine learning technique designed to basically improve the system's performance.
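As a rough, hedged illustration of the human-in-the-loop training signal described above: human raters rank candidate answers, and those rankings become the data a reward model is trained on. Everything in the sketch below (the example prompt, the stand-in scoring function, the hinge-style loss) is invented for illustration; it is not any vendor's actual pipeline, which trains a neural reward model on very large numbers of ranked comparisons.

```python
# Illustrative sketch of reinforcement learning from human feedback (RLHF)
# preference data. All names and values here are invented for illustration.

preference_data = [
    {
        "prompt": "Explain photosynthesis simply.",
        "preferred": "Plants use sunlight, water and carbon dioxide to make sugar and oxygen.",
        "rejected": "Photosynthesis is when plants eat sunlight for dinner.",
    },
]

def toy_reward(answer: str) -> float:
    # Stand-in scoring function; a real reward model is a trained network,
    # not a word count.
    return float(len(answer.split()))

def preference_loss(preferred: str, rejected: str) -> float:
    # Training would adjust the reward model so the human-preferred answer
    # scores higher than the rejected one; here we only compute the gap.
    margin = toy_reward(preferred) - toy_reward(rejected)
    return max(0.0, 1.0 - margin)  # hinge-style loss, for illustration only

for pair in preference_data:
    print(preference_loss(pair["preferred"], pair["rejected"]))
```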

We are seeing it applied in education, in higher education in particular. Higher education has taken note, including Northeastern, in a very big way, of the fact that these technologies have challenged the way we conduct, for example, standardized testing. Educators have realized that this is just another tool. At Northeastern we have many examples that we will cover in this upcoming workshop of people using it in the classroom. Be it in the College of Arts, Media and Design for things like [Salvador] Dalí and Lensa AI for images; or be it in writing classes, English classes, or in engineering.

Just like we transitioned from the slide rule to the calculator, to the computer and then the whole web on your mobile phone, this is another tool, and the proper way to train our students to be ready for the new world is to figure out ways to utilize this technology as a tool.

It's too early to see real-world applications at large scale. The technology is too new. But there are estimates that anywhere from 50-80% (I'm more in the 80% camp) of the tasks done by a knowledge worker can be accelerated by this technology. Not automated, accelerated. If you're a lawyer and drafting an agreement, you can have a first draft customized very quickly; but then you have to go in and edit or make changes. If you're a programmer, you can turn out an initial program. But it typically won't work well; it will have errors; it's not customized to the target. Again, a human being, provided they understand what they're doing, can go in and modify it, and save themselves 50-80% of the effort.

It's acceleration, not automation, because we know the technology can hallucinate in horrible ways, in fact. It can make up stuff; it can try to defend points of view that you ask it to defend; you can make it lie, and you can lie to it and have it believe you.

They call this specific class of technology stochastic parrots, meaning parrots that have, let's say, random variation. And I like the term parrots because it correctly describes the fact that they don't understand what they're saying. So they say stuff, and the stuff may sound eloquent, or fluid. That's one of the big points that we try to make: somehow we have learned in society to associate intelligence with eloquence and fluidity, basically someone who says things nicely. But in reality these algorithms are far from intelligent; they are basically doing autocomplete; they are repeating things they've seen before, and often they are repeated incorrectly.

Why do I say all of this? Because it means you have a human-in-the-loop needed in doing this work, because you need to check all of this work. You remove a lot of the repetitive monotonous work, that's great. You can accelerate it, that's productive. You now can spend your time adding value instead of repeating the boring tasks. All of that I consider positive.

I like to use accounting as a good analogy. What did accounting look like 60-70 years ago? Well, you had to deal with these big ledgers; you had to have nice handwriting; you had to have good addition skills in your head; you had to manually verify numbers and go over sums and apply ratios. Guess what? None of those tasks, none, zero, are relevant today. Now, have we replaced accountants because we've now replaced everything they used to do with something that is faster, better, cheaper, repeatable? No. We actually have more accountants today than in the history of humanity.

What we're doing with this workshop is we're trying to cover the three areas that matter. What is the impact of ChatGPT and generative AI in the classroom, and how should we use it? We bring in folks who are doing this work at Northeastern to provide examples in one panel.

Second, how is the nature of work changing because of these technologies? That will be addressed during another panel where we think about different business applications. We will use the law and health care as the two running examples here.

The third panel is all about responsible use. How does one look out for the ethical traps, and how does one use this technology properly? We start the whole workshop by having one of our faculty members give an overview of what this technology is to help demystify the black box, if you will.

The idea, basically, is to show that not only are we (Northeastern) aware of the technological developments taking place, but that we have some of the top experts in the world leading the way. And we are already using this stuff in the classroom as of last semester. Additionally, we want to communicate that we're here and ready to work with companies, with organizations to learn ways to best utilize this technology, and to do so properly and responsibly.

There's plenty of evidence now known that ChatGPT has a human-in-the-loop component. Sometimes humans are answering questions, especially when the algorithm gets in trouble. They review the answers and intervene. By the way, this is run-of-the-mill stuff for even Google Search engine. Many people don't know that when they use the Google Search engine, the MLR, or the machine learning relevance algorithm that decides which page is relevant to which query, gets retrained three or four times a day based primarily on human editorial input. There's a lot of stuff that an algorithm cannot capture, that the stochastic parrot will never understand.

Those concerns are focusing on the wrong things. Let me say a few things. We did go through a bit of a phase transition around 2015 or 2016 with these kinds of technologies. Take handwriting recognition, for example. It had jumps over the years, but it took about 15 years to get there, with many revisions along the way. Speech recognition: the same thing. It took a long time, then it started accelerating; but it still took some time.

With these large language models, like reading comprehension and language compilation, we see major jumps that happened with the development of these large language models that are trained on these large bodies of literature or text. And by the way, what is not talked about a lot is that OpenAI had to spend a lot of money curating that text; making sure it's balanced. If you train a large language model on two documents that have the same content but two different outcomes, how does the algorithm know which one is right? It doesn't. Either a human has to tell it, or it basically defaults to saying, "Whatever I see more frequently must be right." That creates fertile ground for misinformation.

Now, to answer your question about this proposed moratorium. In my mind, it's a little bit silly in its motivations. Many of the proponents of this come from a camp where they believe we're at the risk of an artificial general intelligence; that is very far from true. We're very, very far from even getting close to that. Again, these algorithms don't know what they are doing. Now, we are in this risky zone of misusing it. There was a recent example from Belgium where someone committed suicide after six months of talking to a chatbot that, in the end, was encouraging him to do it. So there are a lot of dangers that we need to contend with. We know there are issues. However, stopping isn't going to make any difference. In fact, if people agreed to stop, only the good actors will; the bad actors continue on. What we need to start to do, again, is emphasize the fact that fluency, eloquence is not intelligence. This technology has limitations; let's demystify them. Let's put it to good use so we can realize what the bad uses are. That way we can learn how they should be controlled.


See more here:

Is 'Generative' AI the Way of the Future? - Northeastern University

AI Dangers Viewed Through the Perspective of Don’t Look Up – BeInCrypto

BeInCrypto explores the potential dangers of Artificial General Intelligence (AGI) by drawing comparisons with the film Don't Look Up. Just as the movie highlights society's apathy towards an impending catastrophe, we explore how similar attitudes could threaten our future as AGI develops.

We examine the chilling parallels and discuss the importance of raising awareness, fostering ethical debates, and taking action to ensure AGI's responsible development.

Don't Look Up paints a chilling scenario: experts struggle to warn the world about an impending disaster while society remains apathetic. This cinematic metaphor mirrors the current discourse on Artificial General Intelligence (AGI).

With AGI risks flying under the radar, many people are questioning why society isn't taking the matter more seriously.

A primary concern in both situations is the lack of awareness and urgency. In the film, the approaching comet threatens humanity, yet the world remains unfazed. Similarly, AGI advancements could lead to disastrous consequences, but the public remains largely uninformed and disengaged.

The film satirizes society's tendency to ignore existential threats. AGI's dangers parallel this issue. Despite advancements, most people remain unaware of AGI's potential risks, illustrating a broader cultural complacency. The media's role in this complacency is also significant, with sensationalized stories often overshadowing the more complex nuances of AGI's implications.

A mix of factors contributes to this collective apathy. Misunderstanding the complexities of AGI, coupled with a fascination for AI's potential benefits, creates a skewed perception that downplays the potential hazards. Additionally, the entertainment industry's portrayal of AI may desensitize the public to the more sobering implications of AGI advancement.

As AI technology evolves, reaching AGI Singularity, where machines surpass human intelligence, becomes increasingly likely. This watershed moment brings with it a host of risks and benefits, adding urgency to the conversation.

AGI has the potential to revolutionize industries, enhance scientific research, and solve complex global challenges. From climate change to disease eradication, AGI offers tantalizing possibilities.

AGI Singularity may also unleash unintended consequences, as machines with superhuman intelligence could pursue goals misaligned with human values. This disparity underscores the importance of understanding and managing AGIs risks.

Much like the comet in Don't Look Up, AGI's risks carry worldwide implications. These concerns necessitate deeper conversations about potential dangers and ethical considerations.

AGI could inadvertently cause harm if its goals don't align with human values. Despite our best intentions, the fallout might be irreversible, stressing the need for proactive discussions and precautions. Examples include the misuse of AGI in surveillance or autonomous weapons, which could have dire consequences on personal privacy and global stability.

As nations race to develop AGI, the urgency to outpace competitors may overshadow ethical and safety considerations. The race for AGI superiority could lead to hasty, ill-conceived deployments with disastrous consequences. Cooperation and dialogue between countries are crucial to preventing a destabilizing arms race.

While AGI promises vast improvements, it also raises moral and ethical questions that demand thoughtful reflection and debate.

AGI systems may make life-or-death decisions, sparking debates on the ethics of delegating such authority to machines. Balancing AGI's potential benefits with the moral implications requires thoughtful analysis. For example, self-driving cars may need to make split-second decisions in emergency situations, raising concerns about the ethical frameworks guiding such choices.

Artificial intelligence has the potential to widen the wealth gap, as those with access to its benefits gain a disproportionate advantage. Addressing this potential inequality is crucial in shaping AGI's development and deployment. Policymakers must consider strategies to ensure that AGI advancements benefit all of society rather than exacerbate existing disparities.

As AGI systems collect and process vast amounts of data, concerns about privacy and security arise. Striking a balance between leveraging AGI's capabilities and protecting individual rights presents a complex challenge that demands careful consideration.

For society to avoid a Don't Look Up scenario, action must be taken to raise awareness, foster ethical discussions, and implement safeguards.

Informing the public about AGI risks is crucial to building a shared understanding. As awareness grows, society will also be better equipped to address AGI's challenges and benefits responsibly. Educational initiatives, public forums, and accessible resources can play a vital role in promoting informed discourse on AGI's implications.

Tackling AGI's risks requires international cooperation. By working together, nations can develop a shared vision and create guidelines that mitigate the dangers while maximizing AGI's potential. Organizations like OpenAI, the Future of Life Institute, and the Partnership on AI already contribute to this collaborative effort, encouraging responsible AGI development and fostering global dialogue.

Governments have a responsibility to establish regulatory frameworks that encourage safe and ethical AGI development. By setting clear guidelines and promoting transparency, policymakers can help ensure that AGI advancements align with societal values and minimize potential harm.

The parallels between Don't Look Up and the potential dangers of AGI should serve as a wake-up call. While the film satirizes society's apathy, the reality of AGI risks demands our attention. As we forge ahead into this uncharted territory, we must prioritize raising awareness, fostering ethical discussions, and adopting a collaborative approach.

Only then can we address the perils of AGI advancement and shape a future that benefits humanity while minimizing potential harm. By learning from this cautionary tale, we can work together to ensure that AGI's development proceeds with the care, thoughtfulness, and foresight it requires.


Read the original:

AI Dangers Viewed Through the Perspective of Don't Look Up - BeInCrypto

ChatGPT, artificial intelligence, and the news – Columbia Journalism Review

When OpenAI, an artificial intelligence startup, released its ChatGPT tool in November, it seemed like little more than a toy, an automated chat engine that could spit out intelligent-sounding responses on a wide range of topics for the amusement of you and your friends. In many ways, it didn't seem much more sophisticated than previous experiments with AI-powered chat software, such as the infamous Microsoft bot Tay, which was launched in 2016, and quickly morphed from a novelty act into a racism scandal before being shut down, or even Eliza, the first automated chat program, which was introduced way back in 1966. Since November, however, ChatGPT and an assortment of nascent counterparts have sparked a debate not only over the extent to which we should trust this kind of emerging technology, but how close we are to what experts call Artificial General Intelligence, or AGI, which, they warn, could transform society in ways that we don't understand yet. Bill Gates, the billionaire cofounder of Microsoft, wrote recently that artificial intelligence is as revolutionary as mobile phones and the Internet.

The new wave of AI chatbots has already been blamed for a host of errors and hoaxes that have spread around the internet, as well as at least one death: La Libre, a Belgian newspaper, reported that a man died by suicide after talking with a chat program called Chai; based on statements from the man's widow and chat logs, the software appears to have encouraged the user to kill himself. (Motherboard wrote that when a reporter tried the app, which uses an AI engine powered by an open-source version of ChatGPT, it offered different methods of suicide with very little prompting.) When Pranav Dixit, a reporter at BuzzFeed, used FreedomGPT, another program based on an open-source version of ChatGPT which, according to its creator, has no guardrails around sensitive topics, that chatbot praised Hitler, wrote an opinion piece advocating for unhoused people in San Francisco to be shot to solve the city's homeless crisis, [and] used the n-word.

The Washington Post has reported, meanwhile, that the original ChatGPT invented a sexual harassment scandal involving Jonathan Turley, a law professor at George Washington University, after a lawyer in California asked the program to generate a list of academics with outstanding sexual harassment allegations against them. The software cited a Post article from 2018, but no such article exists, and Turley said that he's never been accused of harassing a student. When the Post tried asking the same question of Microsoft's Bing, which is powered by GPT-4 (the engine behind ChatGPT), it repeated the false claim about Turley, and cited an op-ed piece that Turley published in USA Today, in which he wrote about the false accusation by ChatGPT. In a similar vein, ChatGPT recently claimed that a politician in Australia had served prison time for bribery, which was also untrue. The mayor has threatened to sue OpenAI for defamation, in what would reportedly be the first such case against an AI bot anywhere.

According to a report in Motherboard, a different AI chat program, Replika, which is also based on an open-source version of ChatGPT, recently came under fire for sending sexual messages to its users, even after they said they weren't interested. Replika placed limits on the bot's referencing of erotic roleplay, but some users who had come to depend on their relationship with the software subsequently experienced mental-health crises, according to Motherboard, and so the erotic roleplay feature was reinstated for some users. Ars Technica recently pointed out that ChatGPT, for its part, has invented books that don't exist, academic papers that professors didn't write, false legal citations, and a host of other fictitious content. Kate Crawford, a professor at the University of Southern California, told the Post that because AI programs respond so confidently, it's very seductive to assume they can do everything, and it's very difficult to tell the difference between facts and falsehoods.

Joan Donovan, the research director at the Harvard Kennedy School's Shorenstein Center, told the Bulletin of the Atomic Scientists that disinformation is a particular concern with chatbots because AI programs lack any way to tell the difference between true and false information. Donovan added that when her team of researchers experimented with an early version of ChatGPT, they discovered that, in addition to sources such as Reddit and Wikipedia, the software was also incorporating data from 4chan, an online forum rife with conspiracy theories and offensive content. Last month, Emily Bell, the director of Columbia's Tow Center for Digital Journalism, wrote in The Guardian that AI-based chat engines could create a new fake news frenzy.

As I wrote for CJR in February, experts say that the biggest flaw in a large language model like the one that powers ChatGPT is that, while the engines can generate convincing text, they have no real understanding of what they are writing about, and so often insert what are known as hallucinations, or outright fabrications. And it's not just text: along with ChatGPT and other programs have come a similar series of AI image generators, including Stable Diffusion and Midjourney, which are capable of producing believable images, such as the recent photos of Donald Trump being arrested, which were actually created by Eliot Higgins, the founder of the investigative reporting outfit Bellingcat, and a viral image of the Pope wearing a stylish puffy coat. (Fred Ritchin, a former photo editor at the New York Times, spoke to CJR's Amanda Darrach about the perils of AI-created images earlier this year.)

Three weeks ago, in the midst of all these scares, a body called the Future of Life Institute, a nonprofit organization that says its mission is to reduce global catastrophic and existential risk from powerful technologies, published an open letter calling for a six-month moratorium on further AI development. The letter suggested that we might soon see the development of AI systems powerful enough to endanger society in a number of ways, and stated that these kinds of systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. More than twenty thousand people signed the letter, including a number of AI researchers and Elon Musk. (Musk's foundation is the single largest donor to the institute, having provided more than eighty percent of its operating budget. Musk himself was also an early funder of OpenAI, the company that created ChatGPT, but he later distanced himself after an attempt to take over the company failed, according to a report from Semafor. More recently, there have been reports that Musk is amassing servers with which to create a large language model at Twitter, where he is the CEO.)

Some experts found the letter over the top. Emily Bender, a professor of linguistics at the University of Washington and a co-author of a seminal research paper on AI that was cited in the Future of Life open letter, said on Twitter that the letter misrepresented her research and was dripping with #Aihype. In contrast to the letter's vague references to some kind of superhuman AI that might pose profound risks to society and humanity, Bender said that her research focuses on how large language models, like the one that powers ChatGPT, can be misused by existing oppressive systems and governments. The paper that Bender co-published in 2021 was called On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? It asked whether enough thought had been put into the potential risks of such models. After the paper came out, two of Bender's co-authors were fired from Google's AI team. Some believe that Google made that decision because AI is a major focus for the company's future.

As Chloe Xiang noted for Motherboard, Arvind Narayanan, a professor of computer science at Princeton and the author of a newsletter called AI Snake Oil, also criticized the open letter for making it harder to tackle real AI harms, and characterized many of the questions that the letter asked as ridiculous. In an essay for Wired, Sasha Luccioni, a researcher at the AI company Hugging Face, argued that a pause on AI research is impossible because it is already happening around the world, meaning there is no magic button that would halt dangerous AI research while allowing only the safe kind. Meanwhile, Brian Merchant, at the LA Times, argued that all the doom-and-gloom about the risks of AI may spring from an ulterior motive: apocalyptic doomsaying about the terrifying power of AI makes OpenAI's technology seem important, and therefore valuable.

Are we really in danger from the kind of artificial intelligence behind services like ChatGPT, or are we just talking ourselves into it? (I would ask ChatGPT, but I'm not convinced I would get a straight answer.) Even if it's the latter, those talking themselves into it now include regulators both in the US and around the world. Earlier this week, the Wall Street Journal reported that the Biden administration has started examining whether some kind of regulation needs to be applied to tools such as ChatGPT, due to the concerns that the technology could be used to discriminate or spread harmful information. Officials in Italy already banned ChatGPT for alleged privacy violations. (They later stated that the chatbot could return if it meets certain requirements.) And the software is facing possible regulation in a number of other European countries.

As governments are working to understand this new technology and its risks, so, too, are media companies. Often, they are doing so behind the scenes. But Wired recently published a policy statement on how and when it plans to use AI tools. Gideon Lichfield, Wired's global editorial director, told the Bulletin of the Atomic Scientists that the guidelines are designed both to give our own writers and editors clarity on what was an allowable use of AI, as well as for transparency so our readers would know what they were getting from us. The guidelines state that the magazine will not publish articles written or edited by AI tools, except when the fact that it's AI-generated is the whole point of the story.

On the other side of the ledger, a number of news organizations seem more concerned that chatbots are stealing from them. The Journal reported recently that publishers are examining the extent to which their content has been used to train AI tools such as ChatGPT, how they should be compensated and what their legal options are.

Other notable stories:

ICYMI: Free Evan, prosecute the hostage takers

Continue reading here:

ChatGPT, artificial intelligence, and the news - Columbia Journalism Review

Solving The Mystery Of How ChatGPT And Generative AI Can Surprisingly Pick Up Foreign Languages, Says AI Ethics And AI Law – Forbes

AI is able to pick up additional languages, doing so without magic or pixie dust. (Image credit: Getty)

The noted author Geoffrey Willans professed to say that anyone who knows no foreign language knows nothing of their own language.

Do you agree with that bold assertion?

Let's give the matter some serious thought.

First, perhaps we can agree that anyone that knows only one language could be labeled as being monolingual. Their native language is whatever language they have come to know. All other languages are said to be foreign to them, thus, if they opt to learn an additional language we could contend that they have picked up a foreign language.

Second, I assume we can concur that anyone that knows two languages could be given the lofty title of being bilingual. For those that know three or more languages, we will reserve the impressive label of being multilingual. An aspect that we might quibble about consists of how much of a language someone must know in order to be considered fluent enough in that language to count as intrepidly knowing an additional language. Hold onto that vexing question since we'll come back around to it later on herein.

Got a quick question for you.

How are you when it comes to being a language-wielding wizard?

You undoubtedly have friends or colleagues that speak a handful of languages, maybe you do likewise. The odds are that you are probably stronger in just one or two. The other languages are somewhat distant and sketchy in your mind. If push comes to shove, you can at least formulate fundamental sentences and likely comprehend those other languages to some slim degree.

The apex of the language gambit seems to be those amazing polyglots that know a dozen or dozens of languages. It seems nearly impossible to pull off. They imbue languages as easily as wearing a slew of socks and shoes. One moment conveying something elegant in one language and readily jumping over into a different language, nearly at the drop of a hat.

On social media, there are those polyglots that dazzle us by quickly shifting from language to language. They make videos in which they show the surprise and awe of others that admire their ability to effortlessly use a multitude of languages. You have surely wondered whether the polyglot was born with a special knack for languages or whether they truly had to learn many languages in the same way that you learned the two or three that you know. This is the classic question of whether language learning is more so nature versus nurture. We wont be solving that one herein.

There is an important reason that I bring up this weighty discussion overall about being able to use a multitude of languages.

Get yourself ready for the twist.

Maybe sit down and prepare for it.

The latest in generative AI such as ChatGPT and other such AI apps have seemingly been able to pick up additional languages beyond the one or ones that they appeared to have been initially data trained in. AI researchers and AI developers aren't exactly sure why this is attainable. We will address the matter and seek to explore various postulated ways in which this can arise.

The topic has recently become a hot one due to an episode of the famed TV show 60 Minutes that interviewed Google executives. During the interviews, a Google exec stated that their AI app was able to engage in Bengali even though it was said to not have been data trained in that language. This elicited a burst of AI hype, suggesting in this instance that the AI somehow magically made a choice to learn the additional language and proceeded to do so on its own.

Yikes, one might assume, this is surely a sign that these AI apps are converging toward being sentient. How else could the AI make the choice to learn another language and then follow up by learning it? That seems proof positive that contemporary AI is slipping and sliding toward Artificial General Intelligence (AGI), the moniker given to AI that can perform as humans do and otherwise be construed as possessing sentience.

It might be wise to take a deep breath and not fall for these wacky notions.

The amount of fearmongering and anthropomorphizing of AI that is going on right now is beyond the pale. Sadly, it is at times simply a means of garnering views. In other cases, the person or persons involved do not know what they are talking about, or they are being loosey-goosey for a variety of reasons.

In today's column, I'd like to set the record straight and examine the matter of how generative AI such as ChatGPT and other AI apps might be able to pick up additional languages. The gist is that this can be mathematically and computationally explained. We don't need to refer to voodoo dolls or create false incantations to get there.

Logic and sensibility can prevail.

Vital Background About Generative AI

Before I get further into this topic, I'd like to make sure we are all on the same page overall about what generative AI is and also what ChatGPT and its successor GPT-4 are all about. For my ongoing coverage of generative AI and the latest twists and turns, see the link here.

If you are already versed in generative AI such as ChatGPT, you can skim through this foundational portion or possibly even skip ahead to the next section of this discussion. You decide what suits your background and experience.

I'm sure that you already know that ChatGPT is a headline-grabbing AI app devised by AI maker OpenAI that can produce fluent essays and carry on interactive dialogues, almost as though being undertaken by human hands. A person enters a written prompt, ChatGPT responds with a few sentences or an entire essay, and the resulting encounter seems eerily as though another person is chatting with you rather than an AI application. This type of AI is classified as generative AI due to generating or producing its outputs. ChatGPT is a text-to-text generative AI app that takes text as input and produces text as output. I prefer to refer to this as text-to-essay since the outputs are usually of an essay style.

Please know though that this AI and indeed no other AI is currently sentient. Generative AI is based on a complex computational algorithm that has been data trained on text from the Internet and admittedly can do some quite impressive pattern-matching to be able to perform a mathematical mimicry of human wording and natural language. To know more about how ChatGPT works, see my explanation at the link here. If you are interested in the successor to ChatGPT, coined GPT-4, see the discussion at the link here.

There are four primary modes of being able to access or utilize ChatGPT:

The capability of being able to develop your own app and connect it to ChatGPT is quite significant. On top of that capability comes the addition of being able to craft plugins for ChatGPT. The use of plugins means that when people are using ChatGPT, they can potentially invoke your app easily and seamlessly.

I and others are saying that this will give rise to ChatGPT as a platform.

All manner of new apps and existing apps are going to hurriedly connect with ChatGPT. Doing so provides the interactive conversational functionality of ChatGPT. The users of your app will be impressed with the added facility. You will likely get a bevy of new users for your app. Furthermore, if you also provide an approved plugin, this means that anyone using ChatGPT can now make use of your app. This could demonstrably expand your audience of potential users.
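For readers wondering what connecting your own app to ChatGPT looks like in practice, here is a minimal sketch using the OpenAI Python client roughly as it existed when this column appeared. The model name, system prompt, and environment-variable handling are placeholder choices, and the client library's interface has changed across versions, so treat this as an illustrative sketch rather than a definitive integration guide.

```python
import os
import openai  # third-party package: pip install openai

# Placeholder: read your own API key from the environment; never hard-code it.
openai.api_key = os.environ["OPENAI_API_KEY"]

def ask_chatgpt(user_question: str) -> str:
    """Send one user message and return the model's text reply."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # model name current when this was written
        messages=[
            {"role": "system", "content": "You are a helpful assistant inside MyApp."},
            {"role": "user", "content": user_question},
        ],
    )
    return response["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_chatgpt("Summarize what a plugin lets my users do."))
```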

The temptation to have your app connect with ChatGPT is through the roof. Even if you don't create an app, you still might be thinking of encouraging your customers or clients to use ChatGPT in conjunction with your everyday services. The problem though is that if they encroach onto banned uses, their own accounts on ChatGPT will also face scrutiny and potentially be locked out by OpenAI.

As noted, generative AI is pre-trained and makes use of a complex mathematical and computational formulation that has been set up by examining patterns in written words and stories across the web. As a result of examining thousands and millions of written passages, the AI can spew out new essays and stories that are a mishmash of what was found. By adding in various probabilistic functionality, the resulting text is pretty much unique in comparison to what has been used in the training set.
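The "probabilistic functionality" mentioned above is commonly implemented as temperature-controlled sampling over the model's next-word probabilities. The sketch below uses made-up probabilities (no real model is involved) to show why the same prompt can produce different wording on different runs.

```python
import math
import random

# Made-up next-word probabilities for illustration; a real model produces
# these from its learned parameters at every step of generation.
next_word_probs = {"whiskey": 0.40, "soldiers": 0.35, "banjo": 0.20, "spaceship": 0.05}

def sample_with_temperature(probs: dict, temperature: float) -> str:
    # Re-weight the distribution: low temperature sharpens it (more predictable),
    # high temperature flattens it (more surprising word choices).
    weights = {w: math.exp(math.log(p) / temperature) for w, p in probs.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    running = 0.0
    for word, weight in weights.items():
        running += weight
        if r <= running:
            return word
    return word  # fallback for floating-point edge cases

random.seed()  # different runs give different word sequences
print([sample_with_temperature(next_word_probs, 0.7) for _ in range(5)])
```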

There are numerous concerns about generative AI.

One crucial downside is that the essays produced by a generative-based AI app can have various falsehoods embedded, including manifestly untrue facts, facts that are misleadingly portrayed, and apparent facts that are entirely fabricated. Those fabricated aspects are often referred to as a form of AI hallucinations, a catchphrase that I disfavor but lamentably seems to be gaining popular traction anyway (for my detailed explanation about why this is lousy and unsuitable terminology, see my coverage at the link here).

Another concern is that humans can readily take credit for a generative AI-produced essay, despite not having composed the essay themselves. You might have heard that teachers and schools are quite concerned about the emergence of generative AI apps. Students can potentially use generative AI to write their assigned essays. If a student claims that an essay was written by their own hand, there is little chance of the teacher being able to discern whether it was instead forged by generative AI. For my analysis of this student and teacher confounding facet, see my coverage at the link here and the link here.

There have been some zany outsized claims on social media about Generative AI asserting that this latest version of AI is in fact sentient AI (nope, they are wrong!). Those in AI Ethics and AI Law are notably worried about this burgeoning trend of outstretched claims. You might politely say that some people are overstating what today's AI can do. They assume that AI has capabilities that we haven't yet been able to achieve. That's unfortunate. Worse still, they can allow themselves and others to get into dire situations because of an assumption that the AI will be sentient or human-like in being able to take action.

Do not anthropomorphize AI.

Doing so will get you caught in a sticky and dour reliance trap of expecting the AI to do things it is unable to perform. With that being said, the latest in generative AI is relatively impressive for what it can do. Be aware though that there are significant limitations that you ought to continually keep in mind when using any generative AI app.

One final forewarning for now.

Whatever you see or read in a generative AI response that seems to be conveyed as purely factual (dates, places, people, etc.), make sure to remain skeptical and be willing to double-check what you see.

Yes, dates can be concocted, places can be made up, and elements that we usually expect to be above reproach are all subject to suspicions. Do not take what you read at face value and keep a skeptical eye when examining any generative AI essays or outputs. If a generative AI app tells you that President Abraham Lincoln flew around the country in a private jet, you would undoubtedly know that this is malarkey. Unfortunately, some people might not realize that jets weren't around in his day, or they might know but fail to notice that the essay makes this brazen and outrageously false claim.

A strong dose of healthy skepticism and a persistent mindset of disbelief will be your best asset when using generative AI.

Into all of this comes a slew of AI Ethics and AI Law considerations.

There are ongoing efforts to imbue Ethical AI principles into the development and fielding of AI apps. A growing contingent of concerned AI ethicists is trying to ensure that efforts to devise and adopt AI take into account a view of doing AI For Good and averting AI For Bad. Likewise, there are proposed new AI laws being bandied around as potential solutions to keep AI endeavors from going amok on human rights and the like. For my ongoing and extensive coverage of AI Ethics and AI Law, see the link here and the link here, just to name a few.

The development and promulgation of Ethical AI precepts are being pursued to hopefully prevent society from falling into a myriad of AI-induced traps. For my coverage of the UN AI Ethics principles as devised and supported by nearly 200 countries via the efforts of UNESCO, see the link here. In a similar vein, new AI laws are being explored to try and keep AI on an even keel. One of the latest takes is the proposed AI Bill of Rights that the U.S. White House recently released to identify human rights in an age of AI, see the link here. It takes a village to keep AI and AI developers on a rightful path and deter the purposeful or accidental underhanded efforts that might undercut society.

I'll be interweaving AI Ethics and AI Law related considerations into this discussion.

Figuring Out The Languages Conundrum

We are ready to further unpack this thorny matter.

I would like to start by discussing how humans seem to learn languages. I do so cautiously in the sense that I am not at all going to suggest or imply that todays AI is doing anything of the same. As earlier stated, it is a misguided and misleading endeavor to associate the human mind with the mathematical and computational realm of contemporary AI.

Nonetheless, some overarching reveals might be useful to note.

We shall begin by considering the use case of humans that know only one language, ergo being monolingual. If you know only one language, there is an interesting argument to be made that you might be able to learn a second language when undergoing persistent exposure to that second language.

Consider for example this excerpt from a research study entitled English Only? Monolinguals In Linguistically Diverse Contexts Have An Edge In Language Learning by researchers Kinsey Bice and Judith Kroll:

The crux is that your awareness of a single language can be potentially leveraged toward learning a second language by mere immersion into that second language. This is described as arising when in a linguistically diverse context. You might not necessarily grasp what those words in the other language mean, but you kind of catch on by exposure to the language and presumably due to your already mindful familiarity with your primary language.

Note that you didn't particularly have to be told how the second language works.

Of course, most people take a class that entails learning a second language and are given explicit instruction. That likely is the prudent path. The other possibility is that via a semblance of mental osmosis or mental gymnastics, you can gradually glean a second language. We can make a reasonable assumption that this is due to already knowing one language. If you didn't know any language at all, presumably you wouldn't have the mental formulation that could so readily pattern onto a second language. You would be starting from a veritable blank slate (well, maybe, since there is ongoing debate over how much of our language capacity is hard-wired in our brains versus learned).

This covers a vital aspect: when you know one language, you possibly do not need explicit teaching about another language in order to learn that second language. We seem to be able to use a sense of language structure and patterns to figure out a second language. Not everyone can easily do so. It might be that you would struggle mightily over a lengthy period of time to comprehend the second language. A faster path would usually consist of explicit instruction.

But anyway, we can at times make that mental leap.

Lets explore another angle to this.

There is an intriguing postulation that if you learn a second language as a child, the result is that you will be more amenable to learning additional languages as an adult. Those people that are only versed in a single language throughout childhood allegedly will have a harder time learning a second language as an adult.

Consider this excerpt from a research study entitled A Critical Period For Second Language Acquisition: Evidence From 2/3 Million English Speakers by researchers Joshua Hartshorne, Joshua Tenenbaum, and Steven Pinker:

In short, a commonly suspected phenomenon is that a child that learns only one language during childhood is not somehow formulating a broadened capacity for learning languages all told. If they learn at least a second language, in theory, this is enabling their mind to discern how languages contrast and compare. In turn, this sets them up to be more versed in that second language than an adult who learns the second language for the first time in adulthood. Plus, the child is somewhat prepared to learn a third language or additional languages throughout childhood and as an adult.

The idea too is that an adult that only learned one language as a child has settled into a one-language mode. They haven't had to stretch their mind to cope with a second language. Thus, even though as an adult they should presumably be able to learn a second language, they might have difficulty doing so because they had not previously formulated the mentally beneficial generic structures and patterns to tackle a second language.

Please know that there is a great deal of controversy associated with those notions. Some agree with those points, some do not. Furthermore, the explanations for why this does occur, assuming it does occur, vary quite a bit.

If you want a boatload of controversy, here's more such speculation that gets a lot of heated discourse on this topic. Hold onto your hat.

Consider this excerpt from a research study entitled The Benefits Of Multilingualism To The Personal And Professional Development Of Residents Of The US by Judith Kroll and Paola Dussias:

The contention is that individuals with exposure to multiple languages during childhood benefit in many ways including greater openness to other languages and new learning itself.

Life though is not always a bed of roses. A concern is that a child might get confused or confounded when trying to learn more than one language during their childhood. The claim is that a child might not be able to focus on what is considered their primary language. They could inadvertently mix in the other language and end up in a nowhere zone. They aren't able to pinpoint their main language, nor are they able to pinpoint the second language.

Parents are presented with a tough choice. Do you proceed to have your child learn a second language, doing so under the hopes and belief that this is the best means of aiding your child toward language learning and perhaps other advantages of mental stimulation? Or do you focus on one language alone, believing that once they are older it might be better to have them then attempt a second language, rather than doing so as a child?

Much of our existing educational system has landed on the side that knowing a second language as a child seems to be the more prudent option. Schools typically require a minimum amount of second language learning during elementary school, and ramp this up in high school. Colleges tend to do so as well.

Returning to the cited study above, here's what the researchers further state:

The expression often used is that when you know two or more languages, you have formulated a mental juggling capacity that allows you to switch from language to language. To some extent, the two or more languages might be construed as mental competitors, fighting against each other to win in your mental contortions when interpreting language. Some people relish this. Some people have a hard time with it.

I think that covers enough of the vast topic of language learning for the purposes herein. As mentioned, the language arena is complex and a longstanding matter that continues to be bandied around. Numerous theories exist. It is a fascinating topic and one that obviously is of extreme significance to humankind due to our reliance on language.

Imagine what our lives would be like if we had no language to communicate with. Be thankful for our wondrous language capacities, no matter how they seem to arise.

Generative AI And The Languages Affair

We are now in a position to ease into the big question about generative AI such as ChatGPT and the use of languages.

AI researcher Jan Leike at OpenAI tweeted this intriguing question on February 13, 2023:

And within the InstructGPT research paper that was being referred to, this point is made about the languages in the dataset that was used:

This brings us to my list of precepts about generative AI and the pattern-matching associated with languages, specifically:

A quick unpacking might be helpful.

First, realize that words are considered to be objects by most generative AI setups.

As I've discussed regarding ChatGPT and GPT-4, see the link here, text or words are divided up into tokens that are approximately 3 letters or so in length. Each token is assigned a number. The numbers are used to do the pattern matching amidst the plethora of words that are, for example, scanned during the data training of the generative AI. All of it is tokenized and used in a numeric format.

The text you enter as a prompt is encoded into tokenized numbers. The response formulated by the generative AI is a series of tokenized numbers that are then mapped back into the corresponding letters and word segments for presentation to you when using an AI app.
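
To make this tokenization notion a bit more concrete, here is a deliberately oversimplified sketch in Python. It is purely illustrative and not how ChatGPT or any other generative AI app is actually built; real systems use learned byte-pair-encoding tokenizers with variable-length chunks, whereas this toy version simply chops text into fixed three-character pieces, and the function names and sample text are my own hypothetical inventions.

```python
# Toy illustration only: chop text into ~3-character chunks and map each chunk
# to a number, mimicking the text-in, numbers-out, numbers-back-to-text flow.
# Real generative AI tokenizers learn their chunk boundaries from data.

def build_vocab(corpus: str, chunk_size: int = 3) -> dict:
    """Assign an integer id to every distinct chunk seen in the corpus."""
    chunks = [corpus[i:i + chunk_size] for i in range(0, len(corpus), chunk_size)]
    return {chunk: idx for idx, chunk in enumerate(dict.fromkeys(chunks))}

def encode(text: str, vocab: dict, chunk_size: int = 3) -> list:
    """Turn a prompt into a list of token ids (unknown chunks become -1)."""
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    return [vocab.get(chunk, -1) for chunk in chunks]

def decode(token_ids: list, vocab: dict) -> str:
    """Map token ids back to text for presentation to the user."""
    reverse = {idx: chunk for chunk, idx in vocab.items()}
    return "".join(reverse.get(idx, "?") for idx in token_ids)

vocab = build_vocab("the cat sat on the mat")
ids = encode("the cat", vocab)
print(ids)                 # [0, 1, 6] -- the prompt as token numbers
print(decode(ids, vocab))  # "the cat" -- the numbers mapped back to text
```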

The words being scanned during data training are typically sourced on the Internet in terms of passages of text that are posted on websites. Only a tiny fraction of the text on the Internet is usually involved in this scanning for data training and pattern-matching formulation purposes. A mathematical and computational network structure is devised that attempts to statistically associate words with other words, based on how humans use words and as exhibited via the Internet sites being scanned.
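
To give a feel for what statistically associating words with other words means in its most stripped-down form, consider the following toy sketch. It merely counts which word tends to follow which in a tiny snippet of text; actual generative AI relies on enormous neural networks rather than a lookup table, and the sample sentence and function names here are hypothetical, chosen only for illustration.

```python
# Minimal sketch of word-to-word statistical association: count which word
# follows which in some training text, then use the counts to guess a likely
# next word. Real systems use neural networks, not a lookup table.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat and the cat slept"

follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def most_likely_next(word: str) -> str:
    """Return the word most often seen after `word`, or '' if never seen."""
    if word not in follows:
        return ""
    return follows[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" -- seen twice after "the", versus "mat" once
print(most_likely_next("cat"))  # "sat" or "slept" -- tied in this tiny sample
```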

You might find it of interest that there are concerns that this widespread text scanning is possibly violating Intellectual Property (IP) rights and entails plagiarism, see my analysis at the link here. It is an issue being pursued in our courts and we'll need to wait and see how the courts rule on this.

By and large, the generative AI that you hear about is data trained on words from the Internet that are in English, including for example the data training of ChatGPT. Though the bulk of the words encountered during the Internet scanning was in English, there is nonetheless some amount of foreign or other-language words that are also likely to be encountered. This could be by purposeful design as guided by the AI developers, but usually it is more likely a happenstance arising from casting, shall we say, a rather wide net when sauntering across a swath of the Internet.

It is like aiming to catch fish in your fishnet and meanwhile, you just so happen to also get some lobsters, crabs, and other entities along the way.

What happens with those other entities that are caught in the fishnet?

One possibility is that the pattern matching of the generative AI opts to treat those encountered words as a separate category in comparison to the English words being examined. They are outliers in contrast to the preponderance of words being patterned on. In a sense, each such identification of foreign words can be classified as belonging to a different potential language. Per my analogy, if fish were being scanned, the appearance of a lobster or a crab would be quite different, and ergo could be mathematically and computationally placed into a pending separate category.

Unless the AI developers have been extraordinarily cautious, the chances are that some notable level of these non-English words will be encapsulated during the data training across the selected portions of the Internet. One devised approach would be to simply discard any words that are calculated as possibly being non-English. This is not usually the case. Most generative AI is typically programmed to take a broad-brush approach.
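
As a purely hypothetical illustration of that narrow-versus-broad-brush choice, here is a short sketch that either discards words that look non-English or keeps them in provisionally separate buckets. The character-range heuristic is crude and entirely my own contrivance; real generative AI does not identify languages this way, but the sketch conveys the idea of setting aside the lobsters and crabs rather than throwing them back.

```python
# Hypothetical sketch of two data-training choices: discard words that look
# non-English, or keep them in separate provisional categories (broad-brush).
# The script-range heuristic below is deliberately crude and illustrative only.

def guess_category(word: str) -> str:
    """Crudely bucket a word by the script its characters belong to."""
    if all(ch.isascii() for ch in word):
        return "likely_english_or_latin_script"
    if any('\u0980' <= ch <= '\u09ff' for ch in word):
        return "bengali_script"
    if any('\u4e00' <= ch <= '\u9fff' for ch in word):
        return "cjk_script"
    return "other_script"

def bucket_training_words(words: list, discard_non_english: bool = False) -> dict:
    """Either drop non-English-looking words or keep them in separate buckets."""
    buckets = {}
    for word in words:
        category = guess_category(word)
        if discard_non_english and category != "likely_english_or_latin_script":
            continue  # the narrow approach: throw the lobsters back
        buckets.setdefault(category, []).append(word)
    return buckets

sample = ["language", "বাংলা", "模型", "pattern"]
print(bucket_training_words(sample))                            # broad-brush: all kept, grouped by script
print(bucket_training_words(sample, discard_non_english=True))  # narrow: only the Latin-script words survive
```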

The point is that a generative AI is unlikely to be purely single-language.

I've been discussing the case of using English as the primary language being patterned on. All other languages would be considered foreign with respect to English in that instance. Of course, we could readily choose otherwise, and AI researchers have indeed chosen other languages to be the primary language for their generative AI efforts, in which case English becomes the foreign language.

For purposes of this discussion, we'll continue with the case of English as the selected primary language. The same precepts apply even if some other language is the selected primary language.

We can usually assume that the data training of a generative AI is going to include encounters with a multitude of other languages. If the encounters are sufficiently numerous, the mathematical and computational pattern matching will conventionally treat those as a separate category and pattern-match them within this newly set-aside category. Furthermore, the pattern matching can mathematically and computationally broaden as the encounters aid in ferreting out the patterns of one language versus the patterns of a different language.

Here are some handy rules of thumb about what the generative AI is calculating:

As the pattern matching gets enhanced via encounters with other languages, there is also a side benefit when yet another new language is encountered: the odds are that less of that language is needed to extrapolate what the language consists of. Smaller and smaller sample sizes suffice for the extrapolation.

There is an additional corollary associated with that hypothesis.

Suppose that an additional language, which we'll refer to for convenience as language Z, had not been encountered at all during the data training. Later on, a user decides to enter a prompt into the generative AI that consists of that particular language Z.

You might at first assume that the generative AI would summarily reject the prompt as unreadable because the user is using a language Z that has not previously been encountered. Assuming that the AI developers were mindful about devising the generative AI to fully attempt to respond to any user prompt, the generative AI might shift into a language pattern-matching mode programmatically and try to pattern match on the words that otherwise seem to be outside of the norm or primary language being used.

This could account for the possibility that such a user-entered prompt elicits a surprising response by the generative AI in that the emitted response is also shown in the language Z, or that a response in say English is emitted and has seemingly discerned part of what the prompt was asking about. You see, the newly encountered language Z is parsed based on the pattern-matching generalizations earlier formed during data training as to the structure of languages.

During the 60 Minutes interview with Google executives, the exec that brought up the instance of generative AI that suddenly and surprisingly seemed to be able to respond to a prompt that was in Bengali, further stated that after some number of prompts in Bengali, the generative AI was able to seemingly translate all of Bengali. Here's what James Manyika, Google's SVP, stated: "We discovered that with very few amounts of prompting in Bengali, it can now translate all of Bengali."

See the original post here:

Solving The Mystery Of How ChatGPT And Generative AI Can Surprisingly Pick Up Foreign Languages, Says AI Ethics And AI Law - Forbes

SenseAuto Empowers Nearly 30 Mass-produced Models Exhibited at Auto Shanghai 2023 and Unveils Six Intelligent Cabin Products – Yahoo Finance

SHANGHAI, April 20, 2023 /PRNewswire/ -- The Shanghai International Automobile Industry Exhibition ("Auto Shanghai 2023"), themed "Embracing the New Era of the Automotive Industry," has been held with a focus on the innovative changes in the automotive industry brought about by technology. SenseAuto, the Intelligent Vehicle Platform of SenseTime, made its third appearance at the exhibition with the three-in-one product suite of intelligent cabin, intelligent driving, and collaborative cloud, showcasing its full-stack intelligent driving solution and six new intelligent cabin products designed to create the future cabin experience with advanced perception capabilities. Additionally, nearly 30 models produced in collaboration with SenseAuto were unveiled at the exhibition, further emphasizing its industry-leading position.

SenseAuto made its third appearance at Auto Shanghai

At the Key Tech 2023 forum, Prof. Wang Xiaogang, Co-founder, Chief Scientist and President of Intelligent Automobile Group, SenseTime, delivered a keynote speech emphasizing that smart autos provide ideal scenarios for AGI (Artificial General Intelligence) to facilitate closed-loop interactions between intelligent driving and passenger experiences in the "third living space", which presents endless possibilities.

SenseAuto empowers nearly 30 mass-produced models showcased at Auto Shanghai 2023

In 2022, SenseAuto Cabin and SenseAuto Pilot products were adapted and delivered to 27 vehicle models with more than 8 million new pipelines. These products now cover more than 80 car models from over 30 automotive companies, confirming SenseAuto's continued leadership in the industry.

In the field of intelligent driving, SenseAuto has established mass-production partnerships with leading automakers in China, such as GAC and Neta. At the exhibition, SenseAuto showcased the GAC AION LX Plus, which leverages SenseAuto's stable surround BEV (Bird's-Eye-View) perception and powerful general target perception capabilities to create a comprehensive intelligent Navigated Driving Assist (NDA) that is capable of completing various challenging perception tasks. The Neta S, another exhibited model at the show, is also equipped with SenseAuto's full-stack intelligent driving solution which provides consumers with a reliable and efficient assisted driving experience in highway scenarios.


In the field of intelligent cabin, SenseAuto is committed to developing the automotive industry's most influential AI empowered platform with the aim of providing extremely safe, interactive, and personalized experiences for users. The NIO ES7 model exhibited supports functions such as driver fatigue alerts, Face ID, and child presence detection. SenseAuto's cutting-edge visual AI technology has boosted the accuracy of driver attention detection by 53% in long-tail scenarios, and by 47% in complex scenarios involving users with narrow-set eyes, closed eyes, and backlighting.

The highly anticipated ZEEKR X model showcased features from SenseAuto's groundbreaking intelligent B-pillar interactive system, a first-of-its-kind innovation that allows for contactless unlocking and entry. Other models on display that boast SenseAuto's cutting-edge DMS (Driver Monitoring System) and OMS (Occupant Monitoring System) technologies include Dongfeng Mengshi 917, GAC's Trumpchi E9, Emkoo, as well as the M8 Master models. Moreover, HiPhi has collaborated with SenseAuto on multiple Smart Cabin features and Changan Yida is equipped with SenseAuto's health management product, which can detect various health indicators of passengers in just 30 seconds, elevating travel safety to new heights.

Six innovative smart cabin features for an intelligent "third living space"

SenseAuto is at the forefront of intelligent cabin innovations, with multi-modal interaction that integrates vision, speech, and natural language understanding. SenseTime's newly launched "SenseNova" foundation model set, which introduces a variety of foundation models and capabilities in natural language processing and content generation, such as digital human, opens up numerous possibilities for the smart cabin as a "third living space".

SenseAuto presented a futuristic demo cabin at Auto Shanghai 2023, featuring an AI virtual assistant that welcomes guests and directs them to their seats. In addition, SenseTime's latest large-scale language model (LLM), "SenseChat", interacted with guests and provided personalized content recommendations. The "SenseMirage" text-to-image creation platform has also been integrated with the exhibition cabin for the first time. With the help of SenseTime's AIGC (AI-Generated Content) capabilities, guests can enjoy a fun-filled travel experience with various styles of photos generated for them.

At the exhibition, SenseAuto unveiled six industry-first features including Lip-Reading, Guard Mode, Intelligent Rescue, Air Touch, AR Karaoke and Intelligent Screensaver. With six years of industry experience, SenseAuto has accumulated to date a portfolio of 29 features, of which, over 10 are industry-firsts.

SenseNova accelerates mass-production of smart driving

SenseAuto is revolutionizing the autonomous driving industry with its full-stack intelligent driving solution, which integrates driving and parking. The innovative SenseAuto Pilot Entry is a cost-effective solution that uses parking cameras for driving functions. SenseAuto's parking feature supports cross-layer parking lot routing, trajectory tracking, intelligent avoidance, and target parking functions to fulfill multiple parking needs in multi-level parking lots.

SenseNova has enabled SenseAuto to achieve the first domestic mass production of BEV perception and pioneer the automatic driving GOP perception system. SenseAuto is proactively driving innovation in the R&D of autonomous driving technology, leveraging SenseTime's large model system. Its self-developed UniAD has become the industry's first perception and decision intelligence integrated end-to-end autonomous driving solution. The large model is also used for automated data annotation and product testing, which has increased the model iteration efficiency by hundreds of times.

SenseAuto's success is evident in its partnerships with over 30 automotive manufacturers and more than 50 ecosystem partners worldwide. With plans to bring its technology to over 31 million vehicles in the next few years, SenseAuto is leading the way in intelligent vehicle innovation. Leveraging the capabilities of SenseNova, SenseAuto is poised to continue riding the wave of AGI and enhancing its R&D efficiency and commercialization process towards a new era of human-vehicle collaborative driving.

About SenseTime: https://www.sensetime.com/en/about-index#1

About SenseAuto: https://www.sensetime.com/en/product-business?categoryId=1095&gioNav=1

View original content to download multimedia:https://www.prnewswire.com/apac/news-releases/senseauto-empowers-nearly-30-mass-produced-models-exhibited-at-auto-shanghai-2023-and-unveils-six-intelligent-cabin-products-301801980.html

SOURCE SenseTime

Continued here:

SenseAuto Empowers Nearly 30 Mass-produced Models Exhibited at Auto Shanghai 2023 and Unveils Six Intelligent Cabin Products - Yahoo Finance

Tim Sweeney, CD Projekt, and Other Experts React to AI’s Rise, and … – IGN

This feature is part of AI Week. For more stories, including How AI Could Doom Animation and comments from experts like Tim Sweeney, check out our hub.

All anyone wants to talk about in the games industry is AI. The technology, once a twinkle in the eye of sci-fi writers and futurists, has shot off like a bottle rocket. Every day we're greeted with fascinating and perturbing new advances in machine learning. Right now, you can converse with your computer on ChatGPT, sock-puppet a celebrity's voice with ElevenLabs, and generate a slate of concept art with MidJourney.

It is perhaps only a matter of time before AI starts making significant headway in the business of game development, so to kick off AI week at IGN, we talked to a range of experts in the field about their hopes and fears for this brave new world, and some are more skeptical than you'd expect.

AI Week Roundtable: Meet the Games Industry Experts

Pawel Sasko, CD Projekt Red Lead Quest Designer: I really believe that AI, and AI tools, are going to be just the same as when Photoshop was invented. You can see it throughout the history of animation. From drawing by hand to drawing on a computer, people had to adapt and use the tools, and I think AI is going to be exactly that. It's just going to be another tool that we'll use for productivity and game development.

Tim Sweeney, Epic Games CEO: I think there's a long sorting out process to figure out how all that works and it's going to be complicated. These AI technologies are incredibly effective when applied to some really bulk forms of data where you can download billions of samples from existing projects and train on them, but that works for text and it works for graphics and maybe it will work for 3D objects as well, but it's not going to work for higher level constructs like games or the whole of a video game. There's just no training function that people know that can drive a game like that. I think we're going to see some really incredible advances and actual progress mixed in with the hype cycle where a lot of crazy stuff is promised. Nobody's going to be able to deliver.

Michael Spranger, COO of Sony AI: I think AI is going to revolutionize the largeness of gaming worlds; how real they feel, and how you interact with them. But I also think it's going to have a huge impact on production cycles. Especially in this era of live-services. We'll produce a lot more content than we did in the past.

Julian Togelius, Associate Professor of Computer Science at New York University, and co-author of the textbook Artificial Intelligence and Games: Long-term, we're going to see every part of game development co-created with AI. Designers will collaborate with AI on everything from prototyping, to concept art, to mechanics, balancing, and so on. Further on, we might see games that are actually designed to use AI during their runtime.

Pawel Sasko: There's actually many companies doing internal R&D of a specific implementation of not MidJourney especially, but literally just art tools like this, so that when you're in early concept phases, you're able to generate as many ideas as you can and just basically pick whatever works actually for you and then give it to an artist who actually developed that direction. I think it's a pretty intriguing direction because it opens up the doors that you wouldn't think of. And again, as an artist, we are just always limited by our skills that come up from all the life experiences and everything we have consumed artistically, culturally before. And AI doesn't have this limitation in a way. We can feed it so many different things, therefore it can actually propose so many different things that we wouldn't think of. So I think as a starting point or maybe just as a brainstorming tool, this could be interesting.

Michael Spranger: I think of AI as a creativity unlocking tool. There are so many more things you can do if you have the right tools. We see a rapid deployment of impact of this technology in content creation possibilities from 3D, to sound, to musical experiences, to what you're interacting with in a world. All of that is going to get much better.

Julian Togelius: Everybody looks at the image generation and text generation and says, 'Hey, we can just pop that into games.' And, of course, we see, like, a proliferation of unserious, sometimes venture-capital-funded actors coming in and claiming that they're going to do all of your game art with MidJourney. These people usually don't know anything about game development. There's a lot of that going around. So I like to say that generating just an image is kind of the easy part. Every other part of game content, including the art, has so many functional aspects. Your character model must work with animations, your level must be completable. That's the hard part.

Tim Sweeney: It's not synthesizing amazing new stuff, it's really just rewriting data that already exists. So, either you ask it to write a sorting algorithm in Python and it does that, but it's really just copying the structure of somebody else's code that it trained on. You tell it to solve a problem that nobody's solved before or the data it hasn't seen before and it doesn't have the slightest idea what to do about it. We have nothing like artificial general intelligence. The generated art characters have six or seven fingers, they just don't know that people have five fingers. They don't know what fingers are and they don't know how to count. They don't really know anything other than how to reassemble pixels in a statistically common way. And so, I think we're a very long way away from that, providing the kind of utility a real artist provides.

Sarah Bond, Xbox Head of Global Gaming Partnership and Development: We're in the early days of it. Obviously we're in the midst of huge breakthroughs. But you can see how it's going to greatly enhance discoverability that is actually customized to what you really care about. You can actually have things served up to you that are very, very AI driven. "Oh my gosh, I loved Tunic. What should I do next?"

Tim Sweeney: I'm not sure yet. It's funny, we're pushing the state of the art in a bunch of different areas, but [Epic] is really not touching generative AI. We're amazed at what our own artists are doing in their hobby projects, but all these AI tools, data use is under the shadow, which makes the tools unusable by companies with lawyers essentially because we don't know what authorship claims might exist on the data.

Julian Togelius: I don't think it will affect anyone more than any other technology that forces people to learn new tools. You have to keep learning new tools or otherwise you'll become irrelevant. People will become more productive, and generate faster iterations. Someone will say, "Hey, this is a really interesting creature you've created, now give me 10,000 of those that differ slightly." People will master the tools. I don't think they will put anyone out of a job as long as you keep rolling with the punches.

Pawel Sasko: I think that the legal sphere is going to catch up with AI generation eventually, with what to do in these situations to regulate them. I know a lot of voice actors are worried about the technology, because the voice is also a distinct element of a given actor, not only the appearance and the way of acting. Legal is always behind us.

Michael Spranger: The relationship with creative people is really important to us. I don't think that relationship will change. When I go watch a Stanley Kubrick movie, I'm there to enjoy his creative vision. For us, it's important to make sure that those people can preserve and execute those creative visions, and that AI technology is a tool that can help make that happen.

Julian Togelius: Definitely. If you have a team that has deep expertise in every field, you're at an advantage. But I think we're gonna get to the point where, like, you only need to know a few fields to make a game, and have the AI tools be non-human stand-ins for other fields of expertise. If you're a two-person team and you don't have an animator, you can ask the AI to do the animation for you. The studio can make a nice looking game even though they don't have all the resources. That's something I'm super optimistic about.

Tim Sweeney: I think the more common case, which we're seeing really widely used in the game industry is an artist does a lot of work to build an awesome asset, but then the procedural systems and the animation tools and the data scanning systems just blow it up to an incredible level.

Michael Spranger: Computer science in general has a very democratizing effect. That is the history of the field. I think these tools might inspire more people to express their creativity. This is really about empowering people. We're going to create much more content that's unlocked with AI, and I think it will have a role to play in both larger and smaller studios.

Michael Spranger: I think what makes this different is that the proof is in the pudding. Look at what Kazunori Yamauchi said about GT Sophy, [the AI-powered driver recently introduced to Gran Turismo 7]: there was a 25-year period where they built the AI in Gran Turismo in a specific way, and Yamauchi is basically saying that this is a new chapter. That makes a difference for me. When people are saying, "I haven't had this experience before with a game. This is qualitatively different." It's here now, you can experience it now.

Kajetan Kasprowicz, CD Projekt Red Cinematic Designer: Someone at GDC once gave a talk that basically said, "Who will want to play games that were made by AI?" People will want experiences created by human beings. The technology is advancing very fast and we kind of don't know what to do with it. But I think there will be a consensus on what we want to do as societies.

Julian Togelius: AI has actual use-cases, and it works, whereas all of the crypto shit was ridiculous grifting by shameless people. I hate that people associate AI with that trend. On the other hand you have something like VR, which is interesting technology that may, or may not, be ready for the mass market someday. Compare that to AI, which has hundreds of use-cases in games and game development.

Luke Winkie is a freelance writer at IGN.

See original here:

Tim Sweeney, CD Projekt, and Other Experts React to AI's Rise, and ... - IGN

GCHQ chief's warning to ministers over risks of AI – The Independent

GCHQ chief Sir Jeremy Fleming has warned ministers about the risks posed by artificial intelligence (AI), amid growing debates about how to regulate the rapidly developing technology.

Downing Street gave little detail about what specific risks the GCHQ boss warned of but said the update was a clear-eyed look at the potential for things like disinformation and the importance of people being aware of that.

Prime minister Rishi Sunak used the same Cabinet meeting on Tuesday to stress the importance of AI to UK national security and the economy, No 10 said.

A readout of the meeting said ministers agreed on the transformative potential of AI and the vital importance of retaining public confidence in its use and the need for regulation that keeps people safe without preventing innovation.

The prime minister concluded Cabinet by saying that given the importance of AI to our economy and national security, this could be one of the most important policies we pursue in the next few years which is why we must get this right, the readout added.

Asked if the potential for an existential threat to humanity from AI had been considered, the PM's official spokesperson said: "We are well aware of the potential risks posed by artificial general intelligence."

The spokesperson said Michelle Donelan's science ministry was leading on that issue, but the government's policy was to have appropriate, flexible regulation which can move swiftly to deal with what is a changing technology.

"As the public would expect, we are looking to both make the most of the opportunities but also to guard against the potential risk," the spokesperson added.

The government used the recent refresh of the integrated review to launch a new government-industry AI-focused task force on the issue, modelled on the vaccines task force used during the Covid pandemic.

Italy last month said it would temporarily block the artificial intelligence software ChatGPT amid global debate about the power of such new tools.

The AI systems powering such chatbots, known as large language models, are able to mimic human writing styles based on the huge trove of digital books and online writings they have ingested.

Mr Sunak, who created a new Department for Science, Innovation & Technology in a Whitehall reshuffle earlier this year, is known to be enthusiastic about making the UK a science superpower.

See the original post here:

GCHQ chiefs warning to ministers over risks of AI - The Independent

How would a Victorian author write about generative AI? – Verdict

The Victorian era was one transformed by the industrial revolution. The telegraph, telephone, electricity, and steam engine are key examples of life-changing technologies and machinery.

It is not surprising, therefore, that this real-life innovation sparked the imagination of writers like Robert Stevenson, Jules Verne, and H.G. Wells.

These authors imagined time machines, space rockets, and telecommunication. Even Mark Twain wrote about mind-travelling, imagining a technology similar to the modern-day internet in 1898. Motifs such as utopias and dystopias became popular in literature as academics debated the scientific, cultural, and psychological impact of technology.

Robert Stevenson's Strange Case of Dr Jekyll and Mr Hyde is another classic example. It explores the dangers of unchecked ambition in scientific experimentation through the evil, murderous alter ego Mr. Hyde. Mary Shelley's Frankenstein unleashes a monster, a living being forged out of non-living materials. These stories spoke to the fear among pious Victorian society that playing God would have deadly consequences.

In an FT op-ed, AI expert and angel investor Ian Hogarth refers to artificial general intelligence (AGI) as God-like AI for its predicted ability to generate new scientific knowledge independently and perform all human tasks. The article displayed both excitement and trepidation at the technology's potential.

According to GlobalData, there have been over 5,500 news items relating to AI in the past six months. Opinion ranges from unbridled optimism that AI will revolutionize the world, to theories of an apocalyptic future where machines will rise to render humanity obsolete.

In April 2023, The Future of Life Institute wrote an open letter calling for a six-month pause on developing AI systems that can compete with human-level intelligence, co-signed by tech leaders such as Elon Musk and Steve Wozniak. The letter posed the question Should we risk the loss of control of our civilization? as AI becomes more powerful. Over 3,000 people have signed it.

These arguments are the same as the talking points of Victorian sceptics on technological advancements. Philosopher and economist John Stuart Mill wrote an essay entitled Civilization in which he discussed the uncorrected influences of technological development on society, specifically the printing press, which he predicted would dilute the voice of intellectuals by making publishing accessible to the masses and commercialize the spread of knowledge. He called for national institutions to mitigate this impact.

Both were concerned with how technology could disrupt social norms and the labour market and wreak havoc on society as we know it. Both called for government oversight and regulation during a time of intense scientific progress.

In the 1800s, the desire to push boundaries won out over concerns, breeding a new class of innovators and entrepreneurs. Without this innovative spirit, Alexander Graham Bell would not have invented the telephone in 1876, and Joseph Swan would not have invented the lightbulb in 1878. They were the forerunners to the Bill Gates and Jeff Bezos of this world.

While technology advances at a rapid pace, human behaviour remains consistent. In other words, advances in technology will always divide opinions between those who view it as a new frontier to explore and those who consider it to be Frankensteins monster. We can heed the warnings when it comes to unregulated technological developments and still appreciate the opportunities ingenuity brings. This is especially pertinent when it comes to artificial intelligence.

See the original post:

How would a Victorian author write about generative AI? - Verdict

Is the current regulatory system equipped to deal with AI? – The Hindu

The growth of Artificial Intelligence (AI) technologies and their deployment has raised questions about privacy, monopolisation and job losses. In a discussion moderated by Prashanth Perumal J., Ajay Shah and Apar Gupta discuss concerns about the economic and privacy implications of AI as countries try to design regulations to prevent the possible misuse of AI by individuals and governments. Edited excerpts:

Should we fear AI? Is AI any different from other disruptive technologies?

Ajay Shah: Technological change improves aggregate productivity, and the output of society goes up as a result. People today are vastly better off than they were because of technology, whether it is of 200 years ago or 5,000 years ago. There is nothing special or different this time around with AI. This is just another round of machines being used to increase productivity.

Apar Gupta: I broadly echo Ajays views. And alongside that, I would say that in our popular culture, quite often we have people who think about AI as a killer robot that is, in terms of AI becoming autonomous. However, I think the primary risks which are emerging from AI happen to be the same risks which we have seen with other digital technologies, such as how political systems integrate those technologies. We must not forget that some AI-based systems are already operational and have been used for some time. For instance, AI is used today in facial recognition in airports in India and also by law-enforcement agencies. There needs to be a greater level of critical thought, study and understanding of the social and economic impact of any new technology.

Ajay Shah: If I may broaden this discussion slightly, there's a useful phrase called AGI, which stands for artificial general intelligence, which people are using to emphasise the uniqueness and capability of the human mind. The human mind has general intelligence. You could show me a problem that I have never seen before, and I would be able to think about it from scratch and be able to try to solve it, which is not something these machines know how to do. So, I feel there's a lot of loose talk around AI. ChatGPT is just one big glorified database of everything that has been written on the Internet. And it should not be mistaken for the genuine human capability to think, to invent, to have a consciousness, and to wake up with the urge to do something. I think the word AI is a bit of a marketing hype.

Do you think the current regulatory system is equipped enough to deal with the privacy and competition threats arising from AI?

Ajay Shah: One important question in the field of technology policy in India is about checks and balances. What kind of data should the government have about us? What kind of surveillance powers should the government have over us? What are the new kinds of harm that come about when governments use technologies in a certain way? There is also one big concern about the use of modern computer technology and the legibility of our lives, the way our lives are laid bare to the government.

Apar Gupta: Beyond the policy conversation, I think we also need laws for the deployment of AI-based systems to comply with Supreme Court requirements under the right to privacy judgment for specific use-cases such as facial recognition. A lot of police departments and a lot of State governments are using this technology and it comes with error rates that have very different manifestations. This may result in exclusion, harassment, etc., so there needs to be a level of restraint. We should start paying greater attention to the conversations happening in Europe around AI and the risk assessment approach (adopted by regulators in Europe and other foreign countries) as it may serve as an influential model for us.

Ajay Shah: Coming to competition, I am not that worried about the presence or absence of competition in this field. Because on a global scale, it appears that there are many players. Already we can see OpenAI and Microsoft collaborating on one line of attack; we can also see Facebook, which is now called Meta, building in this space; and of course, we have the giant and potentially the best in the game, Google. And there are at least five or 10 others. This is a nice reminder of the extent to which technical dynamism generates checks and balances of its own. For example, we have seen how ChatGPT has raised a new level of competitive dynamics around Google Search. One year ago, we would have said that the world has a problem because Google is the dominant vendor among search engines. And that was true for some time. Today, suddenly, it seems that this game is wide open all over again; it suddenly looks like the global market for search is more competitive than it used to be. And when it comes to the competition between Microsoft and Google on search, we in India are spectators. I don't see a whole lot of value that can be added in India, so I don't get excited about appropriating extraterritorial jurisdiction. When it comes to issues such as what the Indian police do with face recognition, nobody else is going to solve it for us. We should always remember India is a poor country where regulatory and state capacity is very limited. So, the work that is done here will generally be of low quality.

Apar Gupta: The tech landscape is dominated by Big Tech, and it's because they have a computing power advantage, a data advantage, and a geopolitical advantage. It is possible that at this time when AI is going to unleash the next level of technology innovation, the pre-existing firms, which may be Microsoft, Google, Meta, etc., may deepen their domination.

How do you see India handling AI vis-à-vis China's authoritarian use of AI?

Ajay Shah: In China, they have built a Chinese firewall and cut off users in China from the Internet. This is not that unlike what has started happening in India where many websites are being increasingly cut off from Indian users. The people connected with the ruling party in China get monopoly powers to build products that look like global products. They steal ideas and then design and make local versions in China, and somebody makes money out of that. That's broadly the Chinese approach and it makes many billions of dollars of market cap. But it also comes at the price of mediocrity and stagnation, because when you are just copying things, you are not at the frontier and you will not develop genuine scientific and technical knowledge. So far in India, there is decent political support for globalisation, integration into the world economy and full participation by foreign companies in India. Economic nationalism, where somehow the government is supposed to cut off foreign companies from operating in India, is not yet a dominant impulse here. So, I think that there is fundamental superiority in the Indian way, but I recognise that there is a certain percentage of India that would like the China model.

Apar Gupta: I would just like to caution people who are taken in by the attractiveness of the China model it relies on a form of political control, which itself is completely incompatible in India.

How do you see Zoho Corporation CEO Sridhar Vembu's comments that AI would completely replace all existing jobs and that demand for goods would drop as people lose their jobs?

Ajay Shah: As a card-carrying economist, I would just say that we should always focus on the word productivity. It's good for society when human beings produce more output per unit hour as that makes us more prosperous. People who lose jobs will see job opportunities multiplying in other areas. My favourite story is from a newspaper column written by Ila Patnaik. There used to be over one million STD-ISD booths in India, each of which employed one or two people. So there were 1-2 million jobs of operating an STD-ISD booth in India. And then mobile phones came and there was great hand-wringing that millions of people would lose their jobs. In the end, the productivity of the country went up. So I don't worry so much about the reallocation of jobs. The labour market does this every day: prices move in the labour market, and then people start choosing what kind of jobs they want to do.

Ajay Shah is Research Professor of Business at O.P. Jindal Global University, Sonipat; Apar Gupta is executive director of the Internet Freedom Foundation

Go here to see the original:

Is the current regulatory system equipped to deal with AI? - The Hindu