Daily Archives: February 22, 2024

The Potential Impact of ‘Disease X’ on Federalism in the U.S. – Medriva

Posted: February 22, 2024 at 8:00 pm

When the World Health Organization (WHO) introduced the term Disease X into its blueprint of priority diseases in February 2018, it was prophetically acknowledging the potential of an unknown pathogen to cause a serious international epidemic. Fast forward to today: COVID-19, caused by a previously unknown pathogen, fits the description of the first Disease X. This situation has sparked an intriguing discourse on the potential impact of a hypothetical Disease X on the concept of federalism in the United States. As we explore this narrative, we will delve into the challenges a nationwide health crisis poses to the federalist system, with particular emphasis on state autonomy, public health policy, and the role of the federal government.

The concept of Disease X represents the understanding that a severe global epidemic could be triggered by an unknown pathogen. This idea has been cemented by the COVID-19 pandemic. Programs like the Coalition for Epidemic Preparedness Innovations (CEPI), with its $3.5 billion five-year plan, and the U.S. National Institute of Allergy and Infectious Diseases (NIAID) Pandemic Preparedness Plan are aimed at shortening vaccine development timelines and preparing for potential pandemics. The Disease X Act of 2023 further expands priorities to include viral threats that could cause a pandemic.

The federalist design of US laws is a considerable impediment to implementing nationwide community mitigation measures for pandemics, according to a Stanford Law analysis. This structure presents a significant challenge during a nationwide health crisis. State autonomy and the division of power between state and federal governments can potentially hinder the coordination of a unified response to Disease X. This challenge is further complicated by legal reforms adopted by states that imposed substantive and procedural restrictions on public health authority, such as prohibiting vaccine and mask mandates and restricting limits on religious gatherings.

The role of the federal government during a major health emergency is crucial. The U.S. CDC's vaccine advisory committee, for instance, develops recommendations for U.S. immunizations. However, once those recommendations are published in the CDC's MMWR, their applicability largely depends on the states. This dependency on state decisions underscores the delicate balance between state and federal authority during a health crisis. The impact of systemic racism, economic inequality, mass incarceration, and labor market inequalities on COVID-19 disparities further complicates this balance.

As we contemplate the future, the adoption of crisis communication strategies by local governments during pandemics is key. Factors such as school and business closures, efficacy beliefs, and community vulnerability significantly shape these efforts. Furthermore, funding from measures like the CARES Act can enhance local governments' capacity to implement these strategies.

In conclusion, the potential impact of a hypothetical Disease X on the federalist system in the U.S. poses thought-provoking questions about state autonomy, public health policy, and the role of the federal government. While our current federalist system presents challenges, it also provides opportunities for adaptive strategies that can help the nation better prepare for future health emergencies.


Posted in Federalism

‘People’s Charter’ Puts Federalism at The Heart of Myanmar’s Democratic Future – The Irrawaddy

Posted: at 8:00 pm

The People's Representatives Committee for Federalism (PRCF) published its constitution for a federal democracy on Feb. 12.

The committee comprises 12 political parties: the Shan Nationalities League for Democracy, Arakan League for Democracy, Karen National Party, Zomi Congress for Democracy, Democratic Party for a New Society, United Nationalities Democracy Party, Danu Nationalities Democracy Party, Daingnet National Development Party, Mro National Democracy Party, Karen National Party, Shan State Kokang Democratic Party and Mon Affairs Association.

Previously known as the PRF, the committee changed its name to PRCF in March 2021.

Sai Kyaw Nyunt, a joint secretary of the Shan Nationalities League for Democracy, recently spoke with The Irrawaddy about the objectives of the constitution and its most important features.

What is the intention of publishing a constitution?

It has been nearly two years since we drafted the constitution in 2022. So, we decided that it was time to publish it.

What is the PRCF?

The PRCF was formed after the 2021 coup. It comprises primarily members of the United Nationalities Alliance and their partners.

The PRCF mentioned three main tasks in its statement about publishing its constitution. Can you elaborate on them?

We can't accept any form of dictatorship, either military dictatorship or civilian dictatorship. The conflict in our country since independence is deeply connected to the constitution. The 1974 constitution did not meet the wishes of the people and the same is true of the 2008 constitution.

In our view, federalism is the best [form of government] for this highly diverse and multi-ethnic country. But federalism alone is not enough. There must also be democracy. So, there is a need for a federal, democratic constitution. But again, a constitution alone is not enough. Peaceful co-existence is also critically important for us to come together to form and maintain a union.

How do you see the current political landscape in Myanmar?

Myanmar is at war now. We are politicians, so we don't know much about military affairs. Military solutions alone can't solve problems in a country. Space for politics is necessary. It is more powerful than military action in terms of fulfilling the wishes of the people. We want things handled peacefully.

So, your political parties prefer non-violence?

We don't want to say which is right and which is wrong. I am only talking about our tendency. By political means, I mean you don't necessarily have to establish a party and contest the election. You may oppose the voting and release statements about your views. These are all political means. Dialogue is also a political means. This is what we believe.

What drove the PRCF to design a constitution?

Eleven of the 12 organizations in the PRCF are political parties. We believe certain conditions must be met for our country to have greater peace and stability. So, we have designed the constitution, outlining the conditions that we think are necessary to have peace and stability. Those parties have won votes and support from people in their respective constituencies. So, we designed the constitution to convey our idea about an ideal union.

What are the salient points about your constitution?

We refer to four documents: the fundamental principles of the PRCF, the fundamental principles in a federal democracy charter, the constitution from the Federal Constitution Drafting and Coordinating Committee, and the constitution from the UNA and allies. Our constitution touches upon new topics, such as financial matters, relations between government agencies, and administration and public services.

So, is it fair to say the constitution drafted by the PRCF is one that reflects the federal democracy charter declared by anti-regime political forces?

We can't say so. Many organizations, including ethnic armed organizations, were involved in designing the federal democracy charter. Our constitution was drafted solely by PRCF members, but it can be used as a draft for all the stakeholders to discuss in the future.

Will you accept recommendations, if there are any, to your constitution?

We are willing to accept any recommendation that does not go against our principles.

The military regime upholds the 2008 Constitution. What will you say if they say they don't accept your constitution?

We represent people to a certain extent, and we live among the people. So, the constitution represents our view of what this country should be like. Everyone is aware that one group or organization representing all the others was not successful. We need to try to write a constitution that is acceptable to all by negotiating between all stakeholders.

How did stakeholders in the country respond to your constitution?

No one has yet strongly responded to our constitution. It was only published recently, and perhaps stakeholders are still studying it. Our constitution is largely based on documents of ethnic armed organizations, ethnic political organizations and ethnic Bamar organizations. So, there won't be much difference between ours and theirs.

There might be differences in the way we operate, but I don't think there will be much disagreement regarding policies. The policies of the regime and the military, however, can be markedly different from ours. In the future, we will have to accept what is best for the people.

What is the PRCF's next step?

We established political parties to do our share for the country. So, we will continue to work in our way to restore peace and build a country that all citizens want to see.


Posted in Federalism

Siddaramaiah vs Modi: The ‘cess-y’ mess in fiscal federalism – Deccan Herald

Posted: at 8:00 pm

Something unique and rare in Indian political history happened recently. Karnataka Chief Minister Siddaramaiah, Deputy Chief Minister D K Shivakumar, and many senior cabinet ministers went to Delhi and sat in a dharna for a 'My State My Tax' protest. They were joined by ministers and leaders from the other southern states of Kerala and Tamil Nadu and supported in spirit by Telangana. It was an extraordinary development where democratically elected heads of India's states were forced to go to Delhi to demand their share of tax revenues and taxation rights. 'No taxation without representation' was the slogan of the American independence struggle against the British. Southern states in India are now protesting against 'representation without taxation'!

With the southern states' protests and the Supreme Court's recent ruling, the late BJP finance minister Arun Jaitley's contributions of the phrases 'cooperative federalism' and 'electoral bonds' to India's political lexicon have been rendered dubious and hollow by the Modi government.

Elected governments across the world rely on direct and indirect taxes for revenues to implement schemes and fulfil their electoral promises. As per the Constitution, state governments in India do not have powers to levy direct income and corporate taxes, unlike in other federal nations. After GST, state governments lost their exclusive powers for indirect taxes, too. They are only left with powers to tax sin goods, fuel, property, electricity, and agriculture, which constitute a small slice of overall tax revenues. To put it simply, post-GST, democratically elected state governments in India are forced to be almost entirely dependent on the Union government for resources.

To make matters worse, the Modi government, true to its governance style, has politicised India's federalism through the duplicitous means of cesses and surcharges to garner a greater share of tax revenues for itself and minimise the states' share.

When a Karnataka resident buys a certain good or service and pays Rs 100 as central taxes on it, the Union government keeps Rs 58 of it and shares Rs 42 with the states. But for the same transaction, if Rs 100 is charged as cess by the Union government, then it gets to keep all of it and not have to share it with the states. This is a quirk and a relic of India's historical taxation laws. So, a cooperative federalism-minded Union government will try to minimise cess and maximise tax revenues which can be shared with state governments for their governance. Unsurprisingly, the Modi government did the exact opposite in its decade-long tenure.
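The arithmetic above can be sketched in a few lines of Python. This is an illustrative toy assuming the flat 42% devolution rate implied by the article's Rs 58/Rs 42 split; the function name and figures are hypothetical, not official Finance Commission arithmetic.

```python
# Toy model of the divisible pool vs. cess, per the article's figures.
# Assumes a flat 42% devolution rate; real devolution follows Finance
# Commission formulas, so treat this as a sketch only.

DEVOLUTION_RATE = 0.42  # share of divisible-pool taxes passed to states

def split_revenue(shareable_tax: float, cess: float) -> dict:
    """Split central collections into Union and state shares.

    Cesses and surcharges sit outside the divisible pool, so the
    Union keeps all of them.
    """
    states_share = round(shareable_tax * DEVOLUTION_RATE, 2)
    union_share = shareable_tax - states_share + cess
    return {"union": union_share, "states": states_share}

# Rs 100 collected as ordinary central tax vs. Rs 100 collected as cess:
print(split_revenue(100, 0))   # Union keeps Rs 58, states get Rs 42
print(split_revenue(0, 100))   # Union keeps all Rs 100
```

Shifting collections from the first column to the second is exactly the lever the article describes: the total collected is unchanged, but the states' share shrinks to nothing.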

Cesses and surcharges have nearly doubled as a share of revenues from 12% to 20% of overall tax revenues during Modi's tenure. In 2014, overall tax revenues collected by both the Union and state governments was Rs 18 lakh crore, which rose to about Rs 46 lakh crore by 2023. But a whopping 10% of this increase came from cesses and surcharges, depriving state governments of nearly Rs 3 lakh crore. This is a huge amount, and hence state governments are crying foul. This has impacted every state government but because of the extreme high-command culture and imposition of their will on BJP-ruled states, BJP Chief Ministers can only grumble in private rather than join Karnataka, Kerala, Tamil Nadu, Bengal and Telangana in an overt protest. The Modi government's cess mess has stained India's fabric of federalism.

This deceit by the Modi government is what has angered the high tax-contributing southern states and prompted them to question the transfer of their tax revenues to poorer northern states. The average person in Karnataka or Tamil Nadu pays Rs 20,000 annually in taxes while the average person in Madhya Pradesh or Uttar Pradesh pays just Rs 4,500. But the average person in Bihar, UP or Madhya Pradesh gets back Rs 260 for every Rs 100 they pay in taxes, while the average Kannadiga gets back only Rs 40. Over the course of Modi's tenure, this gap has only widened, and little progress has been made in bridging either the fiscal or the development gap between the richer and poorer states. Now, the contributing states are questioning the need for such an extreme skew in the distribution of tax revenues.

The very idea of India as a Union of states is now precarious. There is a complete breakdown of trust and trustworthiness between the Union and states. This growing banyan tree of distrust between states was sown by the duplicitous fiscal approach of the Modi government, watered by the imposition of a one nation one policy framework, branched by the extreme politicisation of institutions such as ED, CBI, Income Tax, Election Commission, and tended by governor politics. It is no secret that most states harbour deep disenchantment with the Modi government's anti-federal style of governance. It so happens that the more developed southern states, which are not ruled by the BJP, are able to express their resentment more freely than their Maharashtra, Haryana and Gujarat counterparts.

Finance Minister Nirmala Sitharaman exemplifies this disdain for states, especially those that are governed by non-BJP parties, with her scornful public rebukes and shallow pomposity, evident in the white paper she released in parliament recently. It claimed that India's economy and infrastructure have grown in the last decade, all due to the untiring efforts and the inordinate skills of Narendra Modi. It was like parents celebrating the growth in age of their child from 5 to 15 after a decade. Even if cricketer Ravindra Jadeja or actor Akshay Kumar had been Prime Minister in this period, GDP would have grown, more toilets and houses constructed, more airports, ports and highways built, and India would have been the Chair of the G-20. The real question is not whether the child has grown in age, which is largely inevitable, but how tall, healthy, and happy is the child for her age. Ask the states!

(Published 17 February 2024, 20:21 IST)


Posted in Federalism

Keyboard search warrants and the Fourth Amendment | Brookings – Brookings Institution

Posted: at 7:59 pm

Does a search warrant ordering Google to give law enforcement information regarding internet searches containing specific keywords made during a particular window of time violate the Fourth Amendment? This question was before the Colorado Supreme Court in 2023 and is now before the Pennsylvania Supreme Court.

The Fourth Amendment protects against unreasonable searches and seizures by the government. The government generally needs a warrant to perform a search that infringes a reasonable expectation of privacy.

As the Supreme Court explained in a 1981 decision, the Fourth Amendment was intended partly to protect against the abuses of the general warrants that had occurred under English rule prior to 1776. A general warrant specified only an offense, typically seditious libel, and left to the discretion of the executing officials the decision as to which persons should be arrested and which places should be searched.

To guard against this sort of misuse of government investigative power, the Fourth Amendment provides that search warrants can only be issued upon probable cause and that they must describe with particularity "the place to be searched, and the persons or things to be seized." Probable cause and particularity in light of 21st century investigative technologies, such as keyword searches, raise novel and important questions that courts have only recently begun to consider.

The Colorado case, Colorado v. Seymour, arose from an investigation of a 2020 arson in which five people were killed. Police in Denver obtained a warrant requiring Google to provide the internet protocol (IP) addresses for devices, as well as a Google-assigned device identifier, for any Google accounts used to conduct searches for the homes address in the 15 days preceding the fire.

Given an IP address, it is often (though not always) straightforward to identify the specific electronic device involved and subsequently the person who was using that device. Using the information obtained pursuant to the warrant, police identified and charged a suspect with crimes including murder, arson, and burglary.

In an October 2023 ruling, the Colorado Supreme Court cast doubt on whether the Colorado suspect had a reasonable expectation of privacy under the Fourth Amendment for internet searches. However, the court found that the suspect had a reasonable expectation of privacy in his Google search history under article II, section 7 of the Colorado Constitution. While that portion of the Colorado Constitution has text very similar to the Fourth Amendment, the court cited Colorado case law stating that we are not bound by the United States Supreme Courts interpretation of the Fourth Amendment when determining the scope of state constitutional protections.

Given that the search implicated a reasonable expectation of privacy, the next question is whether the warrant met the particularity and probable cause requirements of the Fourth Amendment. With respect to particularity, the court conclude[d] that the warrant at issue adequately particularized the place to be searched and the things to be seized.

The court sidestepped the question of probable cause, writing that because resolution of this issue doesn't affect the outcome, "we simply assume without deciding that the warrant lacked probable cause and was thus constitutionally defective."

Often, the exclusionary rule blocks federal and state prosecutors from using evidence collected in a manner that violates the Fourth Amendment. But there is an exception: If a court finds that law enforcement acted in good faith, the evidence can be presented at trial despite the constitutional violation. Invoking this good faith exception, the Colorado Supreme Court declined to suppress the evidence obtained using the keyword search warrant, concluding that law enforcement obtained and executed the warrant in good faith.

In the Pennsylvania case, Commonwealth v. Kurtz, investigators pursuing a rape investigation used a keyword search warrant requiring Google to identify Google searches of the address of the crime scene in the hours preceding the crime. In response, Google provided an IP address from which a Google search of the address had been conducted in the relevant time frame. This information was among the evidence used to identify and then to convict the suspect in an October 2020 trial. The suspect then appealed to the Superior Court of Pennsylvania.

In April 2023 the appeals court considered and rejected the suspect's assertion that he had a reasonable expectation of privacy in his internet search history:

We conclude that Appellant lacked a reasonable expectation of privacy concerning his Google searches of [the crime scene] address and his IP address. By typing in his search query into the search engine and pressing enter, Appellant affirmatively turned over the contents of his search to Google, a third party, and voluntarily relinquished his privacy interest in the search.

The appeals court then turned to probable cause, writing that even if Appellant did have a constitutionally cognizable privacy interest in his searches of [the] address, we would also find that the Google warrant was supported by probable cause. The appeals court did not address the question of particularity.

The suspect then appealed to the Pennsylvania Supreme Court, which in October 2023 agreed to consider 1) whether there is a reasonable expectation of privacy in internet search queries and the IP address from which those queries are sent, and 2) whether the search warrant met the probable cause requirement. Notably, the good faith exception applied in the Colorado case is not recognized in Pennsylvania state courts in relation to protections in the state constitution from government searches. Thus, if the Pennsylvania Supreme Court determines that the warrant was unconstitutional due to lack of probable cause, the associated evidence will be suppressed.

While Colorado and Pennsylvania appear to be the first states where the state's highest court is considering the constitutionality of keyword search warrants, the power of this investigative technique guarantees that this issue will reach other state supreme courts as well. In addition, it will increasingly arise in federal courts.

At root is the question of whether keyword search warrants are general warrants, and thus by definition unconstitutional. In an amicus brief filed in January with the Pennsylvania Supreme Court, the Electronic Frontier Foundation argues that the answer is yes:

A warrant purporting to authorize a reverse keyword search is a digital analog to a warrant that authorizes officers to search every house in an area of a town, simply on the chance that they might find written material connected to a crime. Like the general warrants and writs of assistance used in England and colonial America, this warrant's lack of particularity and overbreadth invites the police to treat it as an excuse to conduct an unconstitutional general search.

Government investigators using keyword search warrants will of course take a different view. They will argue, for instance, that the specific nature of the keywords in the warrant, plus the fact that it is limited to searches conducted in a limited window of time, means that it satisfies the particularity requirement. Of course, identifying the IP addresses that did conduct a Google search using specific keywords also requires determining that a vastly larger number of people did not conduct such a search. Investigators will have to explain why the process of making those negative determinations doesn't render the warrant unconstitutional. Investigators will also argue that the likelihood that perpetrators performed an internet search of the crime scene is high enough to satisfy probable cause.

Eventually, the U.S. Supreme Court may hear a keyword search warrant case, and if so, the resulting ruling could provide important consistency and clarity. Until then, there will likely be a range of outcomes from the different courts that engage with this important set of constitutionality questions.


Posted in Fourth Amendment

Just Published: "Terms of Service and Fourth Amendment Rights" – Reason

Posted: at 7:59 pm

In the last year or two, the U.S. Department of Justice has been arguing in federal courts of appeals that Terms of Service can narrow or eliminate Fourth Amendment rights in online accounts. If the government can win on this issue, it will largely defeat any claims to Fourth Amendment protection online. But as I argue in my just-published article, Terms of Service and Fourth Amendment Rights, 172 U. Pa. L. Rev. 287 (2024), these arguments are mistaken. Here's the abstract:

Almost everything you do on the Internet is governed by Terms of Service. The language in Terms of Service typically gives Internet providers broad rights to address potential account misuse. But do these Terms alter Fourth Amendment rights, either diminishing or even eliminating constitutional rights in Internet accounts? In the last five years, many courts have ruled that they do. These courts treat Terms of Service like a rights contract: by agreeing to use an Internet account subject to broad Terms of Service, you give up your Fourth Amendment rights.

This Article argues that the courts are wrong. Terms of Service have little or no effect on Fourth Amendment rights. Fourth Amendment rights are rights against the government, not private parties. Terms of Service can define relationships between private parties, but private contracts cannot define Fourth Amendment rights. This is true across the range of Fourth Amendment doctrines, including the "reasonable expectation of privacy" test, consent, abandonment, third-party consent, and the private search doctrine. Courts that have linked Terms of Service and Fourth Amendment rights are mistaken, and their reasoning should be rejected.


Posted in Fourth Amendment

What is AI? A-to-Z Glossary of Essential AI Terms in 2024 – Tech.co

Posted: at 7:59 pm

A for Artificial General Intelligence (AGI)

AGI is a theoretical type of AI that exhibits human-like intelligence and is generally considered to be as smart or smarter than humans. While the term's origins can be traced back to 1997, the concept of AGI has entered the mainstream in recent years as AI developers continue to push the frontier of the technology forward.

For instance, in November 2023 OpenAI revealed it was working on a new AI superintelligence model codenamed Project Q*, which could bring the company closer to realizing AGI. It should be emphasized, however, that AGI is still a hypothetical concept, and many experts are confident the type of AI will not be developed anytime soon, if ever.

Big data refers to large, high-volume datasets that traditional data processing methods struggle to manage. Big data and AI go hand in hand. The gigantic pool of raw information is vital for AI decision-making, while sophisticated AI algorithms can analyze patterns in datasets and identify valuable insights. When working together, they help users make more insightful revelations, much faster than through traditional methods.

AI bias occurs when an algorithm produces results that are systematically prejudiced against certain types of people. Unfortunately, AI systems have consistently been shown to reflect biases within society by upholding harmful beliefs and encouraging negative stereotypes relating to race, gender, and national identity.

These biases were emphasized in a now-deleted article by Buzzfeed, which displayed AI-generated Barbies from all over the world. The images supported a variety of racial stereotypes, by featuring oversexualized Caribbean dolls, white-washed Barbies from the global south, and Asian dolls with inaccurate cultural outfits.

You've probably heard of this one, but it's still important to mention as no AI glossary can be considered complete without a nod to the generative AI chatbot that changed the game when it launched back in November 2022.

In short, ChatGPT is the product that has shifted the AI debate from the server room into the living room. It has done for artificial intelligence what the iPhone did for the mobile phone, bringing the technology into the public eye by virtue of its widely accessible model.

As we recently revealed in our Impact of Technology in the Workplace report, ChatGPT is easily the most widely used AI tool by businesses and may even be the key to unlocking the 4-day workweek.

Its influence may fade over time, but the world of AI will always be viewed through the prism of before and after ChatGPT's birth.

Standing for 'computing power', compute refers to the computational resources required to train AI models to perform tasks like data processing and making predictions. Typically, the more computing power used to train an LLM, the better it can perform.

Computing power requires a lot of energy consumption, however, which is sparking concern among environmental activists. For instance, research has revealed that it takes 1 GWh of energy to power responses for ChatGPT daily, which is enough energy to power 30,000 US households.

Diffusion models represent a new tier of machine learning, capable of generating superior AI-generated images. These models work by adding noise to a dataset before learning to reverse this process.

By understanding the concept of abstraction behind an image, and creating content in a new way, diffusion models create images that are more sharpened and refined than those made by traditional AI models, and are currently being deployed in a range of AI image tools like Dall-E and Stable Diffusion.
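The "add noise, then learn to reverse it" idea can be illustrated with the forward (noising) half of the process. Below is a minimal NumPy sketch under a made-up linear beta schedule; the names, sizes, and schedule are illustrative assumptions, not the implementation of any production model like Dall-E or Stable Diffusion.

```python
# Forward "noising" step of a diffusion process: blend an image with
# Gaussian noise according to a schedule. The reverse (denoising) half,
# which the trained model learns, is omitted here.
import numpy as np

def add_noise(x0: np.ndarray, t: int, betas: np.ndarray) -> np.ndarray:
    """Sample the noisy x_t directly from the clean x_0 in closed form."""
    alpha_bar = np.prod(1.0 - betas[: t + 1])   # cumulative signal kept by step t
    eps = np.random.default_rng(0).normal(size=x0.shape)  # the added noise
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

betas = np.linspace(1e-4, 0.02, 1000)  # toy linear noise schedule
image = np.ones((8, 8))                # stand-in for a training image

slightly_noisy = add_noise(image, t=10, betas=betas)
nearly_pure_noise = add_noise(image, t=999, betas=betas)  # almost no signal left
```

By the final step almost none of the original image survives; a diffusion model is trained to run this blend in reverse, recovering a sharp image starting from pure noise.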

Emergent behavior takes place when an AI model produces an unanticipated response outside of its creator's intention. Much of AI is so complex that its decision-making processes still can't be understood by humans, even its creators. With AI models as prominent as GPT-4 recently exhibiting emergent capabilities, AI researchers are making an increased effort to understand the how and the why behind AI models.

Facial recognition technology relies on AI, machine learning algorithms, and computer vision techniques to process stills and videos of human faces. Since AI can identify intricate facial details more efficiently than manual methods, most facial recognition systems use an artificial neural network called convolutional neural network (CNN) to enhance its accuracy.

Generative AI is a catch-all term that describes any type of AI that produces original content like text, images, and audio clips. Generative AI uses information from LLMs and other AI models to create outputs, and powers responses made by chatbots like ChatGPT, Gemini, and Grok.

Chatbots don't always produce correct or sane responses. Oftentimes, AI models generate incorrect information but present it as facts. This is called AI hallucination. Hallucinations take place when the AI model makes predictions based on the dataset it was trained on, instead of retrieving actual facts.

Most AI hallucinations are minor and may even be overlooked by the average user. However, sometimes hallucinations can have dangerous consequences, as false responses produced by ChatGPT have previously been exploited by scammers to trick developers into downloading malicious code.

Bearing similarities to AGI, the intelligence explosion is a hypothetical scenario where AI development becomes uncontrollable and poses a threat to humanity as a result. Also referred to as the singularity, the term represents an existential threat felt by many about the rapid and largely unchecked advancement of the technology.

Jailbreaking is a form of hacking whose goal is to bypass the ethical safeguards of AI models. When certain prompts are entered into a chatbot, users can make it respond free of the restrictions its developers put in place.

Interestingly, a recent study by Brown University found that using languages like Hmong, Zulu, and Scottish Gaelic was an effective way to jailbreak ChatGPT.

As AI continues to automate manual processes previously performed by humans, the technology is sparking widespread job insecurity among workers. While most workers shouldn't have anything to worry about, our Tech.co Impact of Technology on the Workplace report recently found that supply chain optimization, legal research, and financial analysis roles are the most likely to be replaced by AI in 2024.

LLMs are a specialist type of AI model that harnesses natural language processing (NLP) to understand and produce natural, humanlike responses. In simple terms, LLMs make tools like ChatGPT sound less like a bot and more like you and me.

Unlike generative AI, LLMs have been designed specifically to handle language-related tasks. Popular examples of LLMs you may have heard of include GPT-4, PaLM 2, and Gemini.

Machine learning is a field of artificial intelligence that allows systems to learn and improve from experience, in a similar way to humans. Specifically, it focuses on the use of data and algorithms in AI, and aims to improve the way AI models can autonomously learn and make decisions in real-world environments.

While the term is often used interchangeably with AI, machine learning is part of the wider AI umbrella, and requires minimal human intervention.
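To make "learning from experience" concrete, here is a minimal Python sketch: a two-parameter model fits a line to example data by gradient descent, improving a little with each pass over the data. The data, learning rate, and iteration count are illustrative assumptions, not taken from any real system.

```python
# Examples generated by the hidden rule y = 2x + 1; the model must recover it.
data = [(x, 2 * x + 1) for x in range(10)]

w, b = 0.0, 0.0          # model parameters, initially untrained
lr = 0.01                # learning rate: how big each correction is

for _ in range(2000):    # repeated passes over the data improve the fit
    for x, y in data:
        err = (w * x + b) - y     # how wrong the current prediction is
        w -= lr * err * x         # nudge each parameter against its error
        b -= lr * err

print(round(w, 2), round(b, 2))   # converges toward 2.0 and 1.0
```

No rule was programmed in; the parameters drift toward the pattern hidden in the examples, which is the essence of learning from data.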

A neural network (NN) is a machine learning model designed to mimic the structure and function of the human brain. An artificial neural network is composed of multiple layers and consists of units called artificial neurons, which loosely imitate the neurons found in the brain.

Also referred to as deep neural networks when they have many layers, NNs have a variety of useful applications and can be used to improve image recognition, predictive modeling, and natural language processing.
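The structure described above can be sketched in a few lines of Python: a forward pass through a tiny network with two inputs, one hidden layer of two artificial neurons, and one output. The weights are hand-picked illustrative values, not trained ones.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs passed through a squashing activation (sigmoid),
    # loosely imitating a biological neuron firing.
    s = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-s))

x = [0.5, -0.2]                           # input signal
h1 = neuron(x, [0.8, -0.4], 0.1)          # hidden layer, unit 1
h2 = neuron(x, [-0.3, 0.9], 0.0)          # hidden layer, unit 2
out = neuron([h1, h2], [1.2, -0.7], 0.05) # output layer
print(round(out, 3))                      # a value between 0 and 1
```

Real networks stack many such layers with thousands or millions of units, but each one is doing exactly this weighted-sum-and-squash step.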

Open-source AI refers to AI technology with freely available source code. The ultimate aim of open-source AI is to create a culture of collaboration and transparency within the artificial intelligence community, giving companies and developers greater freedom to innovate with the technology.

Many currently available open-source AI products are variations of existing applications, and common product categories include chatbots, machine translation tools, and large language models.

If you're somehow still unfamiliar with tools like Gemini and ChatGPT, a prompt is an instruction or query you enter into chatbots to gain a targeted response. They can exist as stand-alone commands or can be the starting point for longer conversations with AI models.

AI prompts can take any form the user desires, but we found that longer, more detailed input generates the best responses. Using emotive language is another way to elicit high-quality answers, according to a recent study by Microsoft.


In AI, parameters are the values that determine the behavior of a machine learning model. Each parameter acts as a variable, shaping how the model converts an input into an output. Parameter count is one of the most common ways to gauge an AI model's capacity, and generally speaking, the more parameters a model has, the better it can capture complex data patterns and produce accurate responses.
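As a back-of-the-envelope sketch of why parameter counts grow so fast, consider stacking dense layers: a layer with n inputs and m outputs holds n*m weights plus m biases. The layer widths below are hypothetical, not those of any real model.

```python
layers = [768, 3072, 768]   # hypothetical widths of three consecutive layers

params = 0
for n, m in zip(layers, layers[1:]):
    params += n * m + m     # weights plus biases for one dense layer

print(f"{params:,}")        # 4,722,432 parameters for just two small layers
```

Scaling the widths up and stacking dozens of such layers is how models reach billions of parameters.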

Quantum AI is the use of quantum computing to run machine learning algorithms. Whereas classical computing processes information as 1s and 0s, quantum computing uses units called qubits, which can represent 1 and 0 at the same time. Theoretically, this could speed up computation dramatically.

In the case of quantum AI, the use of qubits could potentially help produce much more powerful AI models, although many experts believe we're still a long way from achieving this.
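The qubit idea above can be made concrete with a few lines of Python: a single qubit in equal superposition, with measurement probabilities given by the squared amplitudes (the Born rule). This is a textbook sketch, not tied to any particular quantum AI system.

```python
import math

# A qubit's state holds an amplitude for 0 and an amplitude for 1 at once.
# Equal amplitudes of 1/sqrt(2) give the standard equal superposition.
amp0 = amp1 = 1 / math.sqrt(2)

# Born rule: measuring the qubit yields 0 or 1 with probability amplitude**2.
p0, p1 = amp0 ** 2, amp1 ** 2
print(round(p0, 2), round(p1, 2))   # 0.5 0.5 - both outcomes equally likely
```

A classical bit would have to be definitely 0 or definitely 1; the qubit carries both possibilities until measured, which is what quantum algorithms exploit.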

Red teaming is a structured testing approach that aims to find flaws and vulnerabilities in AI models. The term comes from cybersecurity, where it refers to an ethical hacking practice in which actors simulate a real cyber attack to identify weak spots in a system and improve its defenses in the long run.

In the case of AI red teaming, no actual hacking attempt may take place, and red teamers may instead try to test the security of the system by prompting it in a certain way that bypasses any guardrails developers have placed on it, in a similar way to jailbreaking.

There are two basic approaches to AI learning: supervised learning and unsupervised learning. Also known as supervised machine learning, supervised learning is a training method in which algorithms learn from input data that has been labeled with the desired output. The model is then evaluated on how accurately it performs on unlabeled data, and the process strives to improve the overall accuracy of AI systems as a whole.
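A minimal sketch of the supervised setup in Python: a 1-nearest-neighbour classifier learns from labeled examples and then predicts labels for unseen points. The toy data is an illustrative assumption.

```python
# Labeled training examples: (point, label) pairs supplied by a human.
labeled = [((1.0, 1.0), "red"), ((1.2, 0.8), "red"),
           ((4.0, 4.2), "blue"), ((3.8, 4.0), "blue")]

def classify(point):
    # Predict the label of the closest labeled training example.
    def dist(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(labeled, key=lambda ex: dist(ex[0], point))[1]

print(classify((1.1, 0.9)))  # red  - lands near the red cluster
print(classify((4.1, 3.9)))  # blue - lands near the blue cluster
```

The labels are the supervision: accuracy is judged by how well predictions on unseen points match the labeling scheme the model was trained on.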

In simple terms, training data is the vast input dataset used to train a machine learning model. It teaches prediction algorithms how to extract features relevant to specific user goals, and it's the initial set of data that can then be complemented by subsequent data called test sets.

It is fundamental to the way AI and machine learning work, and without training data, AI models wouldn't be able to learn, extract useful information, and make predictions, or put simply, exist.

Contrary to supervised learning, unsupervised learning is a type of machine learning in which models are given unlabeled, unstructured data and left to discover patterns and insights without any predefined framework.

Unsupervised learning models are used for three main tasks: clustering, a data mining technique for grouping unlabeled data; association, a learning method that uses different rules to find relationships between variables; and dimensionality reduction, a technique deployed when the number of features in a dataset is too high.
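The first of these tasks can be sketched in a dozen lines of Python: k-means with k=2 discovers two groups in unlabeled 1-D points, with no labels supplied at any stage. The data and iteration count are illustrative assumptions.

```python
points = [1.0, 1.2, 0.8, 8.0, 8.3, 7.9]   # unlabeled observations
centers = [points[0], points[3]]          # naive initial guesses

for _ in range(10):
    groups = [[], []]
    for p in points:
        # Assign each point to its nearest center (bool True indexes as 1).
        groups[abs(p - centers[0]) > abs(p - centers[1])].append(p)
    # Move each center to the mean of the points assigned to it.
    centers = [sum(g) / len(g) for g in groups]

print([round(c, 1) for c in centers])     # [1.0, 8.1] - two discovered groups
```

The algorithm was never told there were "small" and "large" points; the grouping emerges from the data alone, which is what makes the learning unsupervised.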

X-risk stands for existential risk. More specifically, the term relates to the existential risk posed by the rapid development of AI. People warning about a potential X-risk event believe that the progress being made in the field of AI may result in human extinction or global catastrophe if left unchecked.

X-risk isn't a fringe belief, either. In fact, in 2023 several tech leaders, including Demis Hassabis, CEO of DeepMind, Ilya Sutskever, co-founder and chief scientist at OpenAI, and Bill Gates, signed a letter warning AI developers about the existential threat posed by AI.

Zero-shot learning is a deep learning problem setup where an AI model is tasked with completing a task without receiving any training examples. In machine learning, zero-shot learning is used to build models for classes that have not yet been labeled for training.

The two stages of zero-shot learning are the training stage, where knowledge is captured, and the inference stage, where that knowledge is used to classify examples into a new set of classes.
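One classic way to classify into classes never seen in training, attribute-based zero-shot classification, can be sketched in Python: each unseen class is described by a vector of attributes, and an example is assigned to the class whose description it matches best. The classes and attributes below are illustrative assumptions.

```python
# Attribute order: [has_stripes, has_hooves, is_large].
# These class descriptions stand in for knowledge captured during training.
unseen_classes = {
    "zebra":  [1, 1, 1],
    "tiger":  [1, 0, 1],
    "rabbit": [0, 0, 0],
}

def predict_class(attrs):
    # Score each unseen class by how many attributes agree with the example.
    matches = lambda desc: sum(a == d for a, d in zip(attrs, desc))
    return max(unseen_classes, key=lambda c: matches(unseen_classes[c]))

print(predict_class([1, 1, 1]))  # zebra - striped, hooved, and large
```

No training example of any of these classes was ever shown; the attribute descriptions bridge the gap between what was learned and the new labels.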

Here is the original post:

What is AI? A-to-Z Glossary of Essential AI Terms in 2024 - Tech.co

Posted in Artificial General Intelligence | Comments Off on What is AI? A-to-Z Glossary of Essential AI Terms in 2024 – Tech.co

With Sora, OpenAI highlights the mystery and clarity of its mission | The AI Beat – VentureBeat

Posted: at 7:59 pm

Last Thursday, OpenAI released a demo of its new text-to-video model, Sora, which can generate videos up to a minute long while maintaining visual quality and adherence to the user's prompt.

Perhaps you've seen one, two or 20 examples of the video clips OpenAI provided, from the litter of golden retriever puppies popping their heads out of the snow to the couple walking through the bustling Tokyo street. Maybe your reaction was wonder and awe, or anger and disgust, or worry and concern, depending on your view of generative AI overall.

Personally, my reaction was a mix of amazement, uncertainty and good old-fashioned curiosity. Ultimately I, and many others, want to know: what is the Sora release really about?

Here's my take: With Sora, OpenAI offers what I think is a perfect example of the company's pervasive air of mystery around its constant releases, particularly just three months after CEO Sam Altman's firing and quick comeback. That enigmatic aura feeds the hype around each of its announcements.


Of course, OpenAI is not open. It offers closed, proprietary models, which makes its offerings mysterious by design. But think about it: millions of us are now trying to parse every word around the Sora release, from Altman and many others. We wonder or opine on how the black-box model really works, what data it was trained on, why it was suddenly released now, what it will really be used for, and the consequences of its future development on the industry, the global workforce, society at large, and the environment. All for a demo that will not be released as a product anytime soon; it's AI hype on steroids.

At the same time, Sora also exemplifies the very un-mysterious, transparent clarity OpenAI has around its mission to develop artificial general intelligence (AGI) and ensure that it benefits all of humanity.

After all, OpenAI said it is sharing Sora's research progress early to start working with and getting feedback from people outside of OpenAI, and to give the public a sense of what AI capabilities are on the horizon. The title of the Sora technical report, "Video generation models as world simulators," shows that this is not a company looking simply to release a text-to-video model for creatives to work with. Instead, this is clearly AI researchers doing what AI researchers do: pushing against the edges of the frontier. In OpenAI's case, that push is towards AGI, even if there is no agreed-upon definition of what that means.

That strange duality, the mysterious alchemy of OpenAI's current efforts and the unwavering clarity of its long-term mission, often gets overlooked and under-analyzed, I believe, as more of the general public becomes aware of its technology and more businesses sign on to use its products.

The OpenAI researchers working on Sora are certainly concerned about the present impact and are being careful about deployment for creative use. For example, Aditya Ramesh, an OpenAI scientist who co-created DALL-E and is on the Sora team, told MIT Technology Review that OpenAI is worried about misuses of fake but photorealistic video. "We're being careful about deployment here and making sure we have all our bases covered before we put this in the hands of the general public," he said.

But Ramesh also considers Sora a stepping stone. "We're excited about making this step toward AI that can reason about the world like we do," he posted on X.

In January 2023, I spoke to Ramesh for a look back at the evolution of DALL-E on the second anniversary of the original DALL-E paper.

I dug up my transcript of that conversation, and it turns out that Ramesh was already talking about video. When I asked him what interested him most about working on DALL-E, he said that the aspects of intelligence that are bespoke to vision, and what can be done in vision, were what he found the most interesting.

"Especially with video," he added. "You can imagine how a model that would be capable of generating a video could plan across long time horizons, think about cause and effect, and then reason about things that have happened in the past."

Ramesh also talked, I felt, from the heart about the OpenAI duality. On the one hand, he felt good about exposing more people to what DALL-E could do: "I hope that over time, more and more people get to learn about and explore what can be done with AI, and that sort of opens up this platform where people who want to do things with our technology can easily access it through our website and find ways to use it to build things that they'd like to see."

On the other hand, he said that his main interest in DALL-E as a researcher was to push this as far as possible. That is, the team started the DALL-E research project because we had success with GPT-2 and we knew that there was potential in applying the same technology to other modalities, and we felt like text-to-image generation was interesting because we wanted to see, if we trained a model to generate images from text well enough, whether it could do the same kinds of things that humans can in regard to extrapolation and so on.

In the short term, we can look at Sora as a potential creative tool with lots of problems to be solved. But don't be fooled: to OpenAI, Sora is not really about video at all.

Whether you think Sora is a data-driven physics engine that is a simulation of many worlds, real or fantastical, like Nvidia's Jim Fan, or you think modeling the world for action by generating pixels is as wasteful and doomed to failure as the largely abandoned idea of analysis by synthesis, like Yann LeCun, I think it's clear that looking at Sora simply as a jaw-dropping, powerful video application that plays into all the anger and fear and excitement around today's generative AI misses the duality of OpenAI.

OpenAI is certainly running the current generative AI playbook, with its consumer products, enterprise sales, and developer community-building. But it's also using all of that as a stepping stone towards developing power over whatever it believes AGI is, could be, or should be defined as.

So for everyone out there who wonders what Sora is good for, make sure you keep that duality in mind: OpenAI may currently be playing the video game, but it has its eye on a much bigger prize.



Why, Despite All the Hype We Hear, AI Is Not One of Us – Walter Bradley Center for Natural and Artificial Intelligence

Posted: at 7:59 pm

Artificial Intelligence (AI) systems are inferencing systems. They make decisions based on information. That's not a particularly controversial point: inference is central to thinking. If AI performs the right types of inference, at the right time, on the right problems, we should view such systems as thinking machines.

The problem is, AI currently performs the wrong type of inference, on problems selected precisely because this type of inference works well on them. I've called this Big Data AI, because the problems AI currently solves can only be cracked when very large repositories of data are available to solve them. ChatGPT is no exception; in fact, it drives the point home. It's a continuation of previous innovations of Big Data AI taken to an extreme. The AI scientist's dream of general intelligence, often referred to as Artificial General Intelligence (AGI), remains as elusive as ever.

Computer scientists who were not specifically trained in mathematical or philosophical logic probably don't think in terms of inference. Still, it pervades everything we do. In a nutshell, inference in the scientific sense is: given what I know already, and what I see or observe around me, what is proper to conclude? The conclusion is known as the inference, and for any cognitive system it's ubiquitous.

For humans, inferring something is like a condition of being awake; we do it constantly: in conversation (what does she mean?), when walking down a street (do I turn here?), and indeed in having any thought where there's an implied question at all. If you try to pay attention to your thoughts for one day, or even one hour, you'll quickly discover you can't count the number of inferences your brain is making. Inference is cognitive intelligence. Cognitive intelligence is inference.

What difference have 21st-century innovations made?

In the last decade, the computer science community innovated rapidly and dramatically. These innovations are genuine and important, make no mistake. In 2012, a team at the University of Toronto led by neural network guru Geoffrey Hinton roundly defeated all competitors at a popular photo recognition competition called ImageNet. The task was to recognize images from a dataset curated from fifteen million high-resolution images on Flickr, representing twenty-two thousand classes, or varieties of photos (caterpillars, trees, cars, terrier dogs, etc.).

The system, dubbed AlexNet after Hinton's graduate student Alex Krizhevsky, who largely developed it, used a souped-up version of an old technology: the artificial neural network (ANN), or just neural network. Neural networks were developed in rudimentary form in the 1950s, when AI had just begun. They had been gradually refined and improved over the decades, though they were generally thought to be of little value for much of AI's history.

Moore's Law gave them a boost. As many know, Moore's Law isn't really a law but an observation, made by Intel co-founder and CEO Gordon Moore in 1965: the number of transistors on a microchip doubles roughly every two years (the other half of the observation is that the cost of computers also halves over that time). Neural networks are computationally expensive on very large datasets, and the catch-22 for many years was that very large datasets are the only datasets they work well on.
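Moore's observation is easy to state as arithmetic: doubling every two years multiplies transistor counts by 2**(years/2). The starting count below, roughly that of an early-1970s microprocessor, is an illustrative assumption.

```python
start, years = 2_300, 50          # illustrative: an early-1970s chip, 50 years on
factor = 2 ** (years / 2)         # one doubling every two years = 25 doublings
print(f"{start * factor:,.0f} transistors")   # about 77 billion
```

That compounding, not any single breakthrough, is what eventually made neural networks' enormous appetite for computation affordable.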

But by the 2010s, the roughly accurate Moore's Law had made deep neural networks, known at that time as convolutional neural networks (CNNs), computationally practical. CPUs were swapped for the more mathematically powerful GPUs, also used in computer game engines, and suddenly CNNs were not just an option but the go-to technology for AI. Though all the competitors at ImageNet contests used some version of machine learning, a subfield of AI that is specifically inductive because it learns from prior examples or observations, the CNNs were found wholly superior once the hardware was in place to support the gargantuan computational requirements.

The second major innovation occurred just two years later, when a well-known limitation of neural networks in general was solved, or at least partially solved: overfitting. Overfitting happens when the neural network fits its training data too closely and doesn't adequately generalize to unseen, or test, data. Overfitting is bad; it means the system isn't really learning the underlying rule or pattern in the data. It's like someone memorizing the answers to a test without really understanding the questions. The overfitting problem bedeviled early attempts at using neural networks for problems like image recognition (CNNs are also used for face recognition, machine translation between languages, autonomous navigation, and a host of other useful tasks).

In 2014, Geoff Hinton and his team published a technique known as dropout, which helped solve the overfitting problem. While the public consumed the latest smartphones and argued, flirted, and chatted away on myriad social networks and technologies, real innovations on an old AI technology were taking place, all made possible by the powerful combination of talented scientists and engineers and increasingly powerful computing resources.
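The dropout idea can be sketched in a few lines of Python: during training, each unit's activation is zeroed with probability p, so the network cannot lean on any single unit memorizing the data; survivors are rescaled by 1/(1-p) so the expected activation is unchanged (the common "inverted dropout" formulation). This is an illustration of the technique, not the original implementation.

```python
import random

def dropout(activations, p, training=True):
    if not training:
        return activations        # at test time, the full network is used
    keep = 1.0 - p
    # Zero each activation with probability p; rescale the survivors.
    return [a / keep if random.random() < keep else 0.0 for a in activations]

random.seed(0)
acts = [0.5, 1.2, -0.3, 0.8]
print(dropout(acts, p=0.5))                   # roughly half the units zeroed
print(dropout(acts, p=0.5, training=False))   # unchanged at inference time
```

Because a different random subset of units is silenced on every training step, the network is forced to learn redundant, general features rather than memorized answers.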

There was a catch, however.

Black Boxes and Blind Inferences

Actually, there were two catches. One: it takes quite an imaginative computer scientist to believe that the neural network knows what it's classifying or identifying. It's a bunch of math in the background, and relatively simple math at that: mostly matrix multiplication, a technique learned by any undergraduate math student. There are other mathematical operations in neural networks, but it's still not string theory. It's the computation of those relatively simple equations that counts, along with the overall design of the system. Thus, neural networks were performing cognitive feats while not really knowing they were performing anything at all.

This brings us to the second problem, which ended up spawning an entire field itself, known as Explainable AI.

Next: Because AIs don't know why they make decisions, they can't explain them to programmers.


What is Artificial General Intelligence (AGI) and Why It’s Not Here Yet: A Reality Check for AI Enthusiasts – Unite.AI

Posted: at 7:59 pm

Artificial Intelligence (AI) is everywhere. From smart assistants to self-driving cars, AI systems are transforming our lives and businesses. But what if there was an AI that could do more than perform specific tasks? What if there was a type of AI that could learn and think like a human or even surpass human intelligence?

This is the vision of Artificial General Intelligence (AGI), a hypothetical form of AI that has the potential to accomplish any intellectual task that humans can. AGI is often contrasted with Artificial Narrow Intelligence (ANI), the current state of AI that can only excel at one or a few domains, such as playing chess or recognizing faces. AGI, on the other hand, would have the ability to understand and reason across multiple domains, such as language, logic, creativity, common sense, and emotion.

AGI is not a new concept. It has been the guiding vision of AI research since the earliest days and remains its most divisive idea. Some AI enthusiasts believe that AGI is inevitable and imminent and will lead to a new era of technological and social progress. Others are more skeptical and cautious and warn of the ethical and existential risks of creating and controlling such a powerful and unpredictable entity.

But how close are we to achieving AGI, and does it even make sense to try? This is, in fact, an important question whose answer may provide a reality check for AI enthusiasts who are eager to witness the era of superhuman intelligence.

AGI stands apart from current AI in its capacity to perform any intellectual task that humans can, if not surpass them. This distinction rests on several key features.

While these features are vital for achieving human-like or superhuman intelligence, they remain hard to capture for current AI systems.

Current AI predominantly relies on machine learning, a branch of computer science that enables machines to learn from data and experiences. Machine learning operates through supervised, unsupervised, and reinforcement learning.

Supervised learning involves machines learning from labeled data to predict or classify new data. Unsupervised learning involves finding patterns in unlabeled data, while reinforcement learning centers on learning from actions and feedback, optimizing for rewards or minimizing costs.
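The third approach can be sketched concretely: a tabular Q-learning agent on a hypothetical 1-D track learns, from reward alone, to walk toward a goal state. The environment, rewards, and hyperparameters below are illustrative assumptions, not any benchmark task.

```python
import random

random.seed(1)
n_states, goal = 5, 4             # states 0..4 on a 1-D track; goal at 4
q = {(s, a): 0.0 for s in range(n_states) for a in (-1, 1)}  # action values

for _ in range(200):              # episodes of trial and error
    s = 0
    while s != goal:
        if random.random() < 0.2:               # explore: random action
            a = random.choice((-1, 1))
        else:                                   # exploit: best known action
            a = max((-1, 1), key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s2 == goal else 0.0          # feedback: reward only at goal
        # Q-update: nudge estimate toward reward + discounted future value.
        q[(s, a)] += 0.5 * (r + 0.9 * max(q[(s2, -1)], q[(s2, 1)]) - q[(s, a)])
        s = s2

policy = [max((-1, 1), key=lambda act: q[(s, act)]) for s in range(goal)]
print(policy)   # typically learns +1 ("move right") from every state
```

No one labels the correct action at any step; the agent infers it by propagating the goal's reward backward through its value estimates.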

Despite achieving remarkable results in areas like computer vision and natural language processing, current AI systems are constrained by the quality and quantity of their training data, by predefined algorithms, and by specific optimization objectives. They often struggle with adaptability, especially in novel situations, and with transparency in explaining their reasoning.

In contrast, AGI is envisioned to be free from these limitations and would not rely on predefined data, algorithms, or objectives but instead on its own learning and thinking capabilities. Moreover, AGI could acquire and integrate knowledge from diverse sources and domains, applying it seamlessly to new and varied tasks. Furthermore, AGI would excel in reasoning, communication, understanding, and manipulating the world and itself.

Realizing AGI poses considerable challenges encompassing technical, conceptual, and ethical dimensions.

For example, defining and measuring intelligence, including components like memory, attention, creativity, and emotion, is a fundamental hurdle. Additionally, modeling and simulating the human brain's functions, such as perception, cognition, and emotion, present complex challenges.

Moreover, critical challenges include designing and implementing scalable, generalizable learning and reasoning algorithms and architectures. Ensuring the safety, reliability, and accountability of AGI systems in their interactions with humans and other agents and aligning the values and goals of AGI systems with those of society is also of utmost importance.

Various research directions and paradigms have been proposed and explored in the pursuit of AGI, each with strengths and limitations. Symbolic AI, a classical approach using logic and symbols for knowledge representation and manipulation, excels at abstract and structured problems like mathematics and chess, but struggles to scale and to integrate sensory and motor data.

Likewise, connectionist AI, a modern approach employing neural networks and deep learning to process large amounts of data, excels in complex and noisy domains like vision and language, but struggles with interpretability and generalization.

Hybrid AI combines symbolic and connectionist AI to leverage their strengths and overcome their weaknesses, aiming for more robust and versatile systems. Similarly, evolutionary AI uses evolutionary algorithms and genetic programming to evolve AI systems through a process akin to natural selection, seeking novel and optimal solutions unconstrained by human design.

Lastly, Neuromorphic AI utilizes neuromorphic hardware and software to emulate biological neural systems, aiming for more efficient and realistic brain models and enabling natural interactions with humans and agents.

These are not the only approaches to AGI, but they are some of the most prominent and promising ones. Each has advantages and disadvantages, and none has yet achieved the generality and intelligence that AGI requires.

While AGI has not been achieved yet, some notable examples of AI systems exhibit certain aspects or features reminiscent of AGI, contributing to the vision of eventual AGI attainment. These examples represent strides toward AGI by showcasing specific capabilities:

AlphaZero, developed by DeepMind, is a reinforcement learning system that autonomously learns to play chess, shogi and Go without human knowledge or guidance. Demonstrating superhuman proficiency, AlphaZero also introduces innovative strategies that challenge conventional wisdom.

Similarly, OpenAI's GPT-3 generates coherent and diverse texts across various topics and tasks. Capable of answering questions, composing essays, and mimicking different writing styles, GPT-3 displays versatility, although within certain limits.

Likewise, NEAT, an evolutionary algorithm created by Kenneth Stanley and Risto Miikkulainen, evolves neural networks for tasks such as robot control, game playing, and image generation. NEAT's ability to evolve network structure and function produces novel and complex solutions not predefined by human programmers.

While these examples illustrate progress toward AGI, they also underscore existing limitations and gaps that necessitate further exploration and development in pursuing true AGI.

AGI poses scientific, technological, social, and ethical challenges with profound implications. Economically, it may create opportunities and disrupt existing markets, potentially increasing inequality. While improving education and health, AGI may introduce new challenges and risks.

Ethically, it could promote new norms, cooperation, and empathy and introduce conflicts, competition, and cruelty. AGI may question existing meanings and purposes, expand knowledge, and redefine human nature and destiny. Therefore, stakeholders must consider and address these implications and risks, including researchers, developers, policymakers, educators, and citizens.

AGI stands at the forefront of AI research, promising a level of intellect surpassing human capabilities. While the vision captivates enthusiasts, challenges persist in realizing this goal. Current AI, which excels only in specific domains, has yet to meet AGI's expansive potential.

Numerous approaches, from symbolic and connectionist AI to neuromorphic models, strive for AGI realization. Notable examples like AlphaZero and GPT-3 showcase advancements, yet true AGI remains elusive. With economic, ethical, and existential implications, the journey to AGI demands collective attention and responsible exploration.


Future of Artificial Intelligence: Predictions and Impact on Society – Medriva

Posted: at 7:59 pm

As we stand at the cusp of a new era, Artificial Intelligence (AI) is not just a buzzword in the tech industry but a transformative force anticipated to reshape various aspects of society by 2034. From attaining Artificial General Intelligence (AGI) to the fusion of quantum computing and AI, and the application of AI to neural interface technology, the future of AI promises an exciting blend of advancements and challenges.

By 2034, AI is expected to achieve AGI, meaning it will be capable of learning to perform any job simply by being instructed. This evolution represents a significant milestone, as it signifies a shift from AI's current specialized applications to a more generalized approach. Furthermore, the fusion of quantum computing and AI, referred to as Quantum AI, is anticipated to usher in a new era of supercomputing and scientific discovery. This fusion would yield unprecedented computational power, enabling us to solve complex problems that are currently beyond our reach.

Another promising area of AI development lies in its application to neural interface technology. AI's potential to enhance cognitive capabilities could revolutionize sectors like healthcare, education, and even our daily lives. For instance, AI algorithms combined with computer vision have greatly improved medical imaging and diagnostics. The global computer vision in healthcare market is projected to surge to US $56.1 billion by 2034, driven by precision medicine and the demand for computer vision systems.

AI's integration into robotics is expected to transform our daily lives. From performing household chores to providing companionship and manual work, robots and co-bots are poised to become an integral part of our society. In public governance and justice systems, AI raises questions about autonomy, ethics, and surveillance. As AI continues to permeate these sectors, addressing these ethical concerns will be critical.

The automotive industry is another sector where AI is set to make a significant impact. Artificial Intelligence, connectivity, and software-defined vehicles are expected to redefine the future of cars. The projected growth of connected and software-defined vehicles is estimated at a compound annual growth rate of 21.1% between 2024 and 2034, reaching a value of US $700 billion. This growth opens up new revenue streams, including AI assistants offering natural interactions with the vehicle's systems and in-car payment systems using biometric security.

AI's impact extends beyond technology and industry, potentially reshaping societal norms and structures. A significant area of discussion is the potential effect of AI on the concept of meritocracy. As AI continues to evolve, it might redefine merit and meritocracy in ways we can only begin to imagine. However, it also poses challenges in terms of potential disparities, biases, and issues of accountability and data hegemony.

As we look forward to the next decade, the future of AI presents both opportunities and challenges. It is an intricate dance of evolution and ethical considerations, of technological advancements and societal impact. As we embrace this future, it is crucial to navigate these waters with foresight and responsibility, ensuring that the benefits of AI are reaped while minimizing its potential adverse effects.
