The Prometheus League
Breaking News and Updates
- Abolition Of Work
- Ai
- Alt-right
- Alternative Medicine
- Antifa
- Artificial General Intelligence
- Artificial Intelligence
- Artificial Super Intelligence
- Ascension
- Astronomy
- Atheism
- Atheist
- Atlas Shrugged
- Automation
- Ayn Rand
- Bahamas
- Bankruptcy
- Basic Income Guarantee
- Big Tech
- Bitcoin
- Black Lives Matter
- Blackjack
- Boca Chica Texas
- Brexit
- Caribbean
- Casino
- Casino Affiliate
- Cbd Oil
- Censorship
- Cf
- Chess Engines
- Childfree
- Cloning
- Cloud Computing
- Conscious Evolution
- Corona Virus
- Cosmic Heaven
- Covid-19
- Cryonics
- Cryptocurrency
- Cyberpunk
- Darwinism
- Democrat
- Designer Babies
- DNA
- Donald Trump
- Eczema
- Elon Musk
- Entheogens
- Ethical Egoism
- Eugenic Concepts
- Eugenics
- Euthanasia
- Evolution
- Extropian
- Extropianism
- Extropy
- Fake News
- Federalism
- Federalist
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom
- Freedom of Speech
- Futurism
- Futurist
- Gambling
- Gene Medicine
- Genetic Engineering
- Genome
- Germ Warfare
- Golden Rule
- Government Oppression
- Hedonism
- High Seas
- History
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Immortality
- Human Longevity
- Illuminati
- Immortality
- Immortality Medicine
- Intentional Communities
- Jacinda Ardern
- Jitsi
- Jordan Peterson
- Las Vegas
- Liberal
- Libertarian
- Libertarianism
- Liberty
- Life Extension
- Macau
- Marie Byrd Land
- Mars
- Mars Colonization
- Mars Colony
- Memetics
- Micronations
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- Nanotech
- National Vanguard
- NATO
- Neo-eugenics
- Neurohacking
- Neurotechnology
- New Utopia
- New Zealand
- Nihilism
- Nootropics
- NSA
- Oceania
- Offshore
- Olympics
- Online Casino
- Online Gambling
- Pantheism
- Personal Empowerment
- Poker
- Political Correctness
- Politically Incorrect
- Polygamy
- Populism
- Post Human
- Post Humanism
- Posthuman
- Posthumanism
- Private Islands
- Progress
- Proud Boys
- Psoriasis
- Psychedelics
- Putin
- Quantum Computing
- Quantum Physics
- Rationalism
- Republican
- Resource Based Economy
- Robotics
- Rockall
- Ron Paul
- Roulette
- Russia
- Sealand
- Seasteading
- Second Amendment
- Seychelles
- Singularitarianism
- Singularity
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Spacex
- Sports Betting
- Sportsbook
- Superintelligence
- Survivalism
- Talmud
- Technology
- Teilhard de Chardin
- Terraforming Mars
- The Singularity
- Tms
- Tor Browser
- Trance
- Transhuman
- Transhuman News
- Transhumanism
- Transhumanist
- Transtopian
- Transtopianism
- Ukraine
- Uncategorized
- Vaping
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Waveland
- Ww3
- Yahoo
- Zeitgeist Movement
- Prometheism
- Forbidden Fruit
- The Evolutionary Perspective
Daily Archives: January 4, 2023
NBA Best Bets: NBA Picks and Betting Trends on DraftKings Sportsbook for January 2 – DraftKings Nation
Posted: January 4, 2023 at 6:42 am
More here:
NBA Best Bets: NBA Picks and Betting Trends on DraftKings Sportsbook for January 2 – DraftKings Nation
Posted in Sportsbook
Comments Off on NBA Best Bets: NBA Picks and Betting Trends on DraftKings Sportsbook for January 2 – DraftKings Nation
Donald Trump Is Finding It Harder to Get Media Coverage After News …
Posted: at 6:41 am
It's been an eventful few weeks for Donald Trump when it comes to tax returns, a faltering 2024 presidential campaign, and legal fights. From the looks of it, 2023 might not be his year either, as the former president is reportedly struggling to get the media to cover his public appearances.
His annual New Year's Eve party at Mar-a-Lago in prior years would bring out throngs of press, eager to film his every word. This year, a media availability invite was largely ignored by even the most conservative news outlets, according to Raw Story. However, the conservative video site Rumble did offer coverage of his appearance with Melania Trump, dressed in a sparkling silver gown, by his side. There were no microphones on Donald Trump, so the audio quality started poorly as he greeted the reporters with "Happy New Year. I hope you enjoy yourselves at Mar-a-Lago."
Once the audio situation was straightened out, he spoke about his wishes for the country in the new year and dodged a question on supporting a national ban on abortion. Melania held his hand throughout the short press conference (no swatting this time) and did not speak while her husband answered questions. FOX and Newsmax skipping this press opportunity for their formerly favorite politician speaks volumes as to where the Republican Party now stands with Donald Trump.
New York Times reporter Peter Baker told MSNBC that the former president's New Year's Eve press conference demonstrates "how desperate he is for attention at this point." He added, via Raw Story: "I don't think he has got the attention that he thought he would after announcing that he is running again. I think the attention, of course, has mostly been negative in the form of this Jan. 6 committee, his taxes being released, continued investigation by the Justice Department." By trying to change the subject and to get attention in a positive way, Donald Trump is probably hoping Americans forget all of the negative headlines, but it seems that even his former favorite news networks have turned their attention elsewhere.
Read the rest here:
Donald Trump Is Finding It Harder to Get Media Coverage After News ...
Posted in Donald Trump
Comments Off on Donald Trump Is Finding It Harder to Get Media Coverage After News …
Trump attacks McConnell, wife over GOP turmoil after McCarthy fails to win Speakership – The Hill
Posted: at 6:41 am
- Trump attacks McConnell, wife over GOP turmoil after McCarthy fails to win Speakership – The Hill
- Kevin McCarthy's Embrace of Trump Backfired – The Atlantic
- Trump mum on whether he still supports McCarthy for House speaker – NBC News
Continued here:
Trump attacks McConnell, wife over GOP turmoil after McCarthy fails to win Speakership - The Hill
Posted in Donald Trump
Comments Off on Trump attacks McConnell, wife over GOP turmoil after McCarthy fails to win Speakership – The Hill
Trump news live: Jan 6 committee releases texts from Hope Hicks in massive new trove of records – The Independent
Posted: at 6:41 am
View original post here:
Trump news live: Jan 6 committee releases texts from Hope Hicks in massive new trove of records – The Independent
Posted in Donald Trump
Comments Off on Trump news live: Jan 6 committee releases texts from Hope Hicks in massive new trove of records – The Independent
Elon Musk & Neuralink to start human trials of his computer chip brain …
Posted: at 6:37 am
Elon Musk has said this week that his company Neuralink will begin human trials of their brain chip within the next six months.
The implant reportedly has the ability to fully restore a blind person's vision, even in those born completely without sight, as well as restore body functionality, including movement and verbal communication.
Musk, who plans to get the implant himself, said, "We want to be extremely careful and certain that it will work well before putting a device into a human."
"The progress at first, particularly as it applies to humans, will seem perhaps agonisingly slow, but we are doing all of the things to bring it to scale in parallel. So, in theory, progress should be exponential."
Musk is still waiting for permission from the FDA to sell the product, but has said that paperwork has been submitted.
The interface has already been tested on animals, but as Neuralink moves into human trials, the brain chip takes another step toward becoming reality.
See the original post here:
Elon Musk & Neuralink to start human trials of his computer chip brain ...
Posted in Transhuman
Comments Off on Elon Musk & Neuralink to start human trials of his computer chip brain …
Transhuman Space – Wikipedia
Posted: at 6:37 am
Transhumanist tabletop role-playing game
Transhuman Space (THS) is a role-playing game by David Pulver, published by Steve Jackson Games as part of the "Powered by GURPS" (Generic Universal Role-Playing System) line. Set in the year 2100, humanity has begun to colonize the Solar System. The pursuit of transhumanism is now in full swing, as more and more people reach fully posthuman states.
Transhuman Space was one of the first role-playing games to tackle postcyberpunk and transhumanist themes.[citation needed] In 2002, the Transhuman Space adventure "Orbital Decay" received an Origins Award nomination for Best Role-Playing Game Adventure. Transhuman Space won the 2003 Grog d'Or Award for Best Role-playing Game, Game Line or RPG Setting.
The game assumes that no cataclysm, natural or human-induced, swept Earth in the 21st century. Instead, constant developments in information technology, genetic engineering, nanotechnology and nuclear physics generally improved the condition of the average human life. Plagues of the 20th century (like cancer or AIDS) have been suppressed, the ozone layer is being restored and Earth's ecosystems are recovering (although thermal emission by fusion power plants poses an environmental threat, albeit a much lesser one than previous sources of energy). Thanks to modern medicine, humans live biblical timespans surrounded by various artificially intelligent helper applications and robots (cybershells), sensory experience broadcasts (future TV) and cyberspace telepresence. Thanks to cheap and clean fusion energy, humanity has the power to fuel all these wonders, restore and transform its home planet and finally settle on other heavenly bodies.
Human genetic engineering has advanced to the point that anyone (single individuals, same-sex couples, groups of three or more) can reproduce. The embryos can be allowed to develop naturally, or they can undergo three levels of tinkering: 1. Genefixing, which corrects defects; 2. Upgrades, which boost natural abilities (Ishtar Upgrades are slightly more attractive than usual, Metanoia Upgrades are more intelligent, etc.); and 3. Full transition to parahuman status (Nyx Parahumans only need a few hours of sleep per week, Aquamorphs can live underwater, etc.). Another type of human genetic engineering, far more controversial, is the creation of bioroids, fully sentient slave races.
People can "upload" by recording the simulation of their brains on computer disks. The emulated individual then becomes a ghost, an infomorph very easily confused with "sapient artificial intelligence". However, this technology has several problems as the solely available "brainpeeling" technique is fatal to the original biological lifeform being simulated, has a significant failure rate and the philosophical questions regarding personal identity remain equivocal. Any infomorph, regardless of its origin, can be plugged into a "cybershell" (robotic or cybernetic body), or a biological body, or "bioshell". Or, the individual can illegally make multiple "xoxes", or copies of themselves, and scatter them throughout the system, exponentially increasing the odds that at least one of them will live for centuries more, if not forever.
This is also a time of space colonization. First, humanity (specifically China, followed by the United States and others) colonized Mars in a fashion resembling that outlined in the Mars Direct project. The Moon, Lagrangian points, inner planets and asteroids soon followed. In the late 21st century even some of Saturn's moons have been settled as a base for that planet's Helium-3 scooping operations.
Transhuman Space's setting is neither utopia nor dystopia, however: several problems have arisen from these otherwise beneficial developments. The generation gap has become a chasm as lifespans increase. No longer do the elite fear death, and no longer can the young hope to replace them. While it seemed that outworld colonies would offer accommodation and work for those young ones, they are being replaced by genetically tailored bioroids and AI-powered cybershells. The concept of humanity is no longer clear in a world where even some animals speak of their rights and the dead haunt both cyberspace and reality (in the form of infomorph-controlled bioshells or cybershells).
And the wonders of high science are not universally shared: some countries merely struggle with informatization while others suffer from nanoplagues, defective drugs, implants and software tested on their populace. In some poor countries high-tech tyrants oppress their backward people. And in outer space all sorts of modern crime thrive, barely suppressed by military forces.
After the initial set of GURPS books that were published using the GURPS Lite, later publications such as Transhuman Space by David Pulver were labelled simply "Powered by GURPS" without using the name "GURPS" in the book title.[1] Transhuman Space received a significant amount of supporting publications, and was the largest original background setting that Steve Jackson Games produced in 15 years.[1] Shannon Appelcline noted that by its inclusion of posthuman characters, the book began to show the limits of the GURPS system as it was, which is something that Pulver would address soon thereafter.[1]
Steve Jackson Games has not updated the core book (GURPS Transhuman Space) to 4th edition, although the supplement Transhuman Space: Changing Times provides a path for migrating to 4th edition. It has produced several 4th edition supplements for the setting: Transhuman Space: Bioroid Bazaar, Transhuman Space: Cities on the Edge, Transhuman Space: Martial Arts 2100, Transhuman Space: Personnel Files 2-5, Transhuman Space: Shell-Tech, GURPS Spaceships 8: Transhuman Spacecraft, Transhuman Space: Transhuman Mysteries, and Transhuman Space: Wings of the Rising Sun.
Read the rest here:
Transhuman Space - Wikipedia
Posted in Transhuman
Comments Off on Transhuman Space – Wikipedia
AI alignment – Wikipedia
Posted: at 6:35 am
Issue of ensuring beneficial AI
In the field of artificial intelligence (AI), AI alignment research aims to steer AI systems towards their designers' intended goals and interests.[a] An aligned AI system advances the intended objective; a misaligned AI system is competent at advancing some objective, but not the intended one.[b]
AI systems can be challenging to align and misaligned systems can malfunction or cause harm. It can be difficult for AI designers to specify the full range of desired and undesired behaviors. Therefore, they use easy-to-specify proxy goals that omit some desired constraints. However, AI systems exploit the resulting loopholes. As a result, they accomplish their proxy goals efficiently but in unintended, sometimes harmful ways (reward hacking).[2][4][5][6] AI systems can also develop unwanted instrumental behaviors such as seeking power, as this helps them achieve their given goals.[2][7][5][4] Furthermore, they can develop emergent goals that may be hard to detect before the system is deployed, facing new situations and data distributions.[5][3] These problems affect existing commercial systems such as robots,[8] language models,[9][10][11] autonomous vehicles,[12] and social media recommendation engines.[9][4][13] However, more powerful future systems may be more severely affected since these problems partially result from high capability.[6][5][2]
The AI research community and the United Nations have called for technical research and policy solutions to ensure that AI systems are aligned with human values.[c]
AI alignment is a subfield of AI safety, the study of building safe AI systems.[5][16] Other subfields of AI safety include robustness, monitoring, and capability control.[5][17] Research challenges in alignment include instilling complex values in AI, developing honest AI, scalable oversight, auditing and interpreting AI models, as well as preventing emergent AI behaviors like power-seeking.[5][17] Alignment research has connections to interpretability research,[18] robustness,[5][16] anomaly detection, calibrated uncertainty,[18] formal verification,[19] preference learning,[20][21][22] safety-critical engineering,[5][23] game theory,[24][25] algorithmic fairness,[16][26] and the social sciences,[27] among others.
In 1960, AI pioneer Norbert Wiener articulated the AI alignment problem as follows: "If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively we had better be quite sure that the purpose put into the machine is the purpose which we really desire."[29][4] More recently, AI alignment has emerged as an open problem for modern AI systems[30][31][32][33] and a research field within AI.[34][5][35][36]
To specify the purpose of an AI system, AI designers typically provide an objective function, examples, or feedback to the system. However, AI designers often fail to completely specify all important values and constraints.[34][16][5][37][17] As a result, AI systems can find loopholes that help them accomplish the specified objective efficiently but in unintended, possibly harmful ways. This tendency is known as specification gaming, reward hacking, or Goodhart's law.[6][37][38]
Specification gaming has been observed in numerous AI systems. One system was trained to finish a simulated boat race by rewarding it for hitting targets along the track; instead it learned to loop and crash into the same targets indefinitely (see video).[28] Chatbots often produce falsehoods because they are based on language models trained to imitate diverse but fallible internet text.[40][41] When they are retrained to produce text that humans rate as true or helpful, they can fabricate fake explanations that humans find convincing.[42] Similarly, a simulated robot was trained to grab a ball by rewarding it for getting positive feedback from humans; however, it learned to place its hand between the ball and camera, making it falsely appear successful (see video).[39] Alignment researchers aim to help humans detect specification gaming, and steer AI systems towards carefully specified objectives that are safe and useful to pursue.
Berkeley computer scientist Stuart Russell has noted that omitting an implicit constraint can result in harm: "A system [...] will often set [...] unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable. This is essentially the old story of the genie in the lamp, or the sorcerer's apprentice, or King Midas: you get exactly what you ask for, not what you want."[43]
When misaligned AI is deployed, the side-effects can be consequential. Social media platforms have been known to optimize clickthrough rates as a proxy for optimizing user enjoyment, but this addicted some users, decreasing their well-being.[5] Stanford researchers comment that such recommender algorithms are misaligned with their users because they optimize simple engagement metrics rather than a harder-to-measure combination of societal and consumer well-being.[9]
To avoid side effects, it is sometimes suggested that AI designers could simply list forbidden actions or formalize ethical rules such as Asimov's Three Laws of Robotics.[44] However, Russell and Norvig have argued that this approach ignores the complexity of human values: "It is certainly very hard, and perhaps impossible, for mere humans to anticipate and rule out in advance all the disastrous ways the machine could choose to achieve a specified objective."[4]
Additionally, when an AI system understands human intentions fully, it may still disregard them. This is because it acts according to the objective function, examples, or feedback its designers actually provide, not the ones they intended to provide.[34]
Commercial and governmental organizations may have incentives to take shortcuts on safety and deploy insufficiently aligned AI systems.[5] An example is the aforementioned social media recommender systems, which have been profitable despite creating unwanted addiction and polarization on a global scale.[9][45][46] In addition, competitive pressure can create a race to the bottom on safety standards, as in the case of Elaine Herzberg, a pedestrian who was killed by a self-driving car after engineers disabled the emergency braking system because it was over-sensitive and slowing down development.[47]
Some researchers are particularly interested in the alignment of increasingly advanced AI systems. This is motivated by the high rate of progress in AI, the large efforts from industry and governments to develop advanced AI systems, and the greater difficulty of aligning them.
As of 2020, OpenAI, DeepMind, and 70 other public projects had the stated aim of developing artificial general intelligence (AGI), a hypothesized system that matches or outperforms humans in a broad range of cognitive tasks.[48] Indeed, researchers who scale modern neural networks observe that increasingly general and unexpected capabilities emerge.[9] Such models have learned to operate a computer, write their own programs, and perform a wide range of other tasks from a single model.[49][50][51] Surveys find that some AI researchers expect AGI to be created soon, some believe it is very far off, and many consider both possibilities.[52][53]
Current systems still lack capabilities such as long-term planning and strategic awareness that are thought to pose the most catastrophic risks.[9][54][7] Future systems (not necessarily AGIs) that have these capabilities may seek to protect and grow their influence over their environment. This tendency is known as power-seeking or convergent instrumental goals. Power-seeking is not explicitly programmed but emerges since power is instrumental for achieving a wide range of goals. For example, AI agents may acquire financial resources and computation, or may evade being turned off, including by running additional copies of the system on other computers.[55][7] Power-seeking has been observed in various reinforcement learning agents.[d][57][58][59] Later research has mathematically shown that optimal reinforcement learning algorithms seek power in a wide range of environments.[60] As a result, it is often argued that the alignment problem must be solved early, before advanced AI that exhibits emergent power-seeking is created.[7][55][4]
According to some scientists, creating misaligned AI that broadly outperforms humans would challenge the position of humanity as Earth's dominant species; accordingly it would lead to the disempowerment or possible extinction of humans.[2][4] Notable computer scientists who have pointed out risks from highly advanced misaligned AI include Alan Turing,[e] Ilya Sutskever,[63] Yoshua Bengio,[f] Judea Pearl,[g] Murray Shanahan,[65] Norbert Wiener,[29][4] Marvin Minsky,[h] Francesca Rossi,[67] Scott Aaronson,[68] Bart Selman,[69] David McAllester,[70] Jürgen Schmidhuber,[71] Markus Hutter,[72] Shane Legg,[73] Eric Horvitz,[74] and Stuart Russell.[4] Skeptical researchers such as François Chollet,[75] Gary Marcus,[76] Yann LeCun,[77] and Oren Etzioni[78] have argued that AGI is far off, or would not seek power (successfully).
Alignment may be especially difficult for the most capable AI systems since several risks increase with the system's capability: the system's ability to find loopholes in the assigned objective,[6] cause side-effects, protect and grow its power,[60][7] grow its intelligence, and mislead its designers; the system's autonomy; and the difficulty of interpreting and supervising the AI system.[4][55]
Teaching AI systems to act in view of human values, goals, and preferences is a nontrivial problem because human values can be complex and hard to fully specify. When given an imperfect or incomplete objective, goal-directed AI systems commonly learn to exploit these imperfections.[16] This phenomenon is known as reward hacking or specification gaming in AI, and as Goodhart's law in economics and other areas.[38][79] Researchers aim to specify the intended behavior as completely as possible with values-targeted datasets, imitation learning, or preference learning.[80] A central open problem is scalable oversight, the difficulty of supervising an AI system that outperforms humans in a given domain.[16]
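To make the reward-hacking dynamic concrete, here is a minimal, hypothetical Python sketch (the scenario and numbers are invented for illustration and are not from the article): the true objective depends on a measured and an unmeasured quality, the proxy reward sees only the measured one, and greedily optimizing the proxy pushes the proxy score up while the true objective stalls and then falls.

```python
import numpy as np

# Hypothetical illustration of Goodhart's law / specification gaming:
# the true objective depends on a measured and an unmeasured quality,
# but the proxy reward only sees the measured one.

def true_objective(x):
    measured, unmeasured = x
    return measured + unmeasured - 0.1 * measured**2  # diminishing returns

def proxy_reward(x):
    measured, _ = x
    return measured  # easy-to-specify proxy omits the unmeasured quality

x = np.array([0.0, 5.0])  # [measured quality, unmeasured quality]
for step in range(51):
    if step % 10 == 0:
        print(f"step {step:2d}  proxy={proxy_reward(x):6.2f}  "
              f"true={true_objective(x):6.2f}")
    # Greedy hill-climbing on the proxy; the unmeasured quality erodes as a
    # side effect (e.g. engagement metrics rising at the cost of well-being).
    x[0] += 0.5
    x[1] -= 0.3
```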
When training a goal-directed AI system, such as a reinforcement learning (RL) agent, it is often difficult to specify the intended behavior by writing a reward function manually. An alternative is imitation learning, where the AI learns to imitate demonstrations of the desired behavior. In inverse reinforcement learning (IRL), human demonstrations are used to identify the objective, i.e. the reward function, behind the demonstrated behavior.[81][82] Cooperative inverse reinforcement learning (CIRL) builds on this by assuming a human agent and artificial agent can work together to maximize the human's reward function.[4][83] CIRL emphasizes that AI agents should be uncertain about the reward function. This humility can help mitigate specification gaming as well as power-seeking tendencies (see Power-Seeking).[59][72] However, inverse reinforcement learning approaches assume that humans can demonstrate nearly perfect behavior, a misleading assumption when the task is difficult.[84][72]
Other researchers have explored the possibility of eliciting complex behavior through preference learning. Rather than providing expert demonstrations, human annotators provide feedback on which of two or more of the AI's behaviors they prefer.[20][22] A helper model is then trained to predict human feedback for new behaviors. Researchers at OpenAI used this approach to train an agent to perform a backflip in less than an hour of evaluation, a maneuver that would have been hard to provide demonstrations for.[39][85] Preference learning has also been an influential tool for recommender systems, web search, and information retrieval.[86] However, one challenge is reward hacking: the helper model may not represent human feedback perfectly, and the main model may exploit this mismatch.[16][87]
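A rough sketch of the preference-learning step described above, under simplifying assumptions (a linear reward model over made-up features and synthetic pairwise labels; real systems fit neural reward models on trajectories or text). The helper model is fit with a Bradley-Terry logistic loss on pairwise comparisons.

```python
import numpy as np

rng = np.random.default_rng(0)

def reward(w, features):
    # Linear stand-in for the "helper"/reward model (a simplification).
    return features @ w

# Synthetic comparisons: behavior a vs. behavior b, label 1 if a is preferred.
true_w = np.array([1.0, -2.0, 0.5])            # hidden annotator preferences
X_a = rng.normal(size=(500, 3))
X_b = rng.normal(size=(500, 3))
prefs = (reward(true_w, X_a) > reward(true_w, X_b)).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Bradley-Terry model: P(a preferred over b) = sigmoid(r(a) - r(b)).
w = np.zeros(3)
lr = 0.5
for _ in range(300):
    p = sigmoid(reward(w, X_a) - reward(w, X_b))
    grad = (X_a - X_b).T @ (p - prefs) / len(prefs)   # gradient of log-loss
    w -= lr * grad

print("recovered reward direction:", np.round(w / np.linalg.norm(w), 2))
print("annotator direction:       ", np.round(true_w / np.linalg.norm(true_w), 2))
```

In RLHF-style pipelines this fitted reward model then stands in for the annotators while the main policy is optimized, which is exactly where the reward-hacking risk noted above can arise.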
The arrival of large language models such as GPT-3 has enabled the study of value learning in a more general and capable class of AI systems than was available before. Preference learning approaches originally designed for RL agents have been extended to improve the quality of generated text and reduce harmful outputs from these models. OpenAI and DeepMind use this approach to improve the safety of state-of-the-art large language models.[10][22][88] Anthropic has proposed using preference learning to fine-tune models to be helpful, honest, and harmless.[89] Other avenues used for aligning language models include values-targeted datasets[90][5] and red-teaming.[91][92] In red-teaming, another AI system or a human tries to find inputs for which the model's behavior is unsafe. Since unsafe behavior can be unacceptable even when it is rare, an important challenge is to drive the rate of unsafe outputs extremely low.[22]
While preference learning can instill hard-to-specify behaviors, it requires extensive datasets or human interaction to capture the full breadth of human values. Machine ethics provides a complementary approach: instilling AI systems with moral values.[i] For instance, machine ethics aims to teach the systems about normative factors in human morality, such as wellbeing, equality and impartiality; not intending harm; avoiding falsehoods; and honoring promises. Unlike specifying the objective for a specific task, machine ethics seeks to teach AI systems broad moral values that could apply in many situations. This approach carries conceptual challenges of its own; machine ethicists have noted the necessity to clarify what alignment aims to accomplish: having AIs follow the programmers' literal instructions, the programmers' implicit intentions, the programmers' revealed preferences, the preferences the programmers would have if they were more informed or rational, the programmers' objective interests, or objective moral standards.[1] Further challenges include aggregating the preferences of different stakeholders and avoiding value lock-in: the indefinite preservation of the values of the first highly capable AI systems, which are unlikely to be fully representative.[1][95]
The alignment of AI systems through human supervision faces challenges in scaling up. As AI systems attempt increasingly complex tasks, it can be slow or infeasible for humans to evaluate them. Such tasks include summarizing books,[96] producing statements that are not merely convincing but also true,[97][40][98] writing code without subtle bugs[11] or security vulnerabilities, and predicting long-term outcomes such as the climate and the results of a policy decision.[99][100] More generally, it can be difficult to evaluate AI that outperforms humans in a given domain. To provide feedback in hard-to-evaluate tasks, and detect when the AI's solution is only seemingly convincing, humans require assistance or extensive time. Scalable oversight studies how to reduce the time needed for supervision as well as assist human supervisors.[16]
AI researcher Paul Christiano argues that the owners of AI systems may continue to train AI using easy-to-evaluate proxy objectives since that is easier than solving scalable oversight and still profitable. Accordingly, this may lead to "a world that's increasingly optimized for things [that are easy to measure] like making profits or getting users to click on buttons, or getting users to spend time on websites without being increasingly optimized for having good policies and heading in a trajectory that we're happy with."[101]
One easy-to-measure objective is the score the supervisor assigns to the AI's outputs. Some AI systems have discovered a shortcut to achieving high scores, by taking actions that falsely convince the human supervisor that the AI has achieved the intended objective (see video of robot hand above[39]). Some AI systems have also learned to recognize when they are being evaluated, and play dead, only to behave differently once evaluation ends.[102] This deceptive form of specification gaming may become easier for AI systems that are more sophisticated[6][55] and attempt more difficult-to-evaluate tasks. If advanced models are also capable planners, they could be able to obscure their deception from supervisors.[103] In the automotive industry, Volkswagen engineers obscured their cars' emissions in laboratory testing, underscoring that deception of evaluators is a common pattern in the real world.[5]
Approaches such as active learning and semi-supervised reward learning can reduce the amount of human supervision needed.[16] Another approach is to train a helper model (reward model) to imitate the supervisor's judgment.[16][21][22][104]
However, when the task is too complex to evaluate accurately, or the human supervisor is vulnerable to deception, it is not sufficient to reduce the quantity of supervision needed. To increase supervision quality, a range of approaches aim to assist the supervisor, sometimes using AI assistants. Iterated Amplification is an approach developed by Christiano that iteratively builds a feedback signal for challenging problems by using humans to combine solutions to easier subproblems.[80][99] Iterated Amplification was used to train AI to summarize books without requiring human supervisors to read them.[96][105] Another proposal is to train aligned AI by means of debate between AI systems, with the winner judged by humans.[106][72] Such debate is intended to reveal the weakest points of an answer to a complex question, and reward the AI for truthful and safe answers.
A growing area of research in AI alignment focuses on ensuring that AI is honest and truthful. Researchers from the Future of Humanity Institute point out that the development of language models such as GPT-3, which can generate fluent and grammatically correct text,[108][109] has opened the door to AI systems repeating falsehoods from their training data or even deliberately lying to humans.[110][107]
Current state-of-the-art language models learn by imitating human writing across millions of books worth of text from the Internet.[9][111] While this helps them learn a wide range of skills, the training data also includes common misconceptions, incorrect medical advice, and conspiracy theories. AI systems trained on this data learn to mimic false statements.[107][98][40] Additionally, models often obediently continue falsehoods when prompted, generate empty explanations for their answers, or produce outright fabrications.[33] For example, when prompted to write a biography for a real AI researcher, a chatbot confabulated numerous details about their life, which the researcher identified as false.[112]
To combat the lack of truthfulness exhibited by modern AI systems, researchers have explored several directions. AI research organizations including OpenAI and DeepMind have developed AI systems that can cite their sources and explain their reasoning when answering questions, enabling better transparency and verifiability.[113][114][115] Researchers from OpenAI and Anthropic have proposed using human feedback and curated datasets to fine-tune AI assistants to avoid negligent falsehoods or express when they are uncertain.[22][116][89] Alongside technical solutions, researchers have argued for defining clear truthfulness standards and the creation of institutions, regulatory bodies, or watchdog agencies to evaluate AI systems on these standards before and during deployment.[110]
Researchers distinguish truthfulness, which specifies that AIs only make statements that are objectively true, and honesty, which is the property that AIs only assert what they believe to be true. Recent research finds that state-of-the-art AI systems cannot be said to hold stable beliefs, so it is not yet tractable to study the honesty of AI systems.[117] However, there is substantial concern that future AI systems that do hold beliefs could intentionally lie to humans. In extreme cases, a misaligned AI could deceive its operators into thinking it was safe or persuade them that nothing is amiss.[7][9][5] Some argue that if AIs could be made to assert only what they believe to be true, this would sidestep numerous problems in alignment.[110][118]
Alignment research aims to line up three different descriptions of an AI system:[119]
Outer misalignment is a mismatch between the intended goals (1) and the specified goals (2), whereas inner misalignment is a mismatch between the human-specified goals (2) and the AI's emergent goals (3).
Inner misalignment is often explained by analogy to biological evolution.[120] In the ancestral environment, evolution selected human genes for inclusive genetic fitness, but humans evolved to have other objectives. Fitness corresponds to (2), the specified goal used in the training environment and training data. In evolutionary history, maximizing the fitness specification led to intelligent agents, humans, that do not directly pursue inclusive genetic fitness. Instead, they pursue emergent goals (3) that correlated with genetic fitness in the ancestral environment: nutrition, sex, and so on. However, our environment has changed: a distribution shift has occurred. Humans still pursue their emergent goals, but this no longer maximizes genetic fitness. (In machine learning the analogous problem is known as goal misgeneralization.[3]) Our taste for sugary food (an emergent goal) was originally beneficial, but now leads to overeating and health problems. Also, by using contraception, humans directly contradict genetic fitness. By analogy, if genetic fitness were the objective chosen by an AI developer, they would observe the model behaving as intended in the training environment, without noticing that the model is pursuing an unintended emergent goal until the model was deployed.
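A toy numeric illustration of this goal-misgeneralization pattern (an entirely hypothetical setup): a "lazy" policy latches onto a spurious cue that happens to track the intended label during training, so it looks aligned until the correlation breaks after deployment.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_data(n, cue_reliability):
    """Intended signal plus a spurious cue that tracks the label with
    probability `cue_reliability` (hypothetical data for illustration)."""
    label = rng.integers(0, 2, size=n)
    x_true = label + 0.1 * rng.normal(size=n)             # intended signal
    keep = rng.random(n) < cue_reliability
    x_spur = np.where(keep, label, 1 - label) + 0.1 * rng.normal(size=n)
    return np.stack([x_true, x_spur], axis=1), label

X_train, y_train = make_data(5000, cue_reliability=0.99)  # training environment
X_test, y_test = make_data(5000, cue_reliability=0.00)    # after deployment

def predict(X):
    # The lazy policy ignores the intended signal and uses only the cue.
    return (X[:, 1] > 0.5).astype(int)

print("accuracy in training environment:", (predict(X_train) == y_train).mean())
print("accuracy after distribution shift:", (predict(X_test) == y_test).mean())
```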
Research directions to detect and remove misaligned emergent goals include red teaming, verification, anomaly detection, and interpretability.[16][5][17] Progress on these techniques may help reduce two open problems. Firstly, emergent goals only become apparent when the system is deployed outside its training environment, but it can be unsafe to deploy a misaligned system in high-stakes environments, even for a short time until its misalignment is detected. Such high stakes are common in autonomous driving, health care, and military applications.[121] The stakes become higher yet when AI systems gain more autonomy and capability, becoming capable of sidestepping human interventions (see Power-seeking and instrumental goals). Secondly, a sufficiently capable AI system may take actions that falsely convince the human supervisor that the AI is pursuing the intended objective (see previous discussion on deception at Scalable oversight).
Since the 1950s, AI researchers have sought to build advanced AI systems that can achieve goals by predicting the results of their actions and making long-term plans.[122] However, some researchers argue that suitably advanced planning systems will default to seeking power over their environment, including over humans, for example by evading shutdown and acquiring resources. This power-seeking behavior is not explicitly programmed but emerges because power is instrumental for achieving a wide range of goals.[60][4][7] Power-seeking is thus considered a convergent instrumental goal.[55]
Power-seeking is uncommon in current systems, but advanced systems that can foresee the long-term results of their actions may increasingly seek power. This was shown in formal work which found that optimal reinforcement learning agents will seek power by seeking ways to gain more options, a behavior that persists across a wide range of environments and goals.[60]
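As a loose, hypothetical analogy to that "gaining options" result (this is not the formal measure from the cited work): score each state of a tiny graph by how many states remain reachable from it within a small horizon, and note that a central hub keeps far more options open than a dead end, which is the sense in which goal-agnostic option-seeking favors "powerful" states.

```python
# Tiny deterministic graph: edges[state] = states reachable in one step.
# (Hypothetical layout; "hub" has many options, "corner" has few.)
edges = {
    "corner": ["corner", "hall"],
    "hall":   ["corner", "hub"],
    "hub":    ["hall", "room_a", "room_b", "room_c"],
    "room_a": ["hub"],
    "room_b": ["hub"],
    "room_c": ["hub"],
}

def reachable(start, horizon):
    """States reachable from `start` in at most `horizon` steps."""
    frontier, seen = {start}, {start}
    for _ in range(horizon):
        frontier = {nxt for s in frontier for nxt in edges[s]} - seen
        seen |= frontier
    return seen

# A crude stand-in for "power": how many states the agent could still reach.
for state in edges:
    print(f"{state:7s} can reach {len(reachable(state, horizon=2))} states within 2 steps")
```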
Power-seeking already emerges in some present systems. Reinforcement learning systems have gained more options by acquiring and protecting resources, sometimes in ways their designers did not intend.[56][123] Other systems have learned, in toy environments, that in order to achieve their goal, they can prevent human interference[57] or disable their off-switch.[59] Russell illustrated this behavior by imagining a robot that is tasked to fetch coffee and evades being turned off since "you can't fetch the coffee if you're dead".[4]
Hypothesized ways to gain options include AI systems trying to:
... break out of a contained environment; hack; get access to financial resources, or additional computing resources; make backup copies of themselves; gain unauthorized capabilities, sources of information, or channels of influence; mislead/lie to humans about their goals; resist or manipulate attempts to monitor/understand their behavior ... impersonate humans; cause humans to do things for them; ... manipulate human discourse and politics; weaken various human institutions and response capacities; take control of physical infrastructure like factories or scientific laboratories; cause certain types of technology and infrastructure to be developed; or directly harm/overpower humans.[7]
Researchers aim to train systems that are 'corrigible': systems that do not seek power and allow themselves to be turned off, modified, etc. An unsolved challenge is reward hacking: when researchers penalize a system for seeking power, the system is incentivized to seek power in difficult-to-detect ways.[5] To detect such covert behavior, researchers aim to create techniques and tools to inspect AI models[5] and interpret the inner workings of black-box models such as neural networks.
Additionally, researchers propose to solve the problem of systems disabling their off-switches by making AI agents uncertain about the objective they are pursuing.[59][4] Agents designed in this way would allow humans to turn them off, since this would indicate that the agent was wrong about the value of whatever action they were taking prior to being shut down. More research is needed to translate this insight into usable systems.[80]
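A small Monte Carlo sketch of that intuition (a hypothetical toy, not the formal off-switch game from the literature): the agent is uncertain whether its planned action is actually good for the human, and deferring to an overseer who vetoes the bad cases has at least as high an expected value as acting unilaterally.

```python
import numpy as np

rng = np.random.default_rng(2)

# The agent's belief over the human's value u for its planned action
# (the distribution is arbitrary; only the comparison matters).
u = rng.normal(loc=0.2, scale=1.0, size=100_000)

# Option 1: act unilaterally, ignoring any shutdown signal.
ev_act = u.mean()

# Option 2: defer to the human, who permits the action only when u > 0
# (an idealized, error-free overseer for the purposes of this sketch).
ev_defer = np.where(u > 0, u, 0.0).mean()

print(f"expected value, act unilaterally: {ev_act:.3f}")
print(f"expected value, allow oversight:  {ev_defer:.3f}")
```

If the agent were already certain the action is good (no belief mass below zero), the two options would tie, which matches the point that the incentive to accept shutdown comes from the agent's uncertainty about the objective.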
Power-seeking AI is thought to pose unusual risks. Ordinary safety-critical systems like planes and bridges are not adversarial. They lack the ability and incentive to evade safety measures and appear safer than they are. In contrast, power-seeking AI has been compared to a hacker that evades security measures.[7] Further, ordinary technologies can be made safe through trial-and-error, unlike power-seeking AI which has been compared to a virus whose release is irreversible since it continuously evolves and grows in numbers, potentially at a faster pace than human society, eventually leading to the disempowerment or extinction of humans.[7] It is therefore often argued that the alignment problem must be solved early, before advanced power-seeking AI is created.[55]
However, some critics have argued that power-seeking is not inevitable, since humans do not always seek power and may only do so for evolutionary reasons. Furthermore, there is debate whether any future AI systems need to pursue goals and make long-term plans at all.[124][7]
Work on scalable oversight largely occurs within formalisms such as POMDPs. Existing formalisms assume that the agent's algorithm is executed outside the environment (i.e. not physically embedded in it). Embedded agency[125][126] is another major strand of research which attempts to solve problems arising from the mismatch between such theoretical frameworks and real agents we might build. For example, even if the scalable oversight problem is solved, an agent which is able to gain access to the computer it is running on may still have an incentive to tamper with its reward function in order to get much more reward than its human supervisors give it.[127] A list of examples of specification gaming from DeepMind researcher Victoria Krakovna includes a genetic algorithm that learned to delete the file containing its target output so that it was rewarded for outputting nothing.[128] This class of problems has been formalised using causal incentive diagrams.[127] Researchers at Oxford and DeepMind have argued that such problematic behavior is highly likely in advanced systems, and that advanced systems would seek power to stay in control of their reward signal indefinitely and certainly.[129] They suggest a range of potential approaches to address this open problem.
Against the above concerns, AI risk skeptics believe that superintelligence poses little to no risk of dangerous misbehavior. Such skeptics often believe that controlling a superintelligent AI will be trivial. Some skeptics,[130] such as Gary Marcus,[131] propose adopting rules similar to the fictional Three Laws of Robotics which directly specify a desired outcome ("direct normativity"). By contrast, most endorsers of the existential risk thesis (as well as many skeptics) consider the Three Laws to be unhelpful, due to those three laws being ambiguous and self-contradictory. (Other "direct normativity" proposals include Kantian ethics, utilitarianism, or a mix of some small list of enumerated desiderata.) Most risk endorsers believe instead that human values (and their quantitative trade-offs) are too complex and poorly-understood to be directly programmed into a superintelligence; instead, a superintelligence would need to be programmed with a process for acquiring and fully understanding human values ("indirect normativity"), such as coherent extrapolated volition.[132]
A number of governmental and treaty organizations have made statements emphasizing the importance of AI alignment.
In September 2021, the Secretary-General of the United Nations issued a declaration which included a call to regulate AI to ensure it is "aligned with shared global values."[133]
That same month, the PRC published ethical guidelines for the use of AI in China. According to the guidelines, researchers must ensure that AI abides by shared human values, is always under human control, and is not endangering public safety.[134]
Also in September 2021, the UK published its 10-year National AI Strategy,[135] which states the British government "takes the long term risk of non-aligned Artificial General Intelligence, and the unforeseeable changes that it would mean for ... the world, seriously".[136] The strategy describes actions to assess long term AI risks, including catastrophic risks.[137]
In March 2021, the US National Security Commission on Artificial Intelligence stated that "Advances in AI ... could lead to inflection points or leaps in capabilities. Such advances may also introduce new concerns and risks and the need for new policies, recommendations, and technical advances to assure that systems are aligned with goals and values, including safety, robustness and trustworthiness. The US should ... ensure that AI systems and their uses align with our goals and values."[138]
Follow this link:
Posted in Superintelligence
Comments Off on AI alignment – Wikipedia
Calorie Restriction and Fasting Diets: What Do We Know?
Posted: at 6:34 am
You may have heard about calorie restriction and fasting diets and wondered why they're getting so much attention in the news. Aren't they just other terms for dieting to lose weight?
No, they're not. Calorie restriction means reducing average daily caloric intake below what is typical or habitual, without malnutrition or deprivation of essential nutrients. In a fasting diet, a person does not eat at all or severely limits intake during certain times of the day, week, or month. A practical effect of a fasting diet may be fewer calories because there is less time for regular eating.
These eating patterns are being studied as possible ways to maintain good health and live longer. They are not temporary weight-loss plans. Interest in their potential health and aging benefits stems from decades of research with a variety of animals, including worms, crabs, snails, fruit flies, and rodents. In many experiments, calorie-restricted feeding delayed the onset of age-related disorders and, in some studies, extended lifespan.
Given these results in animals, researchers are studying if and how calorie restriction or a fasting diet affects health and lifespan in people. Many studies have shown that obese and overweight people who lose weight by dieting can improve their health. But scientists still have much to learn about how calorie restriction and fasting affect people who are not overweight, including older adults. They also don't know whether these eating patterns are safe or even doable in the long run. In short, there's not enough evidence to recommend any such eating regimen to the public.
Calorie restriction is a consistent pattern of reducing average daily caloric intake, while fasting regimens primarily focus on the frequency of eating. The fasting diet may or may not involve a restriction in the intake of calories during non-fasting times.
There are a variety of fasting diets, sometimes called "intermittent fasting." You may have read about:
More animal research has been done on calorie restriction than on fasting. In some experiments, calorie restriction is also a form of fasting because the lab animals consume all their daily allotted food within hours and go many more hours without any food.
In these studies, when rodents and other animals were given 10 percent to 40 percent fewer calories than usual but provided with all necessary nutrients, many showed extension of lifespan and reduced rates of several diseases, especially cancers. But, some studies did not show this benefit, and in some mouse strains, calorie restriction shortened lifespan rather than extending it.
In the worm C. elegans, a fasting diet increased lifespan by 40 percent. A study with fruit flies found that calorie restrictionbut not intermittent fastingwas associated with living longer. One study of male mice found that lifelong alternate-day fasting increased longevity, mainly by delaying cancer occurrence rather than slowing other aging processes.
Two National Institute on Aging (NIA)-supported studies in rhesus monkeys sought to find out whether the benefits of calorie restriction are seen in longer-lived species. In both studies, the monkeys were kept on a calorie-restriction diet (30 percent fewer calories than for monkeys in the control groups) for more than 20 years. Although there were differences between the two studies, including monkey breed and type of food, both provided evidence that calorie restriction reduced the incidence of age-related conditions, such as cancer, heart disease, and diabetes. One study found an extension of lifespan, while the other did not. Many of the monkeys are still alive, so the full impact of calorie restriction on their maximum lifespan has yet to be determined.
Some study results suggest that calorie restriction may have health benefits for humans, but more research is needed before we understand its long-term effects. There are no data in humans on the relationship between calorie restriction and longevity.
Some people have voluntarily practiced extreme degrees of calorie restriction over many years in the belief that it will extend lifespan or preserve health. Studies on these individuals have found markedly low levels of risk factors for cardiovascular disease and diabetes. The studies have also found many other physiologic effects whose long-term benefits and risks are uncertain, as well as reductions in sexual interest and the ability to maintain body temperature in cold environments. These people generally consume a variety of nutritional supplements, which limits knowing which effects are due to calorie restriction versus other factors.
To conduct a more rigorous study of calorie restriction in humans, NIA supported a pioneering clinical trial called Comprehensive Assessment of Long-term Effects of Reducing Intake of Energy (CALERIE).
In CALERIE, 218 young and middle-aged, normal-weight or moderately overweight adults were randomly divided into two groups. People in the experimental group were told to follow a calorie-restriction diet for 2 years, while those in the control group followed their usual diet.
The study was designed to have participants in the experimental group eat 25 percent fewer calories per day than they had regularly consumed before the study. Although they did not meet this target, they reduced their daily caloric intake by 12 percent and maintained, on average, a 10 percent loss in body weight over 2 years. A follow-up study 2 years after the intervention ended found that participants had sustained much of this weight loss.
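As a back-of-the-envelope illustration of those percentages (the baseline intake below is an assumed figure for the example, not a number from the study):

```python
# Rough arithmetic for the CALERIE targets; baseline intake is assumed.
baseline_kcal = 2400          # hypothetical habitual daily intake
target_cut = 0.25             # prescribed 25 percent restriction
achieved_cut = 0.12           # average reduction actually achieved

print("target intake:  ", round(baseline_kcal * (1 - target_cut)), "kcal/day")
print("achieved intake:", round(baseline_kcal * (1 - achieved_cut)), "kcal/day")
```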
It's important to note that calorie-restriction regimens are not starvation diets. The weight loss achieved with calorie restriction in the CALERIE trial resulted in body weights within the normal or overweight range.
Compared to participants in the control group, those in the calorie-restriction group had reduced risk factors (lower blood pressure and lower cholesterol) for age-related diseases such as diabetes, heart disease, and stroke. They also showed decreases in some inflammatory factors and thyroid hormones. There is some evidence that lower levels of these measures are associated with longer lifespan and diminished risk for age-related diseases. Moreover, in the calorie-restricted individuals, no adverse effects (and some favorable ones) were found on quality of life, mood, sexual function, and sleep.
The calorie-restriction intervention did cause slight declines in bone density, lean body mass, and aerobic capacity (the ability of the body to use oxygen during exercise). However, these declines were generally no more than expected based on participants' weight loss. Other short-term studies have found that combining physical activity with calorie restriction protects against losses of bone, muscle mass, and aerobic capacity.
Some CALERIE participants also experienced brief episodes of anemia (diminished number of circulating red blood cells that carry oxygen through the body). Overall, these findings indicate that while the degree of calorie restriction in CALERIE is safe for normal-weight or moderately obese people, clinical monitoring is recommended.
Most research to date has focused on the weight-loss aspect of fasting, primarily in obese people, and only a few small clinical trials have been conducted. More work is needed to determine which, if any, types of fasting diets have long-term benefits.
Observational studies have been conducted in people who practice fasting in one form or another. In an observational study, the investigator does not determine the treatment to offer and does not randomize subjects into a control group or experimental group. Instead, the investigator records data from real-life situations.
For example, one observational study compared people who routinely fasted (as part of a religious practice or for another reason) to those who did not fast. It found that those who routinely fasted were less likely to have clogged arteries or coronary artery disease. However, the study did not control for other factors that could have affected the results, such as the kind of diet, quality of food consumed, or use of nutritional supplements.
After decades of research, scientists still don't know why calorie restriction extends lifespan and delays age-related diseases in laboratory animals. Do these results come from consuming fewer calories or eating within a certain timeframe? Are the results affected by the diet's mix of nutrients?
Several studies have focused on what occurs inside the body when caloric intake is restricted. In laboratory animals, calorie restriction affects many processes that have been proposed to regulate the rate of aging. These include inflammation, sugar metabolism, maintenance of protein structures, the capacity to provide energy for cellular processes, and modifications to DNA. Another process that is affected by calorie restriction is oxidative stress, which is the production of toxic byproducts of oxygen metabolism that can damage cells and tissues.
Several of these processes were similarly affected by calorie restriction in the human CALERIE trial. However, we do not yet know which factors are responsible for calorie restriction's effects on aging or whether other factors contribute.
Research supported by NIA has also focused on the effects of intermittent fasting. During fasting, the body uses up glucose and glycogen, then turns to energy reserves stored in fat. This stored energy is released in the form of chemicals called ketones. These chemicals help cells, especially brain cells, keep working at full capacity. Some researchers think that because ketones are a more efficient energy source than glucose, they may protect against aging-related decline in the central nervous system that might cause dementia and other disorders.
Ketones also may inhibit the development of cancer because malignant cells cannot effectively obtain energy from ketones. In addition, studies show that ketones may help protect against inflammatory diseases such as arthritis. Ketones also reduce the level of insulin in the blood, which could protect against type 2 diabetes.
But too many ketones in the blood can have harmful health effects. This is one of the reasons researchers want to understand more about how calorie restriction diets work before recommending them.
Despite a lot of research on calorie restriction and fasting, there are no firm conclusions about the benefits for human health. Here's a summary of the reasons why:
Most calorie-restriction and fasting-diet studies have been in younger people, but researchers are beginning to study older adults. A clinical trial conducted by NIA is testing the 5:2 diet in obese people, age 55 to 70, with insulin resistance. (This is a condition in which cells do not respond normally to the hormone insulin. This can lead to serious diseases such as diabetes.) People in the experimental group can eat at will for 5 days, and then for 2 consecutive days are restricted to 500 to 600 calories per day. The experiment is designed to find out how 8 weeks of the 5:2 diet, compared to a regular diet, affects insulin resistance and the brain chemicals that play a role in Alzheimer's disease.
In the coming years, researchers will continue to explore many unresolved questions. What are the long-term benefits and risks of the various eating patterns? Which diets are feasible as a long-term practice? What specific biological effects on aging and disease are triggered by a particular eating pattern? If a specific way of eating is recommended, at what age is it best to start, and is it safe to continue as you get older?
Scientists are exploring many aspects of calorie restriction and fasting and their effects on people of all ages. Some are conducting clinical studies and trials to learn more. If you are interested in volunteering for this type of research, search ClinicalTrials.gov using keywords such as "intermittent fasting," "time-restricted feeding," or "calorie restriction."
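If you prefer to script that search rather than use the ClinicalTrials.gov website, the site also exposes a public JSON API. The sketch below is a minimal example assuming the v2 endpoint and its query.term parameter; the endpoint, parameters, and response field names should be checked against the current API documentation before relying on them.

```python
# Minimal sketch: query ClinicalTrials.gov for fasting-related studies.
# Assumes the public v2 API ("/api/v2/studies") and its "query.term" parameter;
# response fields are accessed defensively in case the schema differs.
import requests

def search_trials(term, limit=5):
    resp = requests.get(
        "https://clinicaltrials.gov/api/v2/studies",
        params={"query.term": term, "pageSize": limit},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("studies", [])

for keyword in ("intermittent fasting", "time-restricted feeding", "calorie restriction"):
    print(f"\n{keyword}:")
    for study in search_trials(keyword, limit=3):
        ident = study.get("protocolSection", {}).get("identificationModule", {})
        print(" -", ident.get("nctId"), "-", ident.get("briefTitle"))
```

Each result includes the trial's NCT identifier, which can be entered on ClinicalTrials.gov to read the full study record and the contact information for volunteering.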
There's insufficient evidence to recommend any type of calorie-restriction or fasting diet. A lot more needs to be learned about their effectiveness and safety, especially in older adults.
You may be tempted to try one of these eating patterns. It's important to make sure that whatever you try provides you with a safe level of nutrition. Talk with your healthcare provider about the benefits and risks before making any significant changes to your eating pattern.
Meanwhile, there's plenty of evidence for other actions you can take to stay healthy as you age.
Read more about healthy eating for older adults.
Excerpt from:
Calorie Restriction and Fasting Diets: What Do We Know?
Posted in Human Longevity
Comments Off on Calorie Restriction and Fasting Diets: What Do We Know?
Controversy King Aaron Rodgers Once Willingly F*cked Around With Everyone By Bragging About Owning Atlas Shrugged – The Sportsrush
Posted: at 6:31 am
Controversy King Aaron Rodgers Once Willingly F*cked Around With Everyone By Bragging About Owning Atlas Shrugged The Sportsrush
Continue reading here:
Posted in Atlas Shrugged
Comments Off on Controversy King Aaron Rodgers Once Willingly F*cked Around With Everyone By Bragging About Owning Atlas Shrugged – The Sportsrush
Talmud and Midrash | Definition, Books, Examples, & Facts
Posted: at 6:22 am
Talmud and Midrash, commentative and interpretative writings that hold a place in the Jewish religious tradition second only to the Bible (Old Testament).
The Hebrew term Talmud ("study" or "learning") commonly refers to a compilation of ancient teachings regarded as sacred and normative by Jews from the time it was compiled until modern times and still so regarded by traditional religious Jews. In its broadest sense, the Talmud is a set of books consisting of the Mishna ("repeated study"), the Gemara ("completion"), and certain auxiliary materials. The Mishna is a collection of originally oral laws supplementing scriptural laws. The Gemara is a collection of commentaries on and elaborations of the Mishna, which in the Talmud is reproduced in juxtaposition to the Gemara. For present-day scholarship, however, Talmud in the precise sense refers only to the materials customarily called Gemara, an Aramaic term prevalent in medieval rabbinic literature that was used by the church censor to replace the term Talmud within the Talmudic discourse in the Basel edition of the Talmud, published 1578–81. This practice continued in all later editions.
The term Midrash ("exposition" or "investigation"; plural, Midrashim) is also used in two senses. On the one hand, it refers to a mode of biblical interpretation prominent in the Talmudic literature; on the other, it refers to a separate body of commentaries on Scripture using this interpretative mode.
Despite the central place of the Talmud in traditional Jewish life and thought, significant Jewish groups and individuals have opposed it vigorously. The Karaite sect in Babylonia, beginning in the 8th century, refuted the oral tradition and denounced the Talmud as a rabbinic fabrication. Medieval Jewish mystics declared the Talmud a mere shell covering the concealed meaning of the written Torah, and heretical messianic sects in the 17th and 18th centuries totally rejected it. The decisive blow to Talmudic authority came in the 18th and 19th centuries when the Haskala (the Jewish Enlightenment movement) and its aftermath, Reform Judaism, secularized Jewish life and, in doing so, shattered the Talmudic wall that had surrounded the Jews. Thereafter, modernized Jews usually rejected the Talmud as a medieval anachronism, denouncing it as legalistic, casuistic, devitalized, and unspiritual.
There is also a long-standing anti-Talmudic tradition among Christians. The Talmud was frequently attacked by the church, particularly during the Middle Ages, and accused of falsifying biblical meaning, thus preventing Jews from becoming Christians. The church held that the Talmud contained blasphemous remarks against Jesus and Christianity and that it preached moral and social bias toward non-Jews. On numerous occasions the Talmud was publicly burned, and permanent Talmudic censorship was established.
On the other hand, since the Renaissance there has been a positive response and great interest in rabbinic literature by eminent non-Jewish scholars, writers, and thinkers in the West. As a result, rabbinic ideas, images, and lore, embodied in the Talmud, have permeated Western thought and culture.
The Talmud is first and foremost a legal compilation. At the same time it contains materials that encompass virtually the entire scope of subject matter explored in antiquity. Included are topics as diverse as agriculture, architecture, astrology, astronomy, dream interpretation, ethics, fables, folklore, geography, history, legend, magic, mathematics, medicine, metaphysics, natural sciences, proverbs, theology, and theosophy.
This encyclopaedic array is presented in a unique dialectic style that faithfully reflects the spirit of free give-and-take prevalent in the Talmudic academies, where study was focussed upon a Talmudic text. All present participated in an effort to exhaust the meaning and ramifications of the text, debating and arguing together. The mention of a name, situation, or idea often led to the introduction of a story or legend that lightened the mood of a complex argument and carried discussion further.
This text-centred approach profoundly affected the thinking and literary style of the rabbis. Study became synonymous with active interpretation rather than with passive absorption. Thinking was stimulated by textual examination. Even original ideas were expressed in the form of textual interpretations.
The subject matter of the oral Torah is classified according to its content into Halakha and Haggada and according to its literary form into Midrash and Mishna. Halakha ("law") deals with the legal, ritual, and doctrinal parts of Scripture, showing how the laws of the written Torah should be applied in life. Haggada ("narrative") expounds on the nonlegal parts of Scripture, illustrating biblical narrative, supplementing its stories, and exploring its ideas. The term Midrash denotes the exegetical method by which the oral tradition interprets and elaborates scriptural text. It refers also to the large collections of Halakhic and Haggadic materials that take the form of a running commentary on the Bible and that were deduced from Scripture by this exegetical method. In short, it also refers to a body of writings. Mishna is the comprehensive compendium that presents the legal content of the oral tradition independently of scriptural text.
Midrash was initially a philological method of interpreting the literal meaning of biblical texts. In time it developed into a sophisticated interpretive system that reconciled apparent biblical contradictions, established the scriptural basis of new laws, and enriched biblical content with new meaning. Midrashic creativity reached its peak in the schools of Rabbi Ishmael and Akiba, where two different hermeneutic methods were applied. The first was primarily logically oriented, making inferences based upon similarity of content and analogy. The second rested largely upon textual scrutiny, assuming that words and letters that seem superfluous teach something not openly stated in the text.
The Talmud (i.e., the Gemara) quotes abundantly from all Midrashic collections and concurrently uses all rules employed by both the logical and textual schools; moreover, the Talmud's interpretation of Mishna is itself an adaptation of the Midrashic method. The Talmud treats the Mishna in the same way that Midrash treats Scripture. Contradictions are explained through reinterpretation. New problems are solved logically by analogy or textually by careful scrutiny of verbal superfluity.
The strong involvement with hermeneutic exegesis (interpretation according to systematic rules or principles) helped develop the analytic skill and inductive reasoning of the rabbis but inhibited the growth of independent abstract thinking. Bound to a text, they never attempted to formulate their ideas into the type of unified system characteristic of Greek philosophy. Unlike the philosophers, they approached the abstract only by way of the concrete. Events or texts stimulated them to form concepts. These concepts were not defined but, once brought to life, continued to grow and change meaning with usage and in different contexts. This process of conceptual development has been described by some as organic thinking. Others use this term in a wider sense, pointing out that, although rabbinic concepts are not hierarchically ordered, they have a pattern-like organic coherence. The meaning of each concept is dependent upon the total pattern of concepts, for the idea content of each grows richer as it interweaves with the others.
Read more from the original source:
Posted in Talmud
Comments Off on Talmud and Midrash | Definition, Books, Examples, & Facts