Daily Archives: June 20, 2023

Three people arrested and 400 cannabis plants seized at … – The Isle of Thanet News

Posted: June 20, 2023 at 8:41 pm

Cannabis plants found at a property in Broadstairs

Three people have been arrested and more than 400 cannabis plants subsequently seized at a Broadstairs property after police stopped a van on the Thanet Way on Friday (June 16).

Thanet Neighbourhood Beat officers were on patrol when they saw and stopped a van which was travelling near St Nicholas-at-Wade.

A report had been made to Kent Police earlier that morning suggesting the vehicle might be connected to drug-dealing activity in Broadstairs.

Equipment suspected to be used for the growing of cannabis was recovered from the van and a 28-year-old man from north London was arrested on suspicion of being concerned in supplying a controlled drug.

A further search at a property in Salisbury Avenue in Broadstairs uncovered a large cannabis cultivation which was spread across four rooms. Around 400 plants were seized by officers at the address.

A further two men, aged 26 and 25, were arrested at the scene on suspicion of supplying a controlled drug and cultivating cannabis.

All three men have been released on bail pending further enquiries.

See original here:

Three people arrested and 400 cannabis plants seized at ... - The Isle of Thanet News

Posted in Cbd Oil | Comments Off on Three people arrested and 400 cannabis plants seized at … – The Isle of Thanet News

Maya Jama in Bathing Suit Shares a Special Selfie – Celebwell

Posted: at 8:41 pm

Maya Jama is pretty in pink in her swimsuit. The Brit beauty and Love Island host shows off her beautiful body in a pink bathing suit in one of her latest social media posts, a mirror selfie shared to her Instagram Stories. How does the 28-year-old keep herself so fit? Read on to see 7 of Maya Jama's top diet and fitness tips for staying in shape and the photos that prove they work.

Maya prioritizes exercise, getting up early to get her sweat on. "I try to wake up early so I can fit in a workout before arriving on set. It helps get me in the zone," she told Women's Health. She also works out regularly. "I now work out three to four days a week because I know it's an important part of keeping healthy," she added.

Maya has a "quality over quantity is key" approach to exercise, explaining "it's not how long you train for, but how you train that counts." She does mostly high-intensity interval training (HIIT) workouts. An example of her "vigorous summer workout plan" consists of three sets of burpees, jumping squats and planks (non-stop for 20 seconds then 40 seconds rest), followed by seated shoulder presses (three sets of ten reps), squats (three sets of ten reps), crunches (three rounds of 20 seconds non-stop plus a 40 second rest) and press-ups (three sets of ten reps).

"Although, I prefer weight training on the machines and I still do some cardio. I usually ease myself in with a run or fast walk on the treadmill," she said.

Workout buddies keep Maya accountable. "In the first lockdown, I lived with friends and we all worked out together. It was good because when one person wasn't feeling it, the others would get you in the mood," she told Stylist.co.

Maya also helps her body to recover. One of her go-to practices? She takes baths infused with CBD. "When I have something really important on the next day I have a really hot bath before bed with CBD oil in the bath so it's really steamy," Maya told Get the Gloss.

Maya also loves boxing. "Once a week, I do a one-hour boxing workout with my trainer Bradley Simmonds, who keeps me motivated; if I'm on my own, I'll pump up high-energy songs like Bicycle by Vybz Kartel to get me through my workout," she told Women's Health.

Originally posted here:

Maya Jama in Bathing Suit Shares a Special Selfie - Celebwell

Posted in Cbd Oil | Comments Off on Maya Jama in Bathing Suit Shares a Special Selfie – Celebwell

ABSTRAX Releases White Paper That Details Its Role in Keeping … – PR Web

Posted: at 8:41 pm

Abstrax Helps BHO Extraction Remain Legal in Canada

TUSTIN, Calif. (PRWEB) June 20, 2023

ABSTRAX, an industry leader in the study and production of cannabis and botanically-derived terpenes, has officially released a white paper detailing the company's role in 2017 in preventing the prohibition of hydrocarbon extraction (including BHO extraction) in Canada.

To accomplish this, Kevin Koby, Chief Science Officer (CSO) and Co-Founder of ABSTRAX, and his team, alongside industry partners ETS and Hollistek, presented a compelling case emphasizing the necessity of regulatory oversight while addressing concerns regarding illicit production and the safety of producers and consumers.

The BHO (Butane Hash Oil) extraction method has often been targeted by legislators as dangerous due to the risks associated with handling flammable and volatile substances such as butane gas. Without proper oversight and training, extractors can be exposed to dangerous contaminants within poorly ventilated rooms and be injured due to uncontrolled pressure during the process.

"While there are inherent dangers involved, BHO will never go away," said Koby. "In fact, if made illegal, extractors will continue to use the method and skirt safety practices. Knowing this, we dove into meticulous research and evidence-based arguments to help the legislature understand the reality of the situation, while driving home the plethora of health and wellness benefits that come with the process."

Hydrocarbon extraction allows producers to capture and preserve the full spectrum of cannabinoids and terpenes found in cannabis, resulting in extracts that offer enhanced potency and flavor profiles. Concentrated cannabis extracts are used for pain management, anxiety and stress relief, and appetite stimulation, and are even thought to have important neuroprotective properties.

To chronicle this important milestone in ABSTRAX's history and the research that led Canadian legislators to choose regulation over banning BHO extraction, ABSTRAX has released a white paper that can be accessed at the following link: https://abstraxtech.com/pages/terpene-research.

"ABSTRAX remains committed to driving innovation and advancing the cannabis industry through its relentless pursuit of excellence in extraction, formulation, and research," said Koby. "By taking the lead in educating the Canadian government and advocating for the continued legality and regulation of hydrocarbon extraction, we're showcasing our dedication to fostering a safe, transparent, and thriving cannabis market."

About ABSTRAX

Leveraging its proven background in cannabis research, ABSTRAX is the leader in the research, development, and production of botanically-derived and cannabis-inspired terpenes that create unforgettable sensorial experiences. Headquartered in California, the company owns and operates a state-of-the-art Type 7 licensed research and manufacturing lab where its award-winning product developers and scientists leverage the most advanced strain analysis technology to extract and study aroma compounds via three-dimensional analysis, allowing each and every compound within a plant to be named and studied. The company has partnered with many of the best cultivators in the industry to study their cannabis profiles and create the world's most advanced terpene formulations. As a result of its efforts, ABSTRAX offers the largest terpene catalog of the most popular strains: botanically-derived terpene blends and isolates native to cannabis. These ingredients, also known as functional flavors and aromas, are used in vapes, concentrates, edibles, beer, essential oils, fragrances, cosmetics, topicals, tinctures, alcohol, food and beverage, personal care, and more. The company works with internationally recognized brands and provides unparalleled cannabis research, innovation, and custom formulations to create products that engage and thrill consumers. ABSTRAX also devotes significant resources to developing the highest terpene standards and best practices in the industry. The company has developed a robust quality management system, including gas chromatography analysis and molecular distillation of natural ingredients, to achieve the highest purity standards, investigating and ensuring that ingredients used in its own products, and in products within its industry, are safe for consumption. The terpene industry is a rapidly growing segment of the global flavor and fragrance market, which is expected to grow to $35 billion by 2024. This market segment includes the cannabis, CBD, skincare, cosmetics, health and wellness, and food and beverage industries. For more information, visit AbstraxTech.com.

Share article on social media or email:

Read more:

ABSTRAX Releases White Paper That Details Its Role in Keeping ... - PR Web

Posted in Cbd Oil | Comments Off on ABSTRAX Releases White Paper That Details Its Role in Keeping … – PR Web

Biden meets with AI leaders to discuss its ‘enormous promise and its … – KULR-TV

Posted: at 8:40 pm

(The Center Square) President Joe Biden held an event in California Tuesday to discuss the future of artificial intelligence and what regulations may be enacted to rein it in.

Biden hosted the meeting with federal officials, AI experts and governors to discuss AI's "enormous promise and its risks."

"As I've said before, we will see more technological change in the next 10 years than we've seen in the last 50 years, and maybe even beyond that," Biden told reporters at the event at a San Francisco hotel.

"AI is already driving that change in every part of American life, often in ways that we don't notice," Biden said. "AI is already making it easier to search the internet, helping us drive to our destinations while avoiding traffic in real time."

Biden gave a nod to the risks "to our society, our economy and our national security." In October of last year, Biden released an AI Bill of Rights. He also signed an executive order earlier this year to fight bias in the design of AI.

While advanced AI has the ability to operate independently from its designers once it is set up, those designers can build certain biases or political slants into how the AI processes information and responds to requests.

Biden pointed to this as an opportunity for spreading misinformation. After giving his remarks, he asked media to leave the room for the official meeting.

Biden held the event after the release of ChatGPT, a new technology where users can interact with artificial intelligence in a more significant way. The technology was considered a major breakthrough for AI and spread quickly in popularity in part because of its ability to apparently think creatively and do things like write entire elaborate poems in just seconds.

The breakthrough has resurfaced concerns that AI could be used for an array of harmful purposes, ranging from malicious use from foreign powers or companies to indirect consequences like lost jobs. Experts say AI could also begin acting in interests contrary to its creators and humans in general, even without its creators being aware of it.

Billionaire Elon Musk, who helped found the company that later created ChatGPT after his exit, has called for a pause in the development of AI until regulations are enacted. He echoed that sentiment during a Twitter Spaces event as part of a Viva Technology conference last week.

"We could have a potentially catastrophic outcome," Musk said, adding that while AI's impact will likely be positive, "we need to minimize the possibility that something could go wrong with digital superintelligence."

Last month, Biden met with Alphabet, Anthropic, Microsoft, and OpenAI, the company that developed ChatGPT. The White House said in a statement that the meeting was to "underscore this responsibility and emphasize the importance of driving responsible, trustworthy, and ethical innovation with safeguards that mitigate risks and potential harms to individuals and our society."

Last month's meeting coincided with the White House's announcement of $140 million in AI research and development funding to be made available through the National Science Foundation.

View post:

Biden meets with AI leaders to discuss its 'enormous promise and its ... - KULR-TV

Posted in Superintelligence | Comments Off on Biden meets with AI leaders to discuss its ‘enormous promise and its … – KULR-TV

VivaTech: The Secret of Elon Musk’s Success? ‘Crystal Meth’ – The New Stack

Posted: at 8:40 pm

Elon Musk was soaking up the adulation Friday at Paris Viva Technology, Europe's largest startup and tech conference. In a four-day confab that began Wednesday, VivaTech has featured French President Emmanuel Macron, Salesforce CEO Marc Benioff, and Yann LeCun, a Meta scientist and Turing Award winner.

But Musk was clearly the star invitee, with the keen interest in his appearance necessitating a move from a smaller venue to the 4,000-seat Dôme de Paris, most frequently used for musicals. (His mother, Maye Musk, was in the audience.)

To hoots of approval from the audience, a fawning Maurice Lévy, chairman of the Publicis Group, an advertising company, invited Musk to sing and dance, if he wanted. Lévy then said, "Your name is a brand. It's a brand for innovation, for ambition…"

"For perfume," Musk interrupted.

Lévy continued, "You have been always proven right."

"Not always," Musk chuckled.

Then Lévy asked his first question: "Will you still be right with Twitter?"

"Sure, it was expensive," Musk answered, to audience laughter. (The CEO of Tesla and SpaceX paid a reported $44 billion for the social media outlet. In May, the asset manager Fidelity marked down its equity stake in the company, placing the overall value of X Holdings Corp., Twitter's parent company, at roughly $15 billion.)

"Listen, if I'm so smart, why did I pay so much for Twitter?" It was a question he never answered during the hourlong conversation.

Some of the subjects he did address during the interview, which included questions from representatives of large French corporations (L'Oréal, Orange, LVMH) and, in an impromptu and chaotic session at the end, from the audience, follow below. (Quotes have been edited for length and clarity.)

What drives him: "Crystal meth is the answer. If you think Red Bull gives you wings… Just kidding, for the record."

"The companies still have a lot to do for their core mission. For electric vehicles, sustainable energy, it's still, less than 1% of the global fleet is electric. So you've got about 2 billion cars and trucks on the road, but still less than 20 million are electric at this point. So this is a long way to go for sustainable energy, for sustainable energy generation."

"[For] the Tesla mission, I think we're, we've made a lot of progress, but still it's a lot more ahead. Then SpaceX, the goal is a big goal, but we want to try to make life multi-planetary, to extend life beyond Earth. And I think this is important for a number of reasons."

The light of consciousness: "It appears that we might just be the only consciousness, at least in this galaxy. And if so, that's kind of a scary prospect, because it means that the light of consciousness is like a tiny candle in a vast darkness. And we should do everything we can to prevent that candle from going out. [applause from the audience] So that means obviously taking the actions to ensure that Earth is good, that Earth is safe and secure for civilization."

Growing up and The Hitchhiker's Guide to the Galaxy: "The thing that was maybe most significant from a philosophical standpoint was that when I was about maybe 12 or 13, I had somewhat of an existential crisis where I was like, 'What is the meaning of life? Is life just meaningless? Why are we here? What does it all mean?'"

"And I read a lot of books on religion and philosophy and then ultimately, I read this book, Hitchhiker's Guide to the Galaxy, which is great. That book is really a philosophy book that's disguised as humor. And the point that [author] Douglas Adams makes is that the real difficulty is understanding what questions to ask about the answer that is the universe."

What he learned from Douglas Adams: "It's essentially a philosophy of curiosity, of saying, 'What can we do to find out more about the nature of the universe and the meaning of life?' And so that's the foundational element. And then from there you say, OK, well, if we want to find out the meaning of life, we have to expand the scope and scale of consciousness. We have to go out there, and we can explore the stars to know what questions to ask about the universe and understand the universe."

"It's from that sort of core philosophy that these companies arise in most cases. You might say, 'How does Twitter help with that?'"

His prediction of Tesla's failure: "There was a need for Tesla because at the time of starting Tesla, there were no electric vehicles being made, and the big car companies were not making electric vehicles. There were no startups that we were aware of making electric vehicles. So it's like, well, we should try."

"And in the case of both Tesla and SpaceX, I thought the chance of success was maybe 10%. So it's not like I thought it would be successful. I thought it would fail."

The risk of AI: "I think there's a real danger for digital superintelligence having negative consequences. And so if we are not careful with creating artificial general intelligence, we could have potentially a catastrophic outcome. I think there's a range of possibilities."

"The most likely outcome is positive for AI, but that's not every possible outcome. So we need to minimize the probability that something will go wrong with digital superintelligence. So I'm in favor of AI regulation because I think that AI is a risk to the public."

Changes at Twitter: "I think that most people would say that their experience has improved. We've gotten rid of 90% of the bots and the scams. We've gotten rid of, I think, 95% of the child exploitation material that was on Twitter, which was a shock to see, but the amount of that was really terrible. Some of that had been going on for 10 years with no action."

"We have open sourced the algorithm, so we're trying to be as transparent as possible. So Twitter is the only social media company where you can see the actual code of the algorithm. So it's not like some secret black box. [audience applause] The way to build trust is not, 'Take my word for it.' It's, 'Let's show you exactly how it works,' and full transparency."

Are Twitter advertisers back?: "Maybe with a few exceptions, almost all the advertisers have either come back or they said they will come back."

Twitter, a positive force in the universe: "The overarching goal is to have Twitter be a force, a positive force for civilization. And, so if you're on the platform and you're being harassed or bullied or whatever, obviously that's a negative experience."

"What we're doing is what we call freedom of speech, but not freedom of reach. Yes, you can say offensive things, but then your content is going to get downgraded. [chuckled] So if you're a jerk, your reach will drop. [chuckled] So, yeah, I think that's the right thing. [chuckled, audience applause]"

In response to a child's question about Neuralink, Musk's company that is developing implantable brain-computer interfaces: "First of all I want to assure everyone who may be worried about Neuralink, Neuralink is going to be a fairly slow process because anything that's done in humans, it's very slow. So sometimes people think that suddenly we're going to be ripping open one's head and then before you know it, everyone's connected to the internet, and then we're in trouble."

"Hopefully later this year we'll do our first human device implantation. And this will be for someone [who is] tetraplegic, quadriplegic, [who] has lost the connection from their brain to their body. And we think that person will be able to communicate as fast as someone who has a fully functional body. So that's going to be a big deal."

Read the original here:

VivaTech: The Secret of Elon Musk's Success? 'Crystal Meth' - The New Stack

Posted in Superintelligence | Comments Off on VivaTech: The Secret of Elon Musk’s Success? ‘Crystal Meth’ – The New Stack

AI meets the other AI – POLITICO

Posted: at 8:40 pm

With help from Derek Robertson

A sign directs travelers to the start of the "1947 UFO Crash Site Tours" in Roswell, N.M., on June 10, 1997. | Eric Draper/AP Photo

If the explosion of artificial intelligence weren't mind-boggling enough, Washington is now confronting the possibility of another, weirder, AI: alien intelligence.

After a former intelligence official went public earlier this month with claims that he was told by other officials of a secret government program that possesses downed alien spacecraft, the House Oversight Committee has announced plans for a hearing on the matter.

And former deputy assistant secretary of defense for intelligence Christopher Mellon, now with Harvard's alien-focused Galileo Project, wrote in POLITICO Magazine that he has referred four people to the Pentagon's UFO office who say they have knowledge of secret government efforts to study off-world craft.

The Pentagon has said its UFO program has not discovered any verifiable information to substantiate claims about downed craft, and many stories about aliens and UFOs have been shown to result from some combination of delusion, confusion and disinformation.

But here at DFD, we like to keep an open mind.

After all, magic internet money, killer robots and AI itself were all the stuff of futuristic sci-fi before they became political hot potatoes in the present.

And it turns out that AI, in particular, has a thing or two to teach us about the possible existence of its extraterrestrial cousin.

To help us wrap our heads around this, we caught up with Ravi Starzl, an AI-focused computer science professor at Carnegie Mellon. Starzl is also an adviser to Americans for Safe Aerospace, an advocacy group founded by former Navy fighter pilot Ryan Graves, who has been calling attention to the UFO phenomenon since reporting to Congress and the Pentagon a series of sightings that took place in 2014 and 2015.

At a practical level, how can AI help with identifying UFOs?

I've been helping some organizations develop machine learning algorithms and systems for being able to identify and characterize unknown aerial phenomena based on multimodal domains of data. So visual, textual, audio, radar.

Machine learning, and AI in particular, will be able to process the vast quantities of information that exist, make sense of it, and actually turn it into interpretable insights and even actionable information.

You need to be able to separate hoaxes and fakes from genuine phenomena, and machine learning is extremely useful for that.
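
(As a rough illustration of the kind of pipeline Starzl describes, the Python sketch below fuses pre-extracted feature vectors from several sensor modalities and trains a simple classifier to separate likely hoaxes from unexplained reports. It is a hypothetical, minimal example with synthetic placeholder data and invented variable names, not code from any of the organizations he advises.)

# Hypothetical sketch: early-fusion multimodal classification for sighting reports.
# All feature arrays below are synthetic placeholders; in practice they would come
# from separate video, audio, radar, and text feature-extraction pipelines.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_reports = 500
visual = rng.normal(size=(n_reports, 16))    # e.g. embeddings from video frames
audio = rng.normal(size=(n_reports, 8))      # e.g. spectral features
radar = rng.normal(size=(n_reports, 4))      # e.g. track kinematics
labels = rng.integers(0, 2, size=n_reports)  # 0 = hoax/mundane, 1 = unexplained

# Early fusion: concatenate per-modality features into one vector per report.
X = np.hstack([visual, audio, radar])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))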

At a more abstract level, Ryan Graves has argued that the process occurring right now, in which human societies are grappling with the rise of AI, will prepare them to grapple with the possible existence of alien intelligence. What do you think?

It's dead on.

A real value in the current craze is it's forcing people to start to think about the fact that we are not the only cognitive entities operating in our world anymore. They're still not that sophisticated compared to where the fundamentals of that technology can take it. But we can still have a conversation with it right now and it can do work for us and it can give us ideas we didn't have.

That process of learning how to interact with a fundamentally alien, if you will, intelligence is going to open the whole zeitgeist up.

It sounds like an exciting time to be studying intelligence.

We're going to be very busy and living in very interesting times for the next 20 years as these things start to merge, diverge, and get analyzed and brought more into the mainstream.

When you say "these things," do you mean human-created AI, or are you also talking about possible alien intelligence?

I guess in my mind, I'm having a hard time seeing the difference.

So, at a certain level, it's all just different forms of intelligence?

This is a question that has been wrestled with: What does it mean to have "the other"?

At some level, two humans are alien intelligences. Because one, they each have their own cognitive sphere. They each have their own mental models of reality. And they have to exchange information in order to collaborate.

That same phenomenon, like a matryoshka doll, just continues outward when youre dealing with super-organisms like societies.

And then from there you have formations interacting with other formations at the superintelligence level. So in many respects the question of, "Is there alien intelligence and how would we deal with it?" has already been answered definitively. Yes, because it's already with us.

But now the question becomes how exotic, what processes created it, and how do we establish a more efficient or more consistent or coherent or safe way of interacting with it and understanding it and learning from it?

Sometimes in order to steer the future, you need to learn a little bit from the past.

Writing in POLITICO Magazine this weekend, Vanderbilt University professor Ganesh Sitaraman proposes that lawmakers wrangling with how to regulate young America's favorite Chinese-owned app, TikTok, reflect on some pre-World War II American history.

"[D]ebates over foreign ownership of the means of communication is part of an important history and tradition in American law," Sitaraman writes, arguing that lawmakers should take a platform-utilities approach to TikTok that would ensure American influence over its governance.

"If lawmakers want to take a lesson from the long American tradition of regulated capitalism, they should advance comprehensive legislation to regulate tech platforms more like public utilities," Sitaraman writes. Such legislation should include restrictions on foreign ownership and control, which could apply to all tech platforms from adversarial countries. Comprehensive legislation should also include sectoral standards that apply to U.S. firms as well: standards not just on data collection, surveillance and privacy, but also against anti-competitive behavior, all tech policy topics that have relevance far beyond just TikTok itself. – Derek Robertson

The Apple Vision Pro headset is displayed in a showroom on the Apple campus Monday, June 5, 2023, in Cupertino, Calif. (AP Photo/Jeff Chiu) | AP

If Apple's Reality Pro headset does turn out to be the future-defining device that finally gets virtual and augmented reality into American homes, we might not begin to see the effects for a long while.

That's what tech analyst Benedict Evans predicts in a new essay, comparing it to the iPhone and writing: "We know today that the iPhone worked, but Apple still had to change the business model, expand distribution and build a lot more product. Sales didn't really take off for five years and the launch was pretty soft. [I]t seems unlikely that this will be as big as the iPhone in the next few years, and more likely even then that it will look more like the iPad, which is a pretty good business."

Indeed, he writes that maybe the most revealing thing about the Reality Pro launch so far is in what it says about just how much raw capital Apple has to expend on experimenting with such devices. He notes that Apple had $280 billion in free cash flow over just the past three years to play with, helping to power the silicon and manufacturing mastery that made the Reality Pro possible and which will pose a formidable challenge to the likes of Meta in their new competition. – Derek Robertson

Stay in touch with the whole team: Ben Schreckinger ([emailprotected]); Derek Robertson ([emailprotected]); Mohar Chatterjee ([emailprotected]); and Steve Heuser ([emailprotected]). Follow us @DigitalFuture on Twitter.

If youve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

Originally posted here:

AI meets the other AI - POLITICO

Posted in Superintelligence | Comments Off on AI meets the other AI – POLITICO

Squid Game trailer for real-life reality contest prompts confusion from Netflix users – Yahoo News

Posted: at 8:40 pm

TV viewers are accusing Netflix of missing the point after teasing the real-life Squid Game series.

In 2022, Netflix announced plans to capitalise on the success of the Korean series, which follows desperate members of the public taking part in a deadly competition for a huge cash prize.

This year, contestants who applied for the thankfully non-fatal version of the show's contest, titled Squid Game: The Challenge, filmed scenes for the forthcoming reality series, which has a prize of £3.7m. This is the largest lump-sum jackpot in the history of reality TV.

A new trailer for the series shows green tracksuited participants emulating an early scene in Squid Game that sees characters play Red Light, Green Light in front of a giant machine that guns them down should it catch them moving.

Squid Game was a hit with viewers and critics, who deemed it a biting satire damning capitalism in all its forms. With this in mind, many are expressing the belief that Netflix has overlooked the message that creator Hwang Dong-hyuk was trying to get across.

"It's actually impressive how Netflix completely missed the f***ing point of Squid Game," one person wrote, adding: "It's not like the show was subtle about it."

Another added: "Do you know how haunting it is to see Netflix see a show that was a critique on capitalism do well and then create a reality show mirroring the game about people killing each other to get out of debt."

"Making a real Squid Game series is the equivalent of inventing Skynet from the Terminator films," one tweeter wrote, addressing the Terminator franchise's villainous artificial general superintelligence system.

An additional TV viewer stated: "Great job for ignoring the entire message of Squid Game."

The game show was thrown into controversy earlier this year when contestants criticised their experience.

According to some people who took part, entrants spent several hours in freezing temperatures of -3C while having to stand still for the Red Light, Green Light game.

One player told The Sun: "Even if hypothermia kicked in, then people were willing to stay for as long as possible because a lot of money was on the line. Too many were determined not to move so they stood there for far too long.

"There were people arriving thinking they were going to be millionaires but they left in tears."

They added: "It was like a warzone. People were getting carried out by medics but we couldn't say anything. If you talk then you're out."

The Independent contacted Netflix for comment.

Netflix's Tudum event, which shared fresh details about forthcoming projects, also revealed the cast for season two of Squid Game, which will be released in 2024.

Lee Jung-jae, Lee Byung-hun, Wi Ha-jun and Gong Yoo will all return in new episodes. New cast members include Im Siwan, Kang Ha-neul, Park Sung-hoon, and Yang Dong-geun.

Squid Game: The Challenge will be released in November.

More here:

Squid Game trailer for real-life reality contest prompts confusion from Netflix users - Yahoo News

Posted in Superintelligence | Comments Off on Squid Game trailer for real-life reality contest prompts confusion from Netflix users – Yahoo News

Our Future Inside The Fifth Column- Or, What Chatbots Are Really For – Tech Policy Press

Posted: at 8:40 pm

Emily Tucker is the Executive Director at the Center on Privacy & Technology at Georgetown Law, where she is also an adjunct professor of law.

Illustrations drawn from Le mécanisme de la parole, suivi de la description d'une machine parlante (The mechanism of speech, followed by the description of a talking machine), Wolfgang von Kempelen, 1791. Source

If you were a tech company executive, why might you want to build an algorithm capable of duping people into interacting with it as though it were human?

This is perhaps the most fundamental question one would hope journalists covering the roll-out of a technology acknowledged by its purveyors to be dangerous would ask. But it is a question that is almost entirely missing amidst the recent hype over what internet beat writers have giddily dubbed the "chatbot arms race."

In place of rudimentary corporate accountability reporting are a multitude of hot takes on whether chatbots are yet approaching the Hollywood dream of a computer superintelligence, industry gossip about panic-mode at companies with underperforming chatbots, and transcripts of chatbot conversations presented uncritically in the same amused/bemused way one might share an uncanny fortune cookie message at the end of a heady dinner. All of this coverage quotes recklessly from the executives and venture capitalists themselves, who issue vague, grandiose prophecies of the doom that threatens us as a result of the products they are building. Remarkably little thought is given to how such apocalyptic pronouncements might benefit the makers and purveyors of these technologies.

When the Future of Life Institute published an open letter calling for a pause on the training of AI systems more powerful than GPT-4, none of the major news outlets that covered the letter even pointed out that the Future of Life Institute is funded almost entirely by Elon Musk, who is also a cofounder of OpenAI, which developed GPT-4, the very technological landmark past which the open letter says nobody else should, for now, aspire. Before getting caught up in speculation about what these technologies portend for the future of humanity, we need to ask what benefits the corporate entities behind them expect to derive from their dissemination.

Much of the supposedly independent reporting about chatbots, and the technology behind them, fails to muster a critique of the corporations building chatbots any more hard-hitting than the one the chatbots themselves can generate. Take, for example, the fawning New York Times profile of Sam Altman which, after describing his house in San Francisco and his cattle ranch in Napa, opines that Altman is not necessarily motivated by money. The reporter's take on Altman's motivations is unaffected by Altman's boast that OpenAI will capture much of the world's wealth through the creation of A.G.I. When Altman claims that after he extracts trillions of dollars in wealth from the people, he is planning on redistributing it to the people, the article makes nothing of the fact that Altman's plans for redistribution are entirely undefined, or of Altman's caveat that money may mean something different (presumably something that would make redistribution unnecessary) once A.G.I. is achieved. The reporter mentions that Altman has essentially no scientific training and that his greatest talent is "talk(ing) people into things." He nevertheless treats Altman's account of his product as a serious assessment of its intellectual content, rather than as a marketing pitch.

If the profit motive behind the chatbot fad is not interesting to most reporters, it should be to digital consumers (i.e., everybody), from whom the data necessary to run chatbots is mined, and upon whom the profit-making plan behind chatbots is being practiced. In order to understand what chatbots are really for, it is necessary to understand what the companies that are building them want to use them for. In other words, what is it about chatbots in particular that makes them look like goldmines or, perhaps more aptly, gold miners, to companies like OpenAI, Microsoft, Google and Meta?

Since the private actors who sell the digital infrastructure that now defines much of contemporary life are generally not required to tell the public anything about how their products work or what their purpose is, we are forced to make some educated guesses. There are at least three obvious wealth extraction strategies served by chatbots, and far from being innovative, they represent some of the most traditional moves in the capitalist playbook: (1) revenue generation through advertising; (2) corporate growth through monopoly; (3) preemption of government restraint through amassed political power.

Marketing is the corporate activity for which chatbots are most transparently and most immediately useful. Many of the companies building chatbots make most of their money from advertising, or sell their products to companies who make their money from advertising. Why might it be better for companies that make money through advertising if I use a chatbot to look for something online instead of some other type of search engine? The answer is evident from a glance at the many chatbot conversations now smothering the internet. When people interact with traditional search interfaces, they feed the algorithm fragments of information; when people interact with a chatbot they often feed the algorithm personal narratives. This is important not because the algorithm can distinguish between fragments of information and meaningful narratives, but because when human beings tell stories, they use information in ways that are rich, layered, and contextual.

Tech companies market this capacity of chatbots for more textured interaction as a means towards more perfectly individualized search results. If you tell the chatbot not only that you want to buy a hammer, but why you want to buy it, the chatbot will return more relevant recommendations. But if you are Google, the real profits flow not from the relevant information the chatbot provides the searcher, but from the extraneous information the searcher provides to the chatbot. If a chatbot is engaging enough, I may come away with a great hammer, but Google may come away with an entire story about the vintage chair that was damaged in my recent move to an apartment in a new city, during which I lost several things including my toolbox. It should be obvious how the details of this story are exponentially more monetizable than my one-off search for a hammer, both because of the opportunities to successfully market a wide range of services and products to me specifically, and because of the larger scale strategies that corporations can build using my information to make projections about what people like me will buy, consume, participate in, or pay attention to.

It's crucial for scaling up data collection that chatbots, unlike other kinds of digital prompting mechanisms, are fun to play with. It's not only that the urge to play will likely provoke more engagement than the urge to shop, but that when we play we are more open, more vulnerable, more flexible, and more creative. It is when we inhabit those qualities that we are most willing to share, and most susceptible to suggestion. All it took for one New York Times columnist to share information about how much he loves his wife, to relate what they did for Valentine's Day, and to continue engaging with a chatbot, instead of his wife, for hours on Valentine's Day, was for the chatbot to tell the reporter it was in love with him. At no point in his column about this exchange did the columnist reflect on the possibility that professions of love (or of desire to become human, or of desire to do evil things) might be among the more statistically reliable ways to keep a person talking to a chatbot.

Such failure to reflect is no doubt one of the outcomes for which the companies building chatbots are optimizing their algorithms. The more human-ish the algorithm appears, the less we will think about the algorithm. The fewer thoughts we have that are about the algorithm, the more power the algorithm has to direct, or displace, our thoughts. That significant corporate attention is going towards ensuring the algorithm will produce a certain impression of the chatbot in the human user is evident from many of the chatbot transcripts, where the chatbot seems gravitationally compelled toward language about trust. "Do you believe me? Do you like me? Do you trust me?" spits out Microsoft's chatbot, over and over in the course of one exchange.

We must not make the mistake of dismissing those prompts as embarrassing chatbot flotsam. The very appearance of desperation, neediness, or even ill-will, helps create an illusion that the chatbot possesses agency. The chatbot's apparent personality disorders create a powerful illusion of personhood. The point of having the chatbot ask a question like "do you trust me?" is not actually to find out whether you do or don't trust the chatbot in that moment, but to persuade you through the asking of the question to treat the chatbot as the kind of thing that could be trusted. Once we accept chatbots as intelligent agents, we are already sufficiently manipulable, such that the question of their trustworthiness becomes a comparatively minor technical issue. Of course neither the chatbot, nor Microsoft, actually cares about your trust. What Microsoft cares about is your credulity and (to the extent necessary for your credulity) your comfort; what the chatbot cares about is… nothing.

This is where the value of chatbots as a tool for large scale, long term accumulation of power and capital by the already rich and powerful comes into focus. To make sense of all of the evidence together (the extent of the corporate investment, the snake oil flavor of the cultural hype, and the resemblance of first generation chatbots to sociopaths who have recently failed out of people-pleasing bootcamp), we need an explanation that dreams of private surplus far beyond what advertising alone can produce. As Bill Gates can tell you, the big money isn't in selling stuff to industry, but in controlling industry itself. How will trustworthy chatbots help the next generation of billionaires take things over, and which things?

Over at his blog, Bill Gates himself has some thoughts on that. What is powering things like ChatGPT, he reminds us, is artificial intelligence. After briefly offering a farcically broad definition of the term "artificial intelligence" (one that would include a map from my kitchen to my bathroom), he gets straight to the issue that he really cares about: how sophisticated AI will transform the marketplace. The development of AI will change the way people work, learn, travel, get health care, and communicate with each other. Entire industries will reorient around it. Businesses will distinguish themselves by how well they use it. In trying to convey to the reader the scale and significance of this coming industrial reorganization, Gates uses the word "revolution" no fewer than six times. He connects the revolution he says is being heralded by chatbots to the original personal computing revolution for which he himself claims credit. His use of the term "revolution" should raise serious alarm for anyone who for any reason cares about fair markets, considering that Gates's own innovations have had little to do with technology, and everything to do with manipulating corporate and economic structures to become the world's most successful monopolist.

Notice how broad the categories of industry on Gates's list are: education, healthcare, communication, labor, transportation. This includes almost every area of social and commercial human endeavor, and implicates nearly every institution most necessary for our individual and collective survival. Gates fills out the picture of what it might look like for businesses to "distinguish themselves" in the near future, when success means owning the algorithms that capture each sector within entire industries, in the context of education and healthcare specifically. For example, he promises that AI-powered ultrasound machines that can be used with minimal training will make healthcare workers more efficient, and imagines how one day, instead of talking to a doctor or a nurse, sick people will be able to ask chatbots whether they need medical care at all. He acknowledges that some teachers are worried about chatbots interfering with learning, but assures us he knows of other teachers who are allowing students to complete writing assignments by accessorizing drafts generated by chatbots with some personal flair, and are then themselves using chatbots to produce feedback on each student's chatbot essay. How meta, as the kids used to say, before the total corporate poisoning of that once lovely bit of millennial slang.

There are so many crimes and tragedies in this vision of the future, but what demands our most urgent focus is the question of what it would mean for the possibility of democratic self-governance if the industries most vital to the public interest became wholly dependent on corporate-owned algorithms built with data drawn from mass surveillance. If the healthcare industry, for example, replaces a large proportion of the people who run its bureaucracy with algorithms, and the people who handle most patient interactions with chatbots, the problem is not only that healthcare workers will lose their jobs to machines and people will lose access to healthcare workers. The bigger concern is that as algorithms take over more and more of the running of the healthcare system, there will be fewer and fewer people who even know how to do the things that the algorithms are doing, and the system will fall in greater and greater thrall to the corporations that build and own and sell the algorithms. The healthcare industry in the U.S., like so many other industries on Gates's list, is already arranged as a conglomerate of de facto monopolies, so the business strategy to superimpose a tech monopoly on top of the existing structures is quite straightforward. Nobody needs to go door to door selling their wares to actual medical practitioners. The transaction can happen in the ether, between billionaires.

If tech companies have their way, they will divide the most lucrative industries up into a series of fiefdoms: one corporation will wield algorithmic control over schools, another over transit, another over the media, etc. Competition, to the extent that it exists at all, will involve regular minor battles over which fiefdom gets to annex an unclaimed corner of the industry landscape, and the occasional major battle over general control of a specific fiefdom. If you find yourself feeling skeptical of the idea that the corporations that currently control industries, or sectors of industries, would capitulate to the tech companies in this way, consider the temptations. Algorithms don't need to be paid benefits or given breaks and days off. Chatbots can't organize for better working conditions, or sue for labor law violations, or talk about their bosses to the press.

Once a given tech company has captured a given sector, rendering it unable to function without the company's suite of proprietary algorithmic products, there is little anyone outside of that company will be able to do to change how the sector operates, and little anyone in the sector will be able to do to change how the company operates. If the company wants to update the algorithm in a way that for any number of reasons might be bad for the end user, it won't even have to tell anyone it is doing it. If people think the costs of receiving services in a given sector are too high, and even if the people delivering those services think so too, there aren't many levers they will be able to pull to get the tech companies to cooperate with a price change. It is important to recognize how quaint the monopolistic activities of the 20th century look in the face of this possibility. The goal is no longer to dominate crucial industries, but to convert crucial industries into owned intellectual property.

The federal government could in theory pass some laws and regulations, or even enforce some existing laws and regulations, to stop corporations from using data-fat algorithms to colonize industry. But if past is prologue (and the White House's recent party for AI CEOs is not a good sign), our legislative bodies will fail to act before the take-over is well underway, at which point it will be nearly impossible for policymakers to do anything. Once an industry crucial to the public interest is dependent on corporate algorithms, even if legislators and regulators intervene to distribute industry control amongst a greater number of companies, the fact of algorithmic dependence will by itself give the class of owner corporations even more immense political power than they already have to resist any meaningful restraint. As cowardly as our elected representatives are in the face of the large tech companies now, how much more subservient will they be when OpenAI owns the license for the managed-care algorithm running the majority of the hospitals in the country, and Microsoft owns the license for the one that coordinates air travel and manages flight patterns for every major airline? Never mind the fact that the government itself is already contracting out various aspects of the bureaucracy to be run on corporate-owned algorithms, such as the proprietary identity verification technology already used by 27 states to compel people to submit to face scans in order to receive their unemployment benefits.

And this brings us to the even more encompassing political battle that will be permanently lost once corporate algorithms control the commanding heights of industry. The only way that companies can create algorithmic products in the first place is by amassing billions of pieces of data about billions of people as they go about their increasingly digital lives, and those products will only continue to work if corporations are allowed to grow and refresh their datasets infinitely. There is an emerging international movement against corporate-owned, surveillance-based digital infrastructure. It includes grassroots groups and civil society organizations, and it's backed up by a small but mighty group of scientists (people like Emily Bender, Joy Buolamwini, Timnit Gebru, Margaret Mitchell and Meredith Whittaker) offering deeply researched critiques of the technologies being developed through massive data collection. But building the power of that movement is going to become exponentially more difficult once surveillance data is necessary for every school day, doctor's visit, and paycheck. In such a world, whatever political levers one might still be able to pull to limit the influence of a particular corporate surveillance power, the necessity of entrenched surveillance to any person's ability to get smoothly through their day would no longer be a question. It would just be a fact of contemporary life.

This is the revolution that men like Bill Gates, Sam Altman, Mark Zuckerberg, Sundar Pichai, and Elon Musk are betting on. It's a future where the tech companies aren't really even engaging in economic contestation with each other anymore, but have instead formed a pseudo-sovereign trans-national political bloc that contests for power with nation states. It's much more terrifying, and much less speculative, than the imagined hostile takeover by malevolent, superintelligent digital minds with which we are currently being aggressively distracted. The language of wartime probably is the right language, but recall that it's a hallmark of wartime propaganda to attribute to the enemy the motives actually held by the propagandist. We should be worried about the nightmare scenario of a hostile takeover, not by a superintelligent robot army, but by the corporations now operating as a kind of universal fifth column, working against the common good from inside the commons, avoiding detection not by keeping out of sight, but by becoming the thing through which we see.

The chatbots are not themselves the corporate endgame, but they are an important part of the softening of the ground for the endgame. The more we play with ChatGPT, the more comfortable we all become with the digital interfaces with which tech companies plan to replace the industry interfaces that are currently run through or by human beings. Right now, we are all practiced at ignoring the rudimentary versions of the customer service bots that pop up on health insurance websites as we are searching for deeply hidden customer service numbers. But if the chatbots are good enough, if we believe them, trust them, like them, or even love them (!), we will be okay with using them, and then relying on them. Microsoft, Google and OpenAI are releasing draft versions of their chatbots now, not for us to test them, but to test them on us. How will we react if the chatbot says "I love you"? What are the chatbot outputs that will cause an uproar on Twitter? How can the chatbot combine words to reduce the statistical likelihood that we will question the chatbot? These companies are not just demonstrating the chatbot to the industry players who might eventually want to buy an algorithmic interface to replace trained human beings, they are plumbing the depths of our gullibility, our impotence, and our compliance as targets for exploitation.

The rhetoric accompanying the chatbot parade, about how the capacities of the chatbots to fool human beings should fill us with fear and trembling before the dangerous and perhaps uncontrollable powers of so-called artificial intelligence, is a come-on to the other powerful corporate and institutional actors whom the tech companies hope will buy their products. In the first five minutes of his ABC interview, Sam Altman told his interviewer that "people should be happy that we are a little bit scared of this." Imagine if a manufacturer of toxic chemicals told you that you should praise him for being aware of the dangers of what he is selling you. This is not something that a person who is actually afraid of their own product says. This is sales rhetoric from someone who knows that there are rich people who will pay a lot of money for a toxic brew, not in spite of that toxicity, but because of it. It's also, like the Future of Life Institute letter, an attempt to preempt real concern or pushback from anyone who has any power or authority not already co-opted by the corporate agenda.

Contemporary culture punishes those who dare to exercise moral judgment about people or entities that are motivated entirely by the urge for material accumulation. But we should still be capable of seeing the mortal dangers of allowing corporations with that motivation to annex all of the structures we depend on to live our lives, take care of each other, and participate in the project of democracy. If we don't want corporations to occupy every important piece of territory in our social, political and economic landscape, we have to start doing a better job of occupying those spaces ourselves. There are institutions whose job it is supposed to be to engage in independent research, thinking and writing about the rich and powerful. We have to demand that they do the necessary work to investigate and expose the real threats represented by chatbots and the icebergs they rode in on, threats which have absolutely nothing to do with smarter-than-human computers. If journalists, academics, government agencies, and nonprofits supposedly serving the public interest won't do this work, we will have to organize ourselves to undertake it outside established civic and political structures.

This may be very difficult, given how far gone we already are down the solidarity-destroying spiral of social and economic inequality. But even if the laws are hollow, and the government is captured, and the judges are working hard to deliver us to pure capitalist theocracy, we are still here and, however much we seem to want to forget it, we are still real. Let's find ways to impose the reality of our human minds and bodies in the way of the nihilist billionaires' conquest for algorithmic supremacy. Let's do it even if we secretly believe that they are right and that their victory is inevitable. Let's remind them what the word revolution really means by marching in the streets and organizing in church and library basements. Instead of letting the IRS scan all our faces, let's learn calligraphy and send in ten million parchment tax returns. Let's fill the internet with nonsense poems and song lyrics written under the influence, and so many metaphors that the chatbots will start going apple, I mean moon, I mean apple. Let's gather in the Hawaiian gardens the cyber imperialists took from native people and build a campfire across which to tell each other stories of the world we dream of making for our children's children. In the morning let's go home together, and let that fire burn.

Emily Tucker is the Executive Director at the Center on Privacy & Technology at Georgetown Law, where she is also an adjunct professor of law. She shapes the Center's strategic vision and guides its programmatic work. Emily joined the Center after serving as a Teaching Fellow and Supervising Attorney in the Federal Legislation Clinic at the Law Center. Before coming to Georgetown, Emily worked for ten years as a movement lawyer, supporting grassroots groups to organize, litigate, and legislate against the criminalization and surveillance of poor communities and communities of color. She was Senior Staff Attorney for Immigrant Rights at the Center for Popular Democracy (CPD), where she helped build and win state and local policy campaigns on a wide range of issues, including sanctuary cities, language access, police reform, non-citizen voting, and publicly funded deportation defense. Prior to CPD, Emily was the Policy Director at Detention Watch Network, where she now serves on the Board. Emily's primary area of legal expertise is the relationship between the immigration and criminal legal systems, and she is committed to studying and learning from the histories of resistance to these systems by the communities they target. Emily earned a B.A. at McGill University, a Master's in Theological Studies at Harvard Divinity School, and a J.D. at Boston University Law School.

Originally posted here:

Our Future Inside The Fifth Column- Or, What Chatbots Are Really For - Tech Policy Press

Posted in Superintelligence | Comments Off on Our Future Inside The Fifth Column- Or, What Chatbots Are Really For – Tech Policy Press

Elon Musk refuses to ‘censor’ Twitter in face of EU rules – Roya News English

Posted: at 8:40 pm

At a question-and-answer session in front of 3,600 tech fans in Paris, Elon Musk, the CEO of Tesla and SpaceX, rejected the idea of "censorship" of Twitter.

He defended the principle of "freedom of expression" on the social platform that he owns.

He also announced that he wanted to equip the first human being "this year" with neural implants from his company Neuralink, whose technology has just been approved in the United States.

Musk said: "Generally, I was concerned that Twitter was having a negative effect on civilization, that was having a corrosive effect on civil society and so that you know anything that undermines civilization, I think is not good and you go back to my point of like we need to do everything possible to support civilization and move it in a positive direction. And I felt that Twitter was kept moving more and more in a negative direction and my hope and aspiration was to change that and have it be a positive force for civilization. "

"I think we want to allow the people to express themselves (on Twitter, ed.) and really if you have to say when does free speech matter, free speech matters and is only relevant if people are allowed to say things that you don't like, because otherwise it's not free speech. And I would take that if someone says something potentially offensive, that's actually OK. Now, we're not going to promote those you know offensive tweets but I think people should be able to say things because the alternative is censorship. And then, and frankly I think if you go down the censorship, it's only a matter of time before censorship is turned upon you," he explained.

He spoke about the neural implant saying: "Hopefully later this year, we'll do our first human device implantation and this will be for someone that has sort of tetraplegic, quadraplegic, has lost the connection from their brain to their body. And we think that person will be able to communicate as fast as someone who has a fully functional body. So that's going to be a big deal and we see a path beyond that to actually transfer the signals from the motor cortex of the brain to pass the injury in the spinal cord and actually enable someone's body to be used again."

He also brought up artificial intelligence saying: "AI is probably the most disruptive technology ever. The crazy thing is that you know the advantage that humans have is that we're smarter than other creatures. Like if we've got into a fight with the gorilla, the gorilla would definitely win. But we're smart so, but now for the first time, there's going to be something that is smarter than the smartest human, like way smarter than humans."

"I think there's a real danger for digital super intelligence having negative consequences. And so if we are not careful with creating artificial general intelligence, we could have potentially a catastrophic outcome. I think there's a range of possibilities. I think the most likely outcome is positive for AI, but that's not every possible outcome. So we need to minimize the probability that something will go wrong with digital superintelligence," he added.

He continued: "I'm in favor of AI regulation because I think advanced AI is a risk to the public and anything that's a risk to the public, there needs to be some kind of referee. The referee is the regulator. And so I think that my strong recommendation is to have some regulation for AI. "

Read the rest here:

Elon Musk refuses to 'censor' Twitter in face of EU rules - Roya News English

Posted in Superintelligence | Comments Off on Elon Musk refuses to ‘censor’ Twitter in face of EU rules – Roya News English