Tallahassee Police shifting tactics and will ‘enforce traffic laws’ at unpermitted protests – Tallahassee Democrat

The man who pulled the gun on Black Lives Matter protesters in front of the Historic Capitol Saturday evening will not face charges. Tallahassee Democrat

Tallahassee Police have signaled a shift in tactics to keep order during protests as tensions in the city boil after an altercation at a demonstration on Saturday.

Responding to a series of questions by City Commissioner Jeremy Matlow, TPD said it intends to step up enforcement at protests, particularly ones that don't have proper permits.

"The city's position is that it cannot adequately maintain public safety during unpermitted road closure events," TPD officials said in the meeting. "TPD intends to enforce traffic laws and unlawful and unpermitted protests moving forward."


As a Leon County grand jury meets, and a ruling on whether TPD officers were justified in their shooting of murder suspect Tony McDade in May appears imminent, a countywide curfew has been enacted in part to try to stem any late-night demonstrations that could become violent.


In an interview with local CBS affiliate WCTV, Leon County Sheriff Walt McNeil said the pending grand jury presentments were partially behind his asking for a curfew from 11 p.m. to 5 a.m. through next Tuesday.

In an interview with the Tallahassee Democrat, Police Chief Lawrence Revell said people are free to demonstrate wherever they like in public as long as traffic is not impeded.

The department has provided leeway for protests that have not secured permits, he said, but it often creates a staffing issue where officers are left scrambling to provide a safe environment.


In protests past, officers have preemptively closed roads and directed traffic so protesters can safely march. Stretches of many downtown roads, and even Apalachee Parkway and Thomasville Road, have been closed during protests that have swept the nation and the capital city since the May 25 death of George Floyd at the hands of Minneapolis police.

"We're not trying to squelch people's First Amendment right," Revell said. "These protests that are unplanned, violence has shown up at them. And I'm not in any way, shape or form condoning anyone walking up or doing anything, but when those are unplanned, we are behind the curve when we are trying to react to those."

A fight broke out Saturday in front of the Florida Capitol in which a man pulled a handgun and was subdued by TPD officers during an unpermitted demonstration that blocked the intersection of Monroe Street and Apalachee Parkway.

A screenshot of the fistfight between a Black Lives Matter protester and a counter-protester that broke out in front of the Capitol. (Photo: Special to the Democrat)

Although TPD issued a statement the following day saying that, based on the evidence at hand, no charges were filed, the State Attorney's Office has confirmed it is still reviewing statements and any available video of the altercation, and charges could still be filed.

Back story: Charges could still be filed after incident outside Capitol, state attorney's office says

Contact Karl Etters at ketters@tallahassee.com or @KarlEtters on Twitter.


Read or Share this story: https://www.tallahassee.com/story/news/2020/09/03/tpd-signals-shift-enforcing-unpermitted-protests-block-traffic/5702448002/

More:

Tallahassee Police shifting tactics and will 'enforce traffic laws' at unpermitted protests - Tallahassee Democrat

Cincinnati leaders address police reform, efforts to reduce gun violence in the city – WLWT Cincinnati


Cincinnati Mayor John Cranley and police Chief Eliot Isaac held a press conference Thursday to address police reform in the city.

The conference comes amid protests across the country, most recently over the shooting of Jacob Blake in Kenosha, Wisconsin.

Cranley released the U.S. Conference of Mayors Police Reform and Racial Justice report this week, which he worked on with fellow mayors in Chicago, Tampa and Baltimore.

The report, which was sent out this week, addresses the "urgent need to reset the relationship between our police and our residents" by focusing on sustainable recommendations.

It comes in the wake of the recent killing of George Floyd and concerns about policing and calls for reform.

"The job of a police officer is often dangerous and difficult, and the vast majority perform to the best of their ability and in good faith. But the improper use of force can affect the perceptions of police everywhere. The wrongful actions of individual officers should not blight the entire profession. However, we cannot ignore that there are police departments with systemic problems and that reform, transparency, and accountability have too often been elusive," the report states.

The recommendations in the report include funding core policing while considering funding for other social services that complement the police's public safety mission.

It also addresses use-of-force policies, asking departments to adopt policies under which officers use the minimum amount of force necessary, continuously reassessing the situation to make an appropriate response. It also recommends not using chokeholds, not shooting at moving cars except in extreme situations, and not using deadly force on a fleeing person unless they pose a threat to others.

The report also recommends increasing engagement with police and the community through programs and other services.

Addressing protests, the report recommends more training on mass gatherings and First Amendment rights. It also recommends departments have designated staff who are trained to respond to mass gatherings.

The report also addresses police accountability, recommending initiatives similar to Cincinnati's Citizen Complaint Authority.

The CCA takes complaints against the Cincinnati Police Department and uses independent investigators and panels to determine recommendations for the Cincinnati Police Department.

Cranley said the city has received money from the state and will use $1 million toward police efforts and reducing gun violence.

Isaac said he hopes to increase police presence in hot spot areas, including Over-the-Rhine, where 10 people were shot a few weeks ago.

"Right now we have to stop the bleeding, when we know violence is taking place in a certain area, we have to respond," Isaac said.

The chief said he wants to implement "community safety organizers" in the future to engage and communicate with residents.

Isaac said he wants to hear from residents on ways they can improve.

"I want the input, I want the involvement of the community at large," Isaac said.

Cranley said he is going to give the report to Isaac and let the department look it over and respond.

Read the original post:

Cincinnati leaders address police reform, efforts to reduce gun violence in the city - WLWT Cincinnati

The US is Determined to Make Julian Assange Pay for Exposing the Cruelty of Its War on Iraq – CounterPunch

Drawing by Nathaniel St. Clair

On September 7, 2020, Julian Assange will leave his cell in Belmarsh Prison in London and attend a hearing that will determine his fate. After a long period of isolation, he was finally able to meet his partner Stella Moris and see their two sons Gabriel (age three) and Max (age one) on August 25. After the visit, Moris said that he looked to be in a lot of pain.

The hearing that Assange will face has nothing to do with the reasons for his arrest from the embassy of Ecuador in London on April 11, 2019. He was arrested that day for his failure to surrender in 2012 to the British authorities, who would have extradited him to Sweden; in Sweden, at that time, there were accusations of sexual offenses against Assange that were dropped in November 2019. Indeed, after the Swedish authorities decided not to pursue Assange, he should have been released by the UK government. But he was not.

The true reason for the arrest was never the charge in Sweden; it was the desire of the U.S. government to have him brought to the United States on a range of charges. On April 11, 2019, the UK Home Office spokesperson said, "We can confirm that Julian Assange was arrested in relation to a provisional extradition request from the United States of America. He is accused in the United States of America of computer-related offenses."

Manning

The day after Assange's arrest, the campaign group Article 19 published a statement that said that while the UK authorities had originally said they wanted to arrest Assange for fleeing bail in 2012 over the Swedish extradition request, it had now become clear that the arrest was due to a U.S. Justice Department claim on him. The U.S. wanted Assange on a federal charge of conspiracy to commit computer intrusion for agreeing to break a password to a classified U.S. government computer. Assange was accused of helping whistleblower Chelsea Manning in 2010 when Manning passed WikiLeaks, led by Assange, an explosive trove of classified information from the U.S. government that contained clear evidence of war crimes. Manning spent seven years in prison before her sentence was commuted by former U.S. President Barack Obama.

While Assange was in the Ecuadorian embassy, and now as he languishes in Belmarsh Prison, the U.S. government has attempted to create an air-tight case against him. The U.S. Justice Department indicted Assange on at least 18 charges, including the publication of classified documents and a charge that he helped Manning crack a password and hack into a computer at the Pentagon. One of the indictments, from 2018, makes the case against Assange clearly.

The charge that Assange published the documents is not the central one, since the documents were also published by a range of media outlets such as the New York Times and the Guardian. The key charge is that Assange actively encouraged Manning to provide more information and agreed to crack a password hash stored on U.S. Department of Defense computers connected to the Secret Internet Protocol Router Network (SIPRNet), a United States government network used for classified documents and communications. Assange is also charged with conspiracy to commit computer intrusion for agreeing to crack that password hash. The problem here is that the U.S. government appears to have no evidence that Assange colluded with Manning to break into the U.S. system.

Manning does not deny that she broke into the system, downloaded the materials, and sent them to WikiLeaks. Once she had done this, WikiLeaks, like the other media outlets, published the materials. Manning had a very trying seven years in prison for her role in the transmission of the materials. Because of the lack of evidence against Assange, Manning was asked to testify against him before a grand jury. She refused and now is once more in prison; the U.S. authorities are using her imprisonment as a way to compel her to testify against Assange.

What Manning Sent to Assange

On January 8, 2010, WikiLeaks announced that it had encrypted videos of U.S. bomb strikes on civilians. The video, later released as "Collateral Murder," showed in cold-blooded detail how, on July 12, 2007, U.S. AH-64 Apache helicopters fired 30-millimeter guns at a group of Iraqis in New Baghdad; among those killed were Reuters photographer Namir Noor-Eldeen and his driver Saeed Chmagh. Reuters immediately asked for information about the killing; they were fed the official story and told that there was no video, but Reuters futilely persisted.

In 2009, Washington Post reporter David Finkel published The Good Soldiers, based on his time embedded with the 2-16 battalion of the U.S. military. Finkel was with the U.S. soldiers in the Al-Amin neighborhood when they heard the Apache helicopters firing. For his book, Finkel had watched the tape (this is evident from pages 96 to 104); he defends the U.S. military, saying that the Apache crew had followed the rules of engagement and that everyone had acted appropriately. The soldiers, he wrote, were good soldiers, and the time had come for dinner. Finkel had made it clear that a video existed, even though the U.S. government denied its existence to Reuters.

The video is horrifying. It shows the callousness of the pilots. The people on the ground were not shooting at anyone. The pilots fire indiscriminately. "Look at those dead bastards," one of them says, while another says, "Nice," after they fire at the civilians. A van pulls up at the carnage, and a person gets out to help the injured, including Saeed Chmagh. The pilots request permission to fire at the van, get permission rapidly, and shoot at the van. Army Specialist Ethan McCord, part of the 2-16 battalion that had Finkel embedded with them, surveyed the scene from the ground minutes later. In 2010, McCord told Wired's Kim Zetter what he saw: "I have never seen anybody being shot by a 30-millimeter round before. It didn't seem real, in the sense that it didn't look like human beings. They were destroyed."

In the van, McCord and other soldiers found the badly injured Sajad Mutashar (age 10) and Doaha Mutashar (age five); their father, Saleh, who had tried to rescue Saeed Chmagh, was dead on the ground. In the video, the pilot saw that there were children in the van; "Well, it's their fault for bringing their kids into a battle," he says callously.

Robert Gibbs, the press secretary for President Barack Obama, said in April 2010 that the events on the video were "extremely tragic." But the cat was out of the bag. This video showed the world the actual character of the U.S. war on Iraq, which United Nations Secretary-General Kofi Annan had called illegal. The release of the video by Assange and WikiLeaks embarrassed the United States government. All its claims of humanitarian warfare had no credibility.

The campaign to destroy Assange begins at that point. The United States government has made it clear that it wants to try Assange for everything up to treason. People who reveal the dark side of U.S. power, such as Assange and Edward Snowden, are given no quarter. There is a long list of people, such as Manning, Jeffrey Sterling, James Hitselberger, John Kiriakou, and Reality Winner, who, if they lived in countries being targeted by the United States, would be called dissidents. Manning is a hero for exposing war crimes; Assange, who merely assisted her, is being persecuted in plain daylight.

On January 28, 2007, a few months before he was killed by the U.S. military, Namir Noor-Eldeen took a photograph in Baghdad of a young boy, a soccer ball under his arm, stepping around a pool of blood. Beside the bright red blood lie a few rumpled schoolbooks. It was Noor-Eldeen's humane eye that went for that photograph, with the boy walking around the danger as if it were nothing more than garbage on the sidewalk. This is what the U.S. illegal war had done to his country.

All these years later, that war remains alive and well in a courtroom in London; there Julian Assange, who revealed the truth of the killing, will struggle against being one more casualty of the U.S. war on Iraq.

This article was produced by Globetrotter, a project of the Independent Media Institute.

More:
The US is Determined to Make Julian Assange Pay for Exposing the Cruelty of Its War on Iraq - CounterPunch

Objective-C, Golang, and Windows PowerShell lead list of 15 highest-paying programming languages – TechRepublic

Find out which programming languages pay the most, and which ones are growing the fastest, according to a new Upwork report.

Image: Getty Images/iStockphoto

Despite COVID-19's impact on the economy, data from the online talent platform Upwork reveals that high earnings for open jobs are available to developers operating as independent professionals. Upwork compiled its top 15 highest-paying programming languages for tech professional positions by analyzing the highest average hourly rates on Upwork.com.

Top languages demand more than $66 per hour on average, translating to an annualized pre-tax income of more than $137,000 (based on a 40-hour workweek). A comparison of these top language rates to 2018 Bureau of Labor Statistics average-wage-by-occupation data reveals that $66 per hour is higher than the average wage for a web, mobile, or software developer across US metro areas ($39.58) and even in the most expensive metros ($52.09).
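The annualized figure follows from straightforward arithmetic, assuming a 40-hour week worked 52 weeks a year:

```python
hourly_rate = 66        # average hourly rate for top languages, USD
hours_per_week = 40
weeks_per_year = 52

annual_income = hourly_rate * hours_per_week * weeks_per_year
print(annual_income)    # 137280, i.e. "more than $137,000" pre-tax
```

Freelancers, of course, rarely bill every hour of every week, so this is a ceiling rather than a typical outcome.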


The following are based on average hourly freelance rate:

Remote work quickly became an enforced necessity during the darkest days of the pandemic. The results were so positive that some employers (e.g., Twitter) declared that offices would remain virtual even after COVID-19 restrictions are lifted. Other companies have given employees the choice to continue working from home (WFH) either full time or by dividing their time between on-site work and WFH.


Because of the favorable shift toward remote work, companies have developed a reliance on the services of independent skilled professionals. There has been a surge to find top technical talent, as evidenced on job talent platforms like Upwork. Such professionals are in high demand, which gives "talent" the luxury of deciding whether to stay in their positions, negotiate for better benefits, or move to another company.

Of the high-paying programming languages, the skills with the highest year-over-year growth, in terms of contract volume on the platform, are:

"Specialization in a particular language or discipline can help differentiate one candidate from another," said Mike Paylor, VP of Engineering and Product at Upwork. "Skills in mobile development languages such as Objective-C or Kotlin are particularly in demand as well as relatively modern languages such as Go."

If you're looking to shift careers, whether from an entirely unrelated industry or within the world of technology, brushing up on skills is always welcome. It's a great "credit" to add to your resume and will make you all the more appealing to recruiters.

If finances are an issue, there are a surprising number of no-cost options. I wrote a piece for TechRepublic in mid-May, The top free online tech classes to advance your IT skills. Here's what we found:

The ever-changing tech world is a popular arena in which to explore courses, with education outlets that offer free online tech classes to advance IT skills. These include:

"We do not see the need for software developers waning," Paylor said. "In fact, it is growing faster than most professions. While the demand for one programming language over another will change over time, there doesn't appear to be any slowing of this trend. As organizations continue to face challenges, the need for new customer-facing applications that work across a variety of platforms and devices will only become more crucial, as will the professionals who are bringing those projects to life."


Read the original here:
Objective-C, Golang, and Windows PowerShell lead list of 15 highest-paying programming languages - TechRepublic

Data Networks Work to Shore up Account Access As Regulators Eye Rules of Their Own – Digital Transactions

The data networks that connect payments and other financial apps to users' bank accounts are scrambling to standardize data access by moving to application programming interfaces and away from an older, cruder form of access known in the business as screen scraping. The effort comes as financial apps gain popularity and regulators like the Consumer Financial Protection Bureau mull rules for data sharing, the heart of what the industry calls open banking.

The concern with screen scraping is that it relies on the use of passwords and other personal credentials held by consumers to link apps like Venmo and Square Inc.'s Cash App to accounts at financial institutions. "Fearing security issues, a lot of the industry is starting to transition from credential-based to API-based access," says John Pitts, global head of policy at Plaid Inc., a major data network.
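The security distinction can be sketched in a few lines of Python. Under credential-based (screen-scraping) access, the aggregator stores and replays the user's actual banking password; under API-based access, it holds only a scoped token that the bank can revoke without touching the password. This is a toy model for illustration only; none of these class or method names belong to any vendor's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Bank:
    """Toy bank supporting both credential-based and token-based access."""
    passwords: dict = field(default_factory=dict)   # user -> password
    tokens: dict = field(default_factory=dict)      # token -> (user, scope)

    def login(self, user, password):
        # Screen scraping: the app replays the user's full credentials.
        return self.passwords.get(user) == password

    def issue_token(self, user, scope):
        # API access: the bank issues a scoped, revocable token instead.
        token = f"tok-{user}-{len(self.tokens)}"
        self.tokens[token] = (user, scope)
        return token

    def revoke(self, token):
        self.tokens.pop(token, None)

    def read(self, token, scope):
        grant = self.tokens.get(token)
        return grant is not None and grant[1] == scope

bank = Bank(passwords={"alice": "hunter2"})
token = bank.issue_token("alice", scope="balances")

assert bank.read(token, "balances")     # the token grants only this scope
bank.revoke(token)                      # the bank cuts off one app...
assert not bank.read(token, "balances")
assert bank.login("alice", "hunter2")   # ...while the password still works
```

The point of the sketch: revoking a token disables a single app's access, whereas the only remedy against a scraping app is changing the password itself, which breaks every linked service at once.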

A big move in that direction came Wednesday with news from Lehi, Utah-based data network MX Technologies Inc. that it is introducing a set of open-source software offerings collectively called MX Open. The new platform includes an API that can allow financial institutions and fintechs to connect users to their financial data. MX announced earlier this month that it had built a network of more than 50,000 connections to financial institutions and fintechs, outdistancing the estimated number of links established by other data networks.

"MX Open gives organizations the tools they need to define and launch their open-finance strategy and innovate faster with the vendors and technology providers that will serve their customers best," said Brett Allred, chief product officer at MX, in a statement.

Then, on Thursday, financial-services technology giant Fiserv Inc. announced AllData Connect, its own solution for data sharing among fintechs and banks. "This process can be difficult for financial institutions to support if screen scraping impairs online banking performance, or when login credentials are stored at unaffiliated third parties," said Paul Diegelman, vice president of digital payments and data aggregation at Fiserv, in a statement. "AllData Connect gives financial institutions the ability and insight they need to confidently empower consumers to share their financial account information."

Even so, Plaid's Pitts estimates that about 90% of data sharing occurs outside of APIs, typically by means of acquiring credentials from users. Only the top 10 to 20 banks have made progress developing their own APIs, he notes. "If there were a prohibition on screen-scraping, there would be this two-tier system where customers of small community banks wouldn't have access." Plaid offers its own API, called Plaid Exchange.

Visa Inc. said in January it was paying $5.3 billion to acquire San Francisco-based Plaid in a deal that is undergoing review both in and outside the United States. The Competition and Markets Authority in the United Kingdom granted clearance last month.

At the same time, the Financial Data Exchange, a trade group for open banking, is working on a cross-industry standard API for data sharing. MX and Plaid are among the more than 100 members of the Reston, Va.-based FDX, which operates under the auspices of the 21-year-old Financial Services Information Sharing and Analysis Center (FS-ISAC).

Industry standards are one thing, but the federal government is also likely to lay out its own rules. The CFPB said in July that it plans to issue a so-called advance notice of proposed rulemaking for consumer-permitted access to financial data. The Bureau's interest in the matter rests on Section 1033 of the Dodd-Frank Act, which bears on consumers' access to, and use of, their financial records.

See the article here:
Data Networks Work to Shore up Account Access As Regulators Eye Rules of Their Own - Digital Transactions

Kong Enterprise Focused to Be the ‘Switzerland of Connectivity’ – thenewstack.io

Service connectivity software provider Kong has released Kong Enterprise 2.1 with a variety of new features, but at the heart of those features, explained Kong Chief Technology Officer and co-founder Marco Palladino, is the same theme that has driven the company since its origins: connectivity. Kong was born as an open source project when API marketplace Mashape transitioned from monolith to microservices and needed a gateway that could run in a decoupled and distributed way across its new containerized infrastructure. This latest version, at its core, expands upon Kong's ability to work in numerous environments and communicate between various types of workloads.

"Without taking care of connectivity, your organizations are not going to be able to successfully transition to modern applications, let alone microservices," said Palladino. "I like to think of Kong as the Switzerland of connectivity. We are neutral and play nicely with all the other cloud vendors in order to be able to provision this connectivity that fundamentally has no boundaries."

This neutrality is key to Kong Enterprise 2.1, which introduces Hybrid Mode. Hybrid Mode lets Kong users run the Kong data plane across data centers, clouds, and geographies without a central Cassandra deployment, while providing a central control plane for both virtual machines (VMs) and containers running in those disparate locations.
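Hybrid mode splits a Kong cluster into a control plane, which holds the configuration, and database-less data planes that pull configuration from it. A minimal sketch of the two node configurations, assuming Kong 2.x-style `kong.conf` settings; the hostname and certificate paths are placeholders:

```ini
# Control plane node: owns the configuration store
role = control_plane
cluster_cert = /path/to/cluster.crt
cluster_cert_key = /path/to/cluster.key

# Data plane node (VM or container, in any cloud or data center):
# runs with no database and fetches its config from the control plane
role = data_plane
database = off
cluster_cert = /path/to/cluster.crt
cluster_cert_key = /path/to/cluster.key
cluster_control_plane = control-plane.example.com:8005
```

The shared certificate pair mutually authenticates the two tiers, which is what lets data planes in different clouds or geographies report to one central control plane.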

"When we look at the role of the infrastructure architect, those are the guys that have to provision and support all the application teams that are doing anything within the organization. Some of those things are running on one cloud, some of those things are going to be running on Kubernetes, some of them are going to be running on legacy VMs. So how do we support them all, regardless of the underlying infrastructure? We can more easily support all of these environments by supporting data planes running in each one of these different architectures, clouds, or geographies," explained Palladino. "Without this feature, what organizations have been doing is using different technologies to manage connectivity in different silos. By providing a hybrid mode, we can reduce that fragmentation and, therefore, improve the reliability of how connectivity is being managed."

"Just like how we use Kubernetes to abstract away data center operations, our customers are using Kong Enterprise to abstract away connectivity from all of their environments." Marco Palladino, Kong

Palladino also pointed to Kong Enterprise's ability to be used as a service mesh ingress and egress through an integration with Kong Mesh, which is built on top of the Envoy-based Kuma service mesh that joined the Cloud Native Computing Foundation (CNCF) earlier this year, as a key development. While Kong had originally set out to build this functionality on top of Istio, another Envoy-based service mesh, Palladino said that it ended up being too difficult, with Istio being too hard to scale and to use, as well as not supporting multiple meshes and treating VMs as second-class citizens. By comparison, Kong Enterprise 2.1 provides out-of-the-box enterprise-level support for Kuma, which treats VMs as first-class citizens, is Kubernetes-native, and integrates directly with the Kong API Gateway.

One final change that Palladino highlighted concerns the ability to extend Kong with plugins. Previously, those plugins had to be written in either Lua or C, but Kong has released an SDK that adds support for the Go programming language.

Looking ahead, Palladino said that Kong hopes to expand its functionality to help with uptime and expand its role as a control plane.

"We have a suite of machine learning products that today do anomaly detections as well as auto-documentation. We really want our connectivity platform to be the Waze of service connectivity, because we do have data planes running at the edge as a sidecar, and we have data planes running next to every single service in your organization," said Palladino. "We can also self-heal and keep the uptime over the overall architecture infrastructure, without any human intervention. What we are going to be working on more and more is making sure that humans are not going to be a dependency for keeping the uptime of the overall enterprise."

The Cloud Native Computing Foundation is a sponsor of The New Stack.

Feature image by skeeze from Pixabay.

Read the original here:
Kong Enterprise Focused to Be the 'Switzerland of Connectivity' - thenewstack.io

What playing Fall Guys with a sex toy means for sex, tech, and consent in games – The Daily Dot

Opinion

The other night, I threw on a pair of cat ears, grabbed a hand-me-down punk denim hoodie, and made history as the first sex worker to stream sex toy support for bean-based battle royale multiplayer game Fall Guys: Ultimate Knockout. Consider it part performance art, part cam show, and part zany online novelty.

The stream showed me using my We-Vibe Wish (the same toy I used for my Zoom porn shoot) while playing through several rounds of Fall Guys. In the stream, I redirected my DualShock controller's rumble effects straight to my We-Vibe, so every time I jumped, fell, or ran into an obstacle, I got a nice little jolt that vibrated against my skin. Half of the stream involved me giggling and squirming, and the other half involved me fixing all of the Wish's technical difficulties, of which there were many. But frustrations aside, it worked, and I streamed it, and I had a very fun time. (You can watch a short clip of my first round on ManyVids, although it's obviously NSFW.)

All of this isn't supported by Fall Guys or We-Vibe right out of the box, however. Thanks to Kyle "qDot" Machulis' open-source Buttplug.io programming library and his Intiface Game Haptics Router, I was able to redirect the controller rumble feature to my We-Vibe as I played. Fall Guys developer Mediatonic and publisher Devolver Digital probably weren't expecting sex to be added to their bean-based multiplayer jump-em-up, but here we are.
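The core of the trick, mapping a controller's rumble intensity onto a toy's vibration speed, can be sketched in a few lines. This is a minimal, hypothetical Python sketch of the idea only: the real Buttplug.io library and the Intiface Game Haptics Router handle device discovery and protocol details, and `ToyConnection` and the 0-255 rumble range here are illustrative stand-ins, not actual APIs.

```python
class ToyConnection:
    """Stand-in for a Buttplug.io device handle (hypothetical, for illustration)."""
    def __init__(self):
        self.last_speed = 0.0

    def vibrate(self, speed):
        # Toys typically accept a normalized speed in [0.0, 1.0]; clamp to be safe.
        self.last_speed = max(0.0, min(1.0, speed))

def rumble_to_vibration(rumble, max_rumble=255):
    """Map a controller rumble value (assumed 0-255 here) to a speed in [0.0, 1.0]."""
    return max(0, min(rumble, max_rumble)) / max_rumble

toy = ToyConnection()
for rumble in (0, 128, 255):      # e.g. idle, a bump, a hard fall in-game
    toy.vibrate(rumble_to_vibration(rumble))
```

The actual router does this continuously, reading rumble events off the gamepad and forwarding each one as a vibration command.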

Sex tech and games stories tend to elicit some interesting reactions. When Machulis hooked up the Buttplug software to Animal Crossing: New Horizons, it was received primarily with shock and disgust. But sex tech in games isn't a new concept. Two decades ago, an Official PlayStation Magazine article outlined the history of sex in games. One entry, and perhaps the most noteworthy for centering cis women's sexual experiences, included the Japan-exclusive Rez Trance Vibrator.

Years before Fall Guys was announced and nuanced sex-in-games discourse entered the mainstream games media world, USC Interactive Media and Games professor Miyuki Jane Pinckard wrote in detail about using the Trance Vibrator with her partner after picking up the game from Akihabara, Tokyo. The stark gonzo writing Pinckard brought to the table 20 years ago still shines in comparison to the tame, chaste nature of the contemporary games journalism world.

"But don't you think this trance vibrator extension is so your girlfriend can get off while you're playing the game? Or so a girl gamer can get off while she's playing the game?" Pinckard asked her partner, to which he responded: "It was a bit odd, my fingers were working the controls, but they were also kind of working you."

That's kinky as hell. And also incredibly beautiful. When Wired contributor Julie Muncy returned to Rez's Trance Vibrator in 2017, she cited Pinckard's writing to talk about how games rarely consider female sexual pleasure, to the point where most commentary on it is either pessimistic or reactionary.

When Machulis introduced his sex toy features for Animal Crossing: New Horizons, gamers mocked the idea, perhaps because Animal Crossing is a game primarily played by female gamers. And whether we're camgirls, Twitch streamers, or healsluts, women aren't supposed to experience sexual pleasure from video games. This is despite the fact that women have long used rumble features for their own erotic, if not slightly awkward, pleasure.

"Ever since they invented the whole rumble pack/vibrating technology in controllers, [masturbation has] been on my mind, and sometimes in my practice (fellow game girls, you know what I'm talking about)," Pinckard wrote in 2002. "I did discover that Halo was a pretty good game for this (although for not much else), because as the gunner in the Warthog, you have unlimited ammo and you can just park yourself somewhere and rat-tat-tat to your heart's content."

There are, to be clear, an enormous number of ethical issues around inserting sex toys into your multiplayer games. During the stream, I didn't really think much about why I wanted to keep colliding against other contestants or grabbing them (let alone being grabbed). Then, it dawned on me that I was engaging other players in an interaction that they hadn't actually been fully informed about. That's the in-game equivalent of hugging as many people as possible so you can feel your butt plug wiggle inside of you, which is not a great thing to do in public.

This sparked a much larger question about whether Fall Guys is even an appropriate game to play with a sex toy at all. While I'm inclined to think passively playing the game with an internet-of-things sex toy isn't necessarily a problem, just as playing Overwatch with a dildo isn't, actively grabbing players and bumping into them is a huge no-go. Use the tech responsibly.

"There's no private lobbies or single player. It's all public, all randos, all the time, and they can trigger haptics via holding or collisions," Machulis wrote. "It is different than, say, wandering into a game with voice chat and putting your mic right next to the toy that's reacting haptically to events from others. But it's still, in the general sense, not a super clear cut great thing to be doing."

My Fall Guys vibe days may be on pause, but as a queer sex writer, I'm still invested in our cultural understanding of what eroticism does (or in this case, doesn't) look like. Games themselves are tools for pleasure, just as any form of play is (intended or otherwise). That's not to say all video games are sexual, let alone sex toys. But they are undeniably erotic, because eroticism is about so much more than just physical sexual pleasure.

In Video Games Have Always Been Queer, Bonnie Ruberg defines eroticism as a sensual, aesthetic appreciation that comes from the oscillation of shared intimacy between yourself and another. Fall Guys' tense elimination matches are nothing more than the oscillation between near and far, between control and relinquishing, between being giver and receiver, complete with delays that build to release, to paraphrase Ruberg's inspiration Laura Marks.

In The Ethical Slut, Janet Hardy and Dossie Easton echo this point, saying eroticism pervades everything all the time; we inhale and exude it by our sheer existence. And that includes play. "We think erotic energy is everywhere," they write, "in the deep breath that fills our lungs as we step out into a warm spring morning, in the cold water spilling over the rocks in a brook, in the creativity that drives us to paint pictures and tell stories and make music and write books."

Suddenly, many, many games take on this erotic energy too. In Doom Eternal, it's the ero guro-esque glory kills. In Jackbox Party Pack 6's Push the Button, it's the titillating thrill of being a hunted Other. Buttplug.io's previous viral demo, Animal Crossing, is perhaps the best example to date. A significant portion of the game is about the joy of being known, of being part of a community, of being one with a safe world where empathy comes first. Players engage in this eroticism by being social creatures and actively seeking out friendship with their fellow villagers. Many of the game's characters receive suggestive artwork as a result, especially from queer furries and independent, female-friendly smut illustrators. That's because Animal Crossing is fundamentally about connection between people, or people-like animals (marry me, Freya). And when intimacy emerges, eroticism soon follows.

You don't necessarily need an open-source internet-of-things programming library to realize games are de facto sex tech in their own right. In fact, the most erotic gaming experience is happening right now in multiplayer games that aren't compatible with Buttplug.io, such as Riot Games' first-person shooter Valorant.

One of my favorite Valorant characters, Reyna, is a high femme supernatural Chicana military agent who consumes the souls of her opponents. She is as drop-dead gorgeous as her voice lines are dripping with desire: She talks endlessly about how hungry she is, how her otherworldly cravings linger, how she wants more, more, more than she can ever have. This isn't just a reflection of her gameplay role (she's an entry fragger who only stays alive by killing as many opponents as possible) but the impetus for a deeply intimate experience playing as her or alongside her (let alone against her). My We-Vibe Wish doesn't rumble when I play as Reyna, but when I hear her voice lines in-game, my thighs quiver just like when I played Fall Guys. Her dominant, monstrous presence is irresistible to a lesbian such as myself, and there's a lengthy history of sapphic yearning over monstrous women to back up the point.

Throwing a vibe against your clit or placing a butt plug inside yourself may feel viscerally serene, but so does choosing your favorite character or performing a clean headshot and being rewarded with a femme domme line. That's because play is intimate, and intimacy is erotic by nature. So why not unlock the parts of ourselves that drive us to connect, to engage, to play, and to make love with each other? Because if we accept that all forms of play engage in eroticism, then we start to better understand why we play games in the first place.


Read more here:
What playing Fall Guys with a sex toy means for sex, tech, and consent in games - The Daily Dot

Ortus Solutions Releases Final Version of ColdBox 6 – PR Web

THE WOODLANDS, Texas (PRWEB) September 02, 2020

After a year of work, Ortus Solutions, Corp announced the final release of ColdBox 6. ColdBox, a conventions-based MVC framework, provides a set of reusable code and tools that can be used to increase development productivity, as well as a development standard for working in team environments. The widely used platform, which includes the standalone libraries WireBox, CacheBox, and LogBox, is used by Fortune 500 companies including Adobe and L'Oréal and by government entities such as NASA, the Department of Labor, and the FAA.

This release sports five dramatic new features as it keeps pushing for more modern and sustainable approaches to web development. "We are so excited to bring another major release of our ColdBox Platform, version 6.0. This new platform will be the standard for web applications and APIs built on ColdFusion (CFML) for the next 5-10 years, as we will be catapulting them to delivering cloud/container/API applications that are secure, flexible, modular, scalable and, more importantly, MODERN," said Luis Majano, CEO of Ortus Solutions.

"REST is now a first-class citizen in ColdBox with our new RestHandler capabilities, so developers can create and generate RESTful applications with simplicity and elegance. We have completely re-imagined exception handling, and we now provide developers with a modern exception-handling experience called Whoops! This includes visual code navigation, which is extensible and inline. It will also transform the development experience and accelerate bug tracking. Additionally, we have harnessed the power of Java Completable Futures and parallel programming to give ColdFusion developers an easy and scalable API to introduce asynchronicity and parallel constructs into their programming. Everything can be non-blocking with ColdBox 6, and almost everything internally is non-blocking," concluded Majano.

ColdBox 6 and its HTML rendering enhancements provide a five-fold speed increase for certain applications. It is a must for companies to adopt in order to transform their legacy, monolithic applications into modern, scalable and hierarchical applications or microservices.

About: Ortus Solutions is a minority-owned Christian business founded in 2006 with the vision of empowering developers with great open source tools and empowering clients with scalable and robust applications. It has a proven track record of successful web application development, from small-scale to mission-critical applications, software architecture, website design, training and support services.


See more here:
Ortus Solutions Releases Final Version of ColdBox 6 - PR Web

AI’s increasing contributions to patent translation – will humans be replaced? – Lexology

In recent years, the rapid development of big data, cloud computing and artificial intelligence (AI) technology has brought about both opportunities and challenges to all walks of life.

In the vertical field of patent translation, the use of AI technology is gradually freeing translators from the more laborious work, and enabling them to dedicate their time to the more crucial aspects.

So, how does AI make contributions to patent translation?

This will be discussed in the following three aspects:

1. Difficulties in machine translation of patents

2. Three basic steps of machine translation

3. Translation model building

01 Difficulties in machine translation of patents

A patent or patent application basically consists of: an abstract, claims, and a description, each having different expression rules with terminology that can be difficult to understand.

For example, each claim shall always be organized into one sentence, no matter how complicated the sentence structure or how long the sentence is.

Such a sentence, though meeting the requirements in grammar and syntax, can be hard to read and sometimes incoherent.

Therefore, adding enumeration commas, commas or semicolons into the sentence in the proper places to segment the sentence appropriately as well as ending the whole sentence with a full stop could improve the readability of the claim and avoid ambiguity or misinterpretation.

However, this brings tremendous challenges to machine translation technology, which is still in development. In particular, machine translation still requires sentences with complete structures and meanings for training, and there are limits on sentence length: training with longer sentences may result in poorer translation quality and mistranslations.

What's more, a patent document requires high consistency of terms throughout the whole text, but machine translation is completed sentence by sentence, without understanding the context as a human does, so consistency of terminology is also a challenge for the industry. This is, of course, a threshold for vertical industries: whoever can best respond to these challenges will dominate the market.

02 Three basic steps of machine translation

The implementation process of machine translation basically includes three steps: data pre-processing, machine translation, and post-processing.

Data pre-processing mainly includes performing coding unification and text normalization on the aligned bilingual sentences, so as to meet the requirements for adaptation to translation models, for instance, amending numbers, symbols, date formats and non-standard expressions into the standard form and style.
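To make the kind of normalization described above concrete, here is a minimal Python sketch of a pre-processing pass over one sentence. It is illustrative only, not any particular vendor's pipeline; the date format chosen as the "standard form" is an assumption for the example.

```python
import re
import unicodedata

def normalize_sentence(text):
    """Toy pre-processing pass: unify character encoding/width, whitespace,
    and date formats before feeding sentence pairs to a translation model."""
    # NFKC folds full-width characters (common in Chinese/Japanese patent
    # text) into their half-width ASCII equivalents.
    text = unicodedata.normalize("NFKC", text)
    # Collapse runs of whitespace into single spaces.
    text = re.sub(r"\s+", " ", text).strip()
    # Rewrite dates like 2020/9/2 into a standard form, 2020-09-02.
    text = re.sub(
        r"(\d{4})/(\d{1,2})/(\d{1,2})",
        lambda m: "%s-%02d-%02d" % (m.group(1), int(m.group(2)), int(m.group(3))),
        text,
    )
    return text
```

A real pipeline applies many more such rules (symbols, numbering styles, non-standard expressions), but each one has this same shape: a deterministic rewrite applied uniformly to both sides of the bilingual data.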

The pre-processing stage is important for improving the quality of machine translation, and has a significant impact on the translation result. The less noise in the data, the better the translation quality.

Furthermore, attention should also be paid to the characteristics of different translation models, so as to perform targeted adjustment of the data pre-processing method.

Machine translation is the process of translating input text data into the target language. Here, the most important component is the translation model: a model formed by deep learning, via AI algorithms, over massive amounts of aligned bilingual sentences.

Therefore, it is better to prepare as much high-quality bilingual data with complex structures as possible, so as to give the model higher generalization ability and better overall performance.

Algorithm optimization and model training should be performed alternately to form a spirally rising iterative process, optimizing the algorithm and parameters through iterative training. Transformer is an excellent open-source neural network model, which can be implemented using TensorFlow or PyTorch.

A relatively mature tool, TensorFlow Serving, is used for deployment of the translation model, and a Python API is used for invocation. Once successfully published, the translation model is able to provide services.

Post-processing converts and re-arranges the translation result, splices the modeling units and processes special symbols, so as to make the result readable. It may also include word segmentation checking, BLEU scoring, word counting, etc.
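The BLEU scoring mentioned here rests on modified n-gram precision: how many of the candidate translation's n-grams also appear in the reference. A simplified single-sentence sketch in Python (real BLEU combines precisions for n = 1..4 and applies a brevity penalty):

```python
from collections import Counter

def ngram_precision(candidate, reference, n=2):
    """Modified n-gram precision, the core ingredient of BLEU.
    Counts candidate n-grams, clipped by their counts in the reference."""
    cand, ref = candidate.split(), reference.split()
    cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    if not cand_ngrams:
        return 0.0
    # Clip each candidate n-gram count by its count in the reference.
    overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
    return overlap / sum(cand_ngrams.values())
```

Scores like this are computed over a held-out test set after each training iteration, which is how the hyper-parameter comparisons discussed later in this article are measured.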

All of this processing supports future upgrades of the translation model. Post-processing plays an assisting role and can improve translation normalization, but it cannot fundamentally improve translation quality. At the current stage of development of machine translation, post-processing is still a necessary procedure.

03 Translation model building

Training a translation model generally involves three aspects, namely linguistic data (i.e. the aligned bilingual sentences) processing, algorithm writing and model training, and deployment.

Linguistic data processing

Linguistic data processing is the first step of machine learning, also called parallel corpus building. Corpus building includes aligning and storing sentences in the source language and in the target language in a one-to-one manner.

Only when the sentences are aligned exactly, can the linguistic data be used for training of the translation model. In addition, extremely long sentences in the linguistic data also need to be processed for effective segmentation of sentences. In a technical aspect, regularization and denoising are also necessary.
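The corpus-building step described above, pairing sentences one-to-one and dropping pairs that are empty or too long, can be sketched as follows. This is a minimal illustration; real pipelines also deduplicate, denoise, and regularize, and the `max_len` threshold here is an arbitrary example value.

```python
def build_parallel_corpus(src_sentences, tgt_sentences, max_len=100):
    """Pair source and target sentences one-to-one, keeping only pairs
    where both sides are non-empty and within the length limit."""
    corpus = []
    for src, tgt in zip(src_sentences, tgt_sentences):
        src, tgt = src.strip(), tgt.strip()
        if not src or not tgt:
            continue  # misaligned or empty pair: unusable for training
        if len(src.split()) > max_len or len(tgt.split()) > max_len:
            continue  # overly long sentences degrade training quality
        corpus.append((src, tgt))
    return corpus
```

Filtering by length is the crude version of the "effective segmentation" the article mentions; a more careful pipeline would split long claims at semicolons or enumeration commas rather than discard them.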

Furthermore, there is no consensus about the influence of Chinese word segmentation on machine translation. Many research papers online suggest that word segmentation, if anything, leads to a better translation result.

Manual processing and technological processing mutually promote each other, and during continuous upgrading of the translation model, the quality of linguistic data plays a decisive role.

Algorithm writing and model training

In the development history of machine translation, the network model structure developed and evolved from Seq2Seq through Transformer to BERT.

At the very beginning, deep learning for translation was done on the basis of Seq2Seq, and RNNs and later CNNs activated smart machine translation. The Transformer model greatly improves the quality of smart machine translation, overcomes the often-criticized defect of slow RNN training, and achieves fast parallel processing using a self-attention mechanism.

In addition, Transformer enables deep learning, sufficiently explores the characteristics of a DNN model, and improves the translation accuracy of the model. The increasingly popular BERT is also constructed on the basis of Transformer.

Transformer is a network structure published by Google in 2017 to replace RNN and CNN. It is the first model built only using attention, and enables direct acquisition of global information - this is different to RNN which obtains a global information link by continuous recursion, and different to CNN which merely acquires local information; moreover, Transformer supports parallel computing. Therefore, Transformer enables a faster speed, and can also provide better translation results.
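The self-attention mechanism that gives Transformer its parallelism can be shown in plain Python. Each query attends to every key simultaneously, with no recursion over positions as in an RNN, which is exactly what allows all positions to be computed in parallel. This is a didactic sketch over lists of floats, not an efficient tensor implementation:

```python
import math

def scaled_dot_product_attention(queries, keys, values):
    """Scaled dot-product attention, the core of a Transformer layer.
    queries/keys/values are lists of equal-length float vectors."""
    d_k = len(keys[0])
    out = []
    for q in queries:
        # Attention scores: dot products scaled by sqrt(d_k).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in keys]
        # Softmax over the scores (subtract the max for stability).
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]
        # Output: attention-weighted sum of the value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out
```

Because every query's computation is independent of the others, the loop over queries (and, in practice, the matrix products inside it) can run fully in parallel, unlike an RNN, where position t must wait for position t-1.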

Once the network structure is determined, it is necessary to set parameters thereof, such as batch_size, learning_rate, hidden_size, max_length, dropout and num_heads. As for the implementation of the Encoder-Decoder, there are many source codes online for the optimizer, loss value calculation and gradient updating.

After writing the network code, it is recommended to observe the curves of the visualized graph in the logs to check whether the network structure is properly configured. A proper network structure configuration and hyper-parameter setting enable curve convergence within a few hours. The setting of hyper-parameters has a great influence on the learning curves: different hyper-parameter settings result in big differences in the BLEU values trained on the same data.

TensorBoard can be used to display the visualized graph, as it is easy to operate, has a good visualization effect, and provides various curves showing different learning results under different hyper-parameters.

Alternating network structure programming and model training forms a spirally rising iterative process, as the influence of algorithm selection or parameter settings needs to be proven through continuous practice.

Therefore, it is important to understand the model on the basis of algorithm principles and to analyze the data fed back from practice. Only in this way, can we optimize the translation model in the correct way, and the experiences accumulated from iterative debugging practice enable better and thorough understanding of the translation model.

Herein, we list some examples based on our experience: a smooth curve indicates a high quality of linguistic data; a fluctuating curve indicates excessive noises in linguistic data; the more layers the network has, the slower the learning is, but it also means the curve could rise higher later on; the number of layers of the model requires a corresponding amount of data; GPU supports a training speed dozens of times faster than CPU; more data results in a slower decreasing of loss; a better time to adjust dropout is in the middle-to-late period, etc.

Deployment

With regard to the deployment of the translation model, Google's TensorFlow Serving can be used as an application framework. TensorFlow Serving provides, to date, the most mature and stable application services.

TensorFlow Serving provides a flexible server architecture and supports cluster deployment, aiming to deploy and serve an ML model. A trained model can be used for prediction, and TensorFlow Serving is able to export the model in a servable-compatible format.

TensorFlow Serving combines the core service components to construct a gRPC/HTTP server. This server is able to serve multiple ML models (or multiple versions of a model trained on the same data under different parameter settings). Invocation of model services is realized via an API interface obtained from an official channel, and an external service interface communicates with the TensorFlow Serving endpoint by means of gRPC and a RESTful API, so as to acquire services.
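Calling such a server over the RESTful API can be sketched with the standard library alone. TensorFlow Serving's REST endpoint has the form `/v1/models/<name>:predict` on port 8501 by default; the model name `translator` and the request fields below are assumptions for the example, and this sketch only builds the request (it assumes a server is already running before `urlopen` is called).

```python
import json
import urllib.request

def predict_request(host, model_name, instances):
    """Build a REST predict call for a TensorFlow Serving endpoint
    (sketch: the payload schema depends on the exported model's signature)."""
    url = "http://%s:8501/v1/models/%s:predict" % (host, model_name)
    body = json.dumps({"instances": instances}).encode("utf-8")
    return urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"})

# Hypothetical model name and input schema, for illustration only.
req = predict_request("localhost", "translator", [{"tokens": ["hello", "world"]}])
# urllib.request.urlopen(req) would return a JSON body of the form
# {"predictions": [...]} from a running server.
```

The gRPC path is preferred for high-throughput serving, but the REST path shown here is the simplest way to smoke-test a freshly deployed model.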

In addition, an official recommendation is to deploy the model services in combination with Docker, so as to enable high speed and convenience. Once the deployment is completed, evaluation of the translation model can be performed.

Conclusion

Throughout the history of the translation industry, the working method has developed from pure hand-writing to computer-assisted translation, and then to the AI translation of today. We believe that the development of AI technology will exert more positive effects on the patent translation industry, and contribute assistance to human translation, rather than replace human translation. The combination of human translation and AI technology will enable the best balance between efficiency and quality.

Premiword Machine Translation (www.premiword.com.cn) is an AI neural network machine translation service based on more than 50 million bilingual sentence pairs from 120 million global patents and tens of thousands of office actions accumulated over the years. It supports Chinese-English and Chinese-Japanese translation in both directions, and specializes in the translation of patents in most technical fields as well as the translation of patent office actions.

An example of machine translation:

Chinese:

English Translation:

The present technology relates to an information processing apparatus, an imaging control method, a program, a digital microscope system, a display control apparatus, a display control method, and a program.

Japanese Translation:

Read more:
AI's increasing contributions to patent translation - will humans be replaced? - Lexology

The first battery-free Game Boy wants to power a gaming revolution – CNET

Jasper de Winkel has turned his apartment into a factory.

Hunched over a table in his study, de Winkel has spent the last few months with a hot air gun in one hand and fine-tip tweezers in the other. Microscopic components litter his benchtop. With surgical precision, he delicately places them on a circuit board, periodically checking his progress with a magnifying glass. For a factory, it's exceptionally clean. It has to be. His house, in the sleepy Dutch town of Delft, about 30 miles southwest of Amsterdam, is the birthplace of a world first.

The battery-free Game Boy. A video game console powered by a combination of energy from the sun and button-mashing during gameplay.

It's an orange brick about the size of a paperback novel but weighs only half as much as the original Nintendo Game Boy released in 1989. De Winkel, a computer scientist at Delft University of Technology, has been working on building the device for about a year. He calls it his "baby."


Officially it's dubbed the "Engage" (no relation to Nokia's failed console, I'm told) but the inspiration is obvious. Aside from the absence of a battery slot on the back, the device looks exactly like Nintendo's revolutionary handheld. "It was critical from the start of the project that we maintain the feel of a Game Boy," de Winkel says.

De Winkel has been building and sending Engage devices, in a variety of colors, to his collaborators.

The "we" de Winkel refers to is an accomplished team of computer scientists including Josiah Hester, from Northwestern University in the US, plus Przemysław Pawełczak and Vito Kortbeek from TU Delft. They're set to unveil their Game Boy for the first time on Sept. 12, during the 2020 virtual UbiComp, an annual conference run by the Association for Computing Machinery.

The handheld device is a "proof by demonstration" that battery-free mobile gaming is possible. It's not a Nintendo product, but it's also not just a simple novelty for researchers, either. Like the original Game Boy, it's designed to spark a revolution. Hester and Pawełczak, who lead the project, have been studying energy harvesting and "intermittent computing" devices for years. The Engage is the result of researching and refining this work, and the system is a state-of-the-art technical marvel.

The choice to redesign the Game Boy is a deliberate one, a considered plot to raise awareness of the intermittent computing field that has so far been confined to the "hardcore programming" crowd and "geeks to the max," according to Pawełczak. But there's more at stake than just novelty, awareness or convenience. An even bigger issue looms over the team's work: global heating and the ecological impacts of modern technology.

The system, Hester hopes, will inspire communities from game developers to consumers to radically rethink how the world approaches sustainability and climate change.

"You know what would be cool? If we could make a Game Boy."

That was the dream Josiah Hester offered Jasper de Winkel during a brainstorming session in late 2019, a few months before the pandemic hit. Even then, de Winkel notes, it sounded a little crazy. His first thought was "can we even do that?" The team enlisted the help of Vito Kortbeek, a Ph.D. student under Pawełczak at TU Delft, to help with software development.

The Engage is not a one-to-one re-creation of the Game Boy, a console first released by Japanese gaming giant Nintendo 31 years ago. It's a redesign, built from the ground up with modern computing techniques, driven by a Game Boy emulator.

"We're impersonating the Game Boy," says Hester. He explains that the device has been created by coupling existing Game Boy emulation techniques with the latest in energy harvesting and intermittent-computing technology. "This could not have been possible even four or five years ago," he says.

Nintendo didn't respond to a request for comment.

Tetris was bundled with the original Game Boy in America.

Intermittent computing, an emerging field of computer science and engineering, drives the design principles behind the Engage. Unlike battery-powered devices, which draw on stored energy until the battery needs to be replaced, intermittent-computing devices use novel energy-harvesting techniques that provide small amounts of power, resulting in devices that only remain on for seconds, rather than hours. Pawełczak says "the whole idea of intermittent computing stems from the fact we should ditch batteries completely."

This is the key to the Engage.

It's a fully operational Game Boy and can play any of the console's titles, from Tetris to Super Mario Land. It harvests energy from five small rows of solar panels on its face and from button presses made by the user. In its present state, that's enough to power the Engage for around 10 seconds, depending on the game. Then, losing power, it switches off. A few quick button mashes restore gameplay in less than a second.

Such constant, intermittent failures won't please players in 2020, but the Engage isn't a device created for sale. It's a research and development tool, proof that battery-free devices can be interactive and encourage user interaction. Previous devices that didn't need batteries, such as eye-tracking glasses and a cellphone that can make a phone call, are impressive, but they're single-use cases.

"We're really making a huge leap towards useful and usable systems that are built upon this foundation of intermittent computing," says Pawełczak. The ultimate goal: Build a device where the time between failure and restoration is so small it's no longer noticeable to the player.

To get there, the team has had to rethink everything it knows about the Game Boy.

The team at TU Delft, L-R: de Winkel, Pawełczak, Kortbeek.

The Game Boy started a revolution when it debuted in 1989, leading to three decades of dominance in the handheld console market for Nintendo.

By today's standards, the original Game Boy, designed by Nintendo legend Gunpei Yokoi, is primitive and unsightly, but it upholds Nintendo's long-standing ethos: clever, cheap design over technical wizardry.

Packaged in the US with eternally popular tile-matching game Tetris as a launch title, the Game Boy sold 1 million units during its first Christmas and crushed the Atari Lynx and Sega's Game Gear, its technically superior opposition. Where the Lynx and Game Gear zigged, the Game Boy zagged. By focusing on games rather than flashy, energy-hungry graphics, it excelled in one particular realm: battery life.

Hester grew up with a Game Boy in hand. As a child of the '90s, his first experience came with the Game Boy Color, an updated, trimmed-down version of the console released in 1998. He speaks of long family road trips when he'd play "a ton of Tetris" and Godzilla, an obscure puzzle platformer from '91 featuring the Japanese film icon. But not all of his memories are fond ones.


Though the battery life for the Game Boy was superior to that of the Lynx or the Game Gear, it never seemed to last the 15 hours it was rated for. Long road trips required players like Hester to carry a packet of spares. "We had a box of AA batteries in the car, just in case," he recalls. He notes the frustration of seeing the Game Boy screen go dim and the music cut out when the batteries died -- an apocalyptic scenario for an 8-year-old on a road trip. Sometimes, all his progress in Godzilla would be lost.

The Engage is designed to combat the inconvenience and impermanence of batteries. Replacing them constantly. Switching them out. Throwing them away. The modern battery isn't just a burden for game consoles, either. All modern devices, from iPhones to smartwatches, are reliant on rechargeable batteries. We replace our phones every year or so, dumping old for new; our classic gaming consoles gather dust in attics and basements while their capacitors degrade and erode.

Hester says part of the mission of the Engage is to realize a world of long-lasting, potentially eternal devices. If some unforeseen apocalypse were to steamroll humanity (something that's felt increasingly likely in this torrid year), and you pulled an Engage from the rubble, it would remain operational. All you'd need to do is take it out in the sun or start mashing the A and B buttons to resurrect it.

"When the world ends, it'll still be around and someone can see what our society was like," Hester jokes.

The birthplace of the battery-free Game Boy, the Engage.

Energy-harvesting techniques aren't yet efficient enough to prevent intermittent failures, presenting a huge problem for any would-be gaming system: every time the console switches off, a player's progress is lost. To combat this, the team had to engineer a new layer of software for saving games ("checkpointing"), allowing all data to be saved and restored in milliseconds.

"We're basically saving really, really quickly and restoring from our saved game really, really fast without anyone seeing," says Hester.

That's where Vito Kortbeek comes in.

Kortbeek, a Ph.D. student at TU Delft, joined the project to tackle the save-game challenge. Traditional save systems found in cartridges rely on battery power and RAM to keep track of progress. When the batteries die, the checkpoints are gone for good. "If we want to make a checkpoint, we have to shove it somewhere where it's not lost when power is lost," he says.

During play on the Engage, the Game Boy emulator's state is constantly being modified and written to memory, but it's a specialized type of memory that retains its contents even after power loss.

But the system is temperamental and dynamic, varying by game. Tetris, for instance, remains powered for longer than Super Mario Land. Kortbeek had to engineer a way to tell the system when to checkpoint regardless of the game, ensuring it would save progress just before power was lost. He also needed to make sure it would come back from power failure as if nothing at all had happened.

His answer was a new checkpointing technique dubbed "MPatch." When the system detects low energy levels, it creates a checkpoint. However, to speed things up, it stores only the data that has changed since the previous checkpoint, as a "patch." These patches are stored sequentially in the system. Before a power failure occurs, a final checkpoint is created.

I hope Johannes Vermeer will forgive me for this.

It sounds complex -- and it is -- but think of the processing like this: You've made two copies of the painting Girl with a Pearl Earring and stored them in different museums. One you don't touch; on the other you stick a moustache and some glasses. Then a huge fire rips through the second museum, but moments before it does, you copy just the moustache and glasses.

When you go to restore the second version, you don't paint a brand new Girl with a Pearl Earring, you just copy the surviving painting and stick the moustache and glasses on it. But this restoration happens so fast it's practically imperceptible, as if it happened just after the fire was extinguished. The rapid checkpoint system means that no matter when a power failure occurs, you'll always come back to the exact position you were in. Power failure isn't a disaster, it just puts the machine into hibernation.

"I could start Super Mario [Land] on level one and play through it for a few hours and then I can come back 10 years later and I'm gonna pick up exactly where I was at," explains Hester. And he means exactly that. He notes that you could be midjump in Mario, or a Tetris block could be suspended above a rapidly filling frame.

Overcoming the huge challenges associated with the checkpointing system was a major technological achievement, but there was one hurdle that proved too big to leap.

The battery-free Game Boy can't play sound.

It's a big omission and the system's most glaring limitation. Not hearing Mario's "bwoot" when you hit the A button and jump through the air is jarring. The Tetris theme song, Korobeiniki, is as recognizable as the game itself. Tetris isn't Tetris without Korobeiniki.

"We feel sad about it, but generating sound takes a lot of energy," says Hester.

There are two fundamental problems with generating sound. One: It's a technical challenge to make it sound good enough with the small amount of energy generated by the device. It's possible, de Winkel explains, though it would likely produce a very tinny sound and would be a "whole other endeavour to make it sound right."

But the other problem is, it just doesn't make sense. "Honestly, playing sound would just be annoying as hell," Kortbeek argues. When the device loses power, is it better to start the music from the beginning? Or should the music continue as if it was briefly muted? How would the brain process that and how much would it break immersion?

Godzilla gave Josiah Hester trouble when he was eight.

Hester sees the limitations as a way to rethink video games as a whole. Developers with a battery-free device might specifically create products around the intermittent power failures, he says. The failures, then, would become part of the gameplay, which would open up the ability to play sound without annoyance.

Sound isn't the only limitation, either. The Engage has a much smaller LCD screen to conserve energy when in operation. And while the system is capable of emulating any Game Boy game and can also load the original cartridges, not all games perform equally well on it. The team didn't trial the 1,000-plus titles released for the Game Boy, but some of the biggest titles -- like Pokemon Blue -- have "sadistically huge" memory and don't require constant button pressing, which means less energy harvested from the buttons. That's a problem.

"You could play it," Hester laughs, "but it's going to be tough."

For now, it's all about optimization. When Hester was beginning his Ph.D. work, the battery-free Game Boy wasn't possible. It couldn't exist. The microcontrollers, the small chips that perform all the computations in the Engage, were almost 50 times slower than they are today. In five years, those microcontrollers have come a long way.

With 30 years between the Game Boy and the Nintendo Switch and the exponential progress being made in intermittent-computing techniques, Hester's confident that energy-harvesting devices will power games as complex as those we see today. "I would love to have Breath of the Wild on my Switch with an energy harvester," he says.

Hester's scientific endeavors have long been informed by his upbringing as Kanaka Maoli, a Native Hawaiian. He's always been aware of the clear connection between family and the Earth that characterizes their relationship with the land.

Josiah Hester says we need radical new approaches to tackle climate change.

"The land is called the 'Aina and it's not just a resource to be used," he says. "Plants and animals are talked about as brothers and sisters."

Those beliefs drive Hester, but his collaborators in the Netherlands are driven by a sense of duty to combat climate change. Pawełczak notes how sustainability and the environment are particularly important issues because one-quarter of the country lies below sea level. During our Zoom conversation, de Winkel chimes in, laughing, mentioning how the country's dikes prevent his home from being swallowed by the sea.

The environmental impact of video games is something developers, publishers, manufacturers and consumers are beginning to come to terms with. The next generation of home consoles -- the Xbox Series X and PS5 -- are being touted as the most powerful and fastest ever. Looking under the hood, it's reasonable to assume these next-gen consoles might chug at least as much energy as their predecessors when they're released at the end of 2020.

Obviously I care about my children growing up in a place that's not burning hot.

Josiah Hester

Outside of raw energy concerns, the batteries powering our smart devices and gaming consoles require the element lithium. The process of mining lithium uses hundreds of thousands of gallons of water and has had a big impact in some of the driest places on Earth, like Chile. Farmers in the region, who rely on water for agriculture and livestock, are losing access to the supply.

"Obviously I care about my children growing up in a place that's not burning hot," Hester says, "and being able to experience a lot of ancient Hawaiian traditions that will disappear because of climate change."

Is this the beginning of the future for Super Mario Land?

The Engage serves as a starting point to inspire the industry and consumers to think about the impacts of battery use. The design, hardware and firmware are all open-source and will be available on GitHub for anyone to use after Sept. 12. A short technical write-up will be available at FreeTheGameBoy.info.

Hester hopes the team's Game Boy overhaul can inspire a conversation about products using alternative energy sources and highlight their benefits to the environment. "We kind of need radical, crazy approaches," says Hester. "One of the radical things we could do is completely rethink how we build these devices by throwing the batteries away."

But the Engage is, in its current form, a part of the problem. It requires 3D-printed plastics and its circuitry is dependent on rare earth elements, too. While there are no plans to mass-produce the product (and a tetchy Nintendo would likely never allow that kind of IP infringement), there's clearly a lot of work to be done to make handheld gaming greener.

Eventually every component of battery-free systems, including video game consoles, should be recyclable and reusable, Pawełczak says. "We feel that this is the first major step towards it, because the battery seems to be the biggest polluter," he says.

"I hope this Game Boy will be enough to draw people's interest, such that they'll maybe make changes or, at least, think about how they could approach [climate change] in a radical way," says Hester.

As we talk on Zoom, Hester's young daughter, Leina'ala, hovers at the edge of the frame, a buzzing bundle of energy calling out to her dad. After a polite exchange, Hester convinces her to head back downstairs. She bounds away, shouting an adorable "I wuv you" as she disappears. I joke that she'll be playing battery-free Tetris by the time she's Hester's age.

"3D Tetris," he replies. "By that time, our energy harvesting will be so pristine, you won't even need a plug on your Switch.

"That's the goal."

Original post:
The first battery-free Game Boy wants to power a gaming revolution - CNET