Artificial intelligence spotted inventing its own creepy language – New York Post

An artificial intelligence program has developed its own language and no one can understand it.

OpenAI is an artificial intelligence systems developer. Its programs are fantastic examples of supercomputing, but there are quirks.

DALL-E 2 is OpenAI's latest AI system. It can generate realistic or artistic images from user-entered text descriptions.

DALL-E 2 represents a milestone in machine learning; OpenAI's site says the program learned the relationship between images and the text used to describe them.

A DALL-E 2 demonstration includes interactive keywords for visiting users to play with and generate images; toggling different keywords will result in different images, styles, and subjects.

But the system has one strange behavior: it's writing its own language of random arrangements of letters, and researchers don't know why.

Giannis Daras, a computer science Ph.D. student at the University of Texas, published a Twitter thread detailing DALL-E 2's unexplained new language.

Daras told DALL-E 2 to create an image of farmers talking about vegetables and the program did so, but the farmers' speech read "vicootes", some unknown AI word.

Daras fed "vicootes" back into the DALL-E 2 system and got back pictures of vegetables.

"We then feed the words: 'Apoploe vesrreaitars' and we get birds," Daras wrote on Twitter.

"It seems that the farmers are talking about birds, messing with their vegetables!"

Daras and a co-author have written a paper on DALL-E 2's hidden vocabulary.

They acknowledge that telling DALL-E 2 to generate images of words (the command "an image of the word airplane" is Daras' example) normally results in DALL-E 2 spitting out gibberish text.

When plugged back into DALL-E 2, that gibberish text will result in images of airplanes, which says something about the way DALL-E 2 talks to and thinks of itself.
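
The probe described above amounts to a simple two-step loop: prompt the model, hand-transcribe whatever text shows up in the image, then feed that text back in as a new prompt. Below is a minimal sketch of that loop, assuming access to OpenAI's public Images API; the model name, image size, and prompts are illustrative stand-ins, not the researchers' exact setup.

```python
# Hypothetical sketch of the "hidden vocabulary" probe described above.
# Assumes OPENAI_API_KEY is set in the environment; the prompts and the
# hand-transcribed gibberish are stand-ins, not the researchers' inputs.
from openai import OpenAI

client = OpenAI()

def generate(prompt: str) -> str:
    """Request one DALL-E 2 image and return its URL."""
    result = client.images.generate(
        model="dall-e-2", prompt=prompt, n=1, size="512x512"
    )
    return result.data[0].url

# Step 1: a prompt that tempts the model to render text inside the image.
print(generate("Two farmers talking about vegetables, with subtitles"))

# Step 2: transcribe the gibberish from the image by hand (e.g. "vicootes")
# and feed it back in as a prompt to see which images it maps to.
print(generate("vicootes"))
```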

Some AI researchers argued that DALL-E 2's gibberish text is random noise.

Hopefully, we don't come to find that DALL-E 2's second language was a security flaw that needed patching after it's too late.

This article originally appeared on The Sun and was reproduced here with permission.

See original here:
Artificial intelligence spotted inventing its own creepy language - New York Post

Netradyne Named to Forbes AI 50 List of Top Artificial Intelligence Companies of 2022 – PR Newswire

Netradyne Uses AI To Help Fleets Reduce Driver Incidents, Protect Against False Claims, and Create Safer Roads

SAN DIEGO, June 8, 2022 /PRNewswire/ -- Netradyne, an industry leader in artificial intelligence (AI) and edge computing focused on driver and fleet safety, has been named to this year's Forbes AI 50 list for North America. Produced in partnership with Sequoia Capital, this list recognizes the standout privately held companies in North America that are making the most interesting and impactful uses of AI.

Forbes' editorial team acknowledged that AI technology is driving advancements in every industry but that it can be difficult to identify which companies are using the technology in transformative and measurable ways. The Forbes AI 50 list, now in its fourth edition, identifies the privately held companies in North America at the forefront of the field that put AI at the heart of their products and services.

In selecting honorees for this year's list, Forbes' 12-judge panel of experts in artificial intelligence from the fields of academia, technology, and venture capital evaluated hundreds of submissions, handpicking the top 50 most compelling companies.

"We are honored to be named to the Forbes AI 50 list," said Avneesh Agrawal, co-founder, and CEO of Netradyne. "At Netradyne, our mission is to create safer and smarter roadways for all. Using AI and edge computing technologies, we are revolutionizing the fleet transportation ecosystem by helping reinforce good driving behavior and similarly empowering drivers to improve their performance."

Agrawal continued, "Driveri's unique ability to analyze every mile of a journey allows insights into good driving behaviors, which can be recognized and rewarded to reinforce drivers' safe behavior. Drivers also have full transparency and coaching access to their personalized driving GreenZone score via the driver mobile app."

Netradyne provides fleets of all sizes and vehicle types with an advanced video safety camera, fleet performance analytics tracking, and driver awareness tools to help reduce risky driving behavior and reward safe driving decision-making. Driveri is the only solution that can positively recognize, empower, and improve driver performance. The cascading effects are powerful: by using Driveri's AI to reinforce good behavior, fleets improve driver performance in real time and see reduced accidents, higher safety scores, lower insurance costs, improved driver retention, and better fleet performance through increased profits.

Netradyne was one of hundreds of applicants vying for inclusion on this prestigious list; the panel of 12 expert AI judges identified the 50 most compelling companies.

About Netradyne, Inc.

Netradyne harnesses the power of Computer Vision and Edge Computing to revolutionize the modern-day transportation ecosystem. Netradyne is an industry leader in fleet safety solutions, immediately improving driver behavior and fleet performance and setting commercial vehicle driving standards. Netradyne collects and analyzes more data points and meaningful information than any other fleet safety organization so customers can improve retention, increase profitability, enhance safety, and enable end-to-end transparency. Organizations trust Netradyne to build a positive, safe, and driver-focused culture to take their business to the next level.

CONTACT: [emailprotected]

SOURCE Netradyne

See the original post here:
Netradyne Named to Forbes AI 50 List of Top Artificial Intelligence Companies of 2022 - PR Newswire

This Artificial Intelligence Stock Has a $596 Billion Opportunity – The Motley Fool

No technology has ever had the potential to transform the way the world does business quite like artificial intelligence (AI). Even in its early stages, it's already proving its ability to complete complex tasks in a fraction of the time that humans can, with adoption in both large organizations and small-scale start-ups accelerating.

C3.ai (AI -6.27%) is the world's first enterprise AI provider. It sells ready-made and customized applications to companies that want to leverage the power of this advanced technology without having to build it from scratch, and its customer base continues to grow in both number and pedigree.

C3.ai just reported its full-year results for fiscal 2022 (ended April 30), and beyond its strong financial growth, the company also revealed the magnitude of its future opportunity.

Image source: Getty Images.

Sentiment among both investors and the general public continues to trend against fossil fuel companies as people become more conscious about humanity's impact on the environment. Oil and gas companies are constantly trying to improve their processes to produce cleaner energy, and artificial intelligence is now helping them do that.

C3.ai serves companies in 11 industries, but 54% of its revenue comes from the fossil fuel sector. The company has a long-standing partnership with oil and gas services giant Baker Hughes (BKR -3.76%). Together, they've developed a full suite of applications designed to enable the industry to predict catastrophic equipment failures and to help reduce carbon emissions. Shell (SHEL -2.87%), for example, uses C3.ai's software to monitor 10,692 pieces of equipment every single day, ingesting data from over 1.1 million sensors to make 515 million predictions each month.

C3.ai continues to report major customer wins. It just received its first two orders as part of a five-year, $500 million deal with the U.S. Department of Defense, which was signed last quarter. And its collaborations with the world's largest cloud services providers, like Alphabet's Google Cloud, have delivered further blockbuster signings like Tyson Foods and United Parcel Service. C3.ai and Google Cloud are leaning on each other's expertise to make advanced AI tools more accessible for a growing list of industries.

Overall, by the numbers, C3.ai's customer count is proof of steady demand.

C3.ai reported revenue of $72.3 million in the fourth quarter, a 38% year-over-year jump. For the full year, it met its previous guidance and delivered $252.8 million, also 38% higher than in fiscal 2021.

But the company's remaining performance obligations (RPO) will likely capture the attention of investors, because they increased by a whopping 62% to $477 million. It's an important number to track because it offers a window into the future: C3.ai expects RPO to eventually convert into revenue.

C3.ai isn't a profitable company just yet. It made a net loss of $192 million for its 2022 full year, a sizable jump from the $55 million it lost in 2021, mainly because it more than doubled its investment in research and development and increased its sales and marketing expenditure by nearly $80 million.

But since the company maintains a gross profit margin of around 75%, it has the flexibility to rein in its operating costs in the future to improve its bottom-line results. C3.ai is deliberately choosing to sacrifice profitability to invest heavily in growth because it's chasing an addressable market it believes will be worth $596 billion by 2025.

C3.ai maintains an extremely strong financial position, with over $950 million in cash and equivalents on its balance sheet. That means the company could operate at its 2022 loss rate of $192 million for nearly five more years before running out of cash ($950 million divided by a $192 million annual loss is about 4.9 years), leaving plenty of time to add growth before working toward profitability.

Unfortunately, the current market environment has been unfavorable to loss-making technology companies. The Nasdaq 100 tech index currently trades in a bear market, having declined by 25% from its all-time high. It's representative of dampened sentiment thanks to rising interest rates and geopolitical tensions, which have forced investors to reprice their growth expectations.

C3.ai stock was having difficulties prior to this period, as growth hasn't been quite as strong as some early backers expected. Overall, the stock price has fallen by 87% since logging its all-time high of $161 per share shortly after its initial public offering in December 2020.

But that might be a great opportunity for investors with a long-term time horizon. C3.ai has some of the world's largest companies on its customer list, it's running a healthy gross profit margin, and it's staring at a $596 billion opportunity in one of the most exciting areas of the technology sector right now.

Read the original:
This Artificial Intelligence Stock Has a $596 Billion Opportunity - The Motley Fool

Expert.ai and the University of Siena Launch the First Multilingual Crossword Solver Based on Artificial Intelligence – PR Newswire

Expert.ai to Livestream "WebCrow" on June 16th; Stage Set for Multilingual Showdown Against Human Experts

BOSTON, June 9, 2022 /PRNewswire/ -- Starting today, even machines can solve crossword puzzles thanks to WebCrow 2.0, software developed by the University of Siena in collaboration with expert.ai (EXAI:IM), a leading company in artificial intelligence (AI) for natural language processing (NLP) and understanding (NLU).

For over a century, crossword puzzles have been an intriguing challenge for humans because of the complexity and nuance of human language. This also happens to be one of the most complex and challenging areas for AI. In fact, the most advanced linguistic technologies must possess a significant breadth and depth of knowledge to identify the correct meaning of words based on context (e.g., trim a tree vs. trim on a house). They must also be able to interpret slang, catchphrases, wordplay and other forms of ambiguity (e.g., the crossword clue "liquid that does not stick"; answer: "scotch"). WebCrow 2.0 does this and more.

"We're excited to introduce our intelligent machine, WebCrow, and discuss its evolution and ability to create and solve a daily standard of life, the crossword puzzle," said Marco Gori, Professor, Department of Information Engineering and Mathematical Sciences, University of Siena. "Can machines solve these as well as humans? How do they compare definitions and answer clues with niche or abstract references? Can they pick up on plays on words, linguistic nuances and even humor? We're ready to demonstrate how leveraging context can enable humans and software to work together and take AI-based cognitive abilities to new levels."

Understanding, Knowledge graph, Reasoning

WebCrow 2.0 has been empowered with typical human skills to simulate human-like processes for reading, understanding, and reasoning. This allows the software to identify the meaning of words based on definitions and other clues in crossword puzzles. It accomplishes this through the three capabilities named above: language understanding, a knowledge graph, and reasoning.

"It's our business to help organizations improve any activity or process based on understanding and managing the immense wealth of information at their disposal," said Marco Varone, CTO of expert.ai. "It was very gratifying to work with researchers from the University of Siena and support their efforts with our tools for disambiguation, knowledge graph and expertise in applying AI to language. Anyone who has been challenged by a crossword is familiar with nuanced clues, so automated puzzle solving is a great way to illustrate just how far we've come in advancing natural language technologies."

Livestream: Solving Crosswords with WebCrow AI

A movie about WebCrow and its crossword-solving abilities can be viewed on the website. A special LinkedIn NLP stream session, "Solving Crossword Puzzles with WebCrow AI," is scheduled for June 16 at 11:00 am EDT. Those interested in attending can register here.

The Next Challenge for WebCrow

Next up for WebCrow is to compete against human experts in a multilingual competition. The "WebCrow 2.0 - Human vs. Machine" challenge is organized by expert.ai and the University of Siena, in collaboration with SudokuEditori (unpublished crosswords for the Italian language) and AVCX "Crosswords for the (not) faint of heart" (unpublished crosswords in English).

For more information, visit Expert.ai.

About expert.ai

Expert.ai (EXAI:IM) is a leading company in AI-based natural language software. Organizations in insurance, banking and finance, publishing, media and defense all rely on expert.ai to turn language into data, analyze and understand complex documents, accelerate intelligent process automation and improve decision making. Expert.ai's purpose-built natural language platform pairs simple and powerful tools with a proven hybrid AI approach that combines symbolic and machine learning to solve real-world problems and enhance business operations at speed and scale. With offices in Europe and North America, expert.ai serves global businesses such as AXA XL, Zurich Insurance Group, Generali, The Associated Press, Bloomberg INDG, BNP Paribas, Rabobank, Gannett and EBSCO. For more information, visit https://www.expert.ai

SOURCE expert.ai

Continued here:
Expert.ai and the University of Siena Launch the First Multilingual Crossword Solver Based on Artificial Intelligence - PR Newswire

Timnit Gebru and the fight to make artificial intelligence work for Africa – Mail and Guardian

The way Timnit Gebru sees it, the foundations of the future are being built now. In Silicon Valley, home to the world's biggest tech companies, the artificial intelligence (AI) revolution is already well under way. Software is being written and algorithms are being trained that will determine the shape of our lives for decades or even centuries to come. If the tech billionaires get their way, the world will run on artificial intelligence.

Cars will drive themselves and computers will diagnose and cure diseases. Art, music and movies will be automatically generated. Judges will be replaced by software that supposedly applies the law without bias and industrial production lines will be fully automated and exponentially more efficient.

Decisions on who gets a home loan, or how much your insurance premiums cost, will be made by an algorithm that assesses your creditworthiness, while a similar algorithm will sift through job applications before any CVs get to a human recruiter (this is already happening in many industries). Even news stories, like this one, will be written by a program that can do it faster and more accurately than human journalists.

But what if those algorithms are racist, exclusionary or have dangerous implications that were not anticipated by the mostly rich, white men who created them? What if, instead of making the world better, they just reinforce the inequalities and injustices of the present? That's what Gebru is worried about.

"We're really seeing it happening. It's scary. It's reinforcing so many things that are harming Africa," says Gebru.

She would know. Gebru was, until late 2020, the co-director of Google's Ethical AI program. Like all the big tech companies, Google is putting enormous resources into developing its artificial intelligence capabilities and figuring out how to apply them in the real world. This encompasses everything from self-driving cars to automatic translation and facial recognition programs.

The ultimate prize is a concept known as Artificial General Intelligence: a computer that is capable of understanding the world as well as any human and making decisions accordingly.

"It sounds like a god," says Gebru.

She was not at Google for long. Gebru joined in 2018, and it was her job to examine how all this new technology could go wrong. But input from the ethics department was rarely welcomed.

"It was just screaming about issues and getting retaliated against," she says. The final straw was when she co-authored a paper on the ethical dangers of large language models, used for machine translation and autocomplete, which her bosses told her to retract.

In December 2020, Gebru left the company. She says she was fired; Google says she resigned. Either way, her abrupt departure and the circumstances behind it thrust her into the limelight, making her the most prominent voice in the small but growing movement that is trying to force a reckoning with Big Tech before it is too late to prevent the injustices of the present being replicated in the future.

"Gebru is one of the world's leading researchers helping us understand the limits of artificial intelligence in products like facial-recognition software, which fails to recognise women of colour, especially black women," wrote Time magazine when it nominated Gebru as one of the 100 most influential people in the world in 2022.

"She offers us hope for justice-oriented technology design, which we need now more than ever."

Artificial intelligence is not yet as intelligent as it sounds. We are not at the stage where a computer can think for itself or match a human brain in cognitive ability. But what computers can do is process incomprehensibly vast amounts of data and then use that data to respond to a query. Take Dall-E 2, the image-generation software that created The Continent's cover illustration this week, developed by San Francisco-based OpenAI.

It can take a prompt such as "a brain riding a rocket ship heading towards the moon" and turn it into an image with uncannily accurate, sometimes eerie, results. But the software is not thinking for itself. It has been trained on data, in this case 650 million existing images, each of which has a text caption telling the computer what is going on in the picture. This means it can recognise objects and artistic styles and regurgitate them on command. Without this data, there is no artificial intelligence.
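
To make "learning the relationship between images and text" concrete, here is a toy sketch of one common training objective for image-text models, a CLIP-style contrastive loss. It is illustrative only: random vectors stand in for the outputs of real image and text encoders, and this is not OpenAI's actual training code.

```python
import numpy as np

# Toy contrastive step: for a batch of (image, caption) pairs, training
# pushes each image embedding toward its own caption and away from the
# other captions in the batch. Random vectors stand in for encoder outputs.
rng = np.random.default_rng(0)
batch, dim = 4, 8
img_emb = rng.normal(size=(batch, dim))   # stand-in image encoder outputs
txt_emb = rng.normal(size=(batch, dim))   # stand-in text encoder outputs

def normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

# Cosine similarity between every image and every caption in the batch.
logits = normalize(img_emb) @ normalize(txt_emb).T
labels = np.arange(batch)                 # pair i matches caption i

# Cross-entropy over rows: each image should score highest on its caption.
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
loss = -log_probs[labels, labels].mean()
print(f"contrastive loss: {loss:.3f}")    # training would minimize this
```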

Like coal shovelled into a steamship's furnace, data is the raw material that fuels the AI machine. Gebru argues that all too often the fuel is dirty. Perhaps the data is scraped from the internet, which means it is flawed in all the ways the internet itself is flawed: Anglo- and Western-centric, prone to extremes of opinion and political polarisation, and all too often reinforcing stereotypes and prejudices. Dall-E 2, for instance, thinks that a CEO must be a white man, while nurses and flight attendants are all women.

More ominous still was an algorithm developed for the United States prison system, which predicted that black prisoners were more likely than white people to commit another crime, which led to black people spending longer in jail.

Or perhaps, in one of the great paradoxes of the field, the data is mined through old-fashioned manual labour: thousands of people hunched over computer screens, painstakingly sorting and labelling images and videos. Most of this work has been outsourced to the developing world, and the people doing the work certainly aren't receiving Silicon Valley salaries.

"Where do you think this huge workforce is? There are people in refugee camps in Kenya, in Venezuela, in Colombia, that don't have any sort of agency," says Gebru.

These workers are generating the raw material, but the final product, and the enormous profits that are likely to come with it, will be made for and in the West. "What does this sound like to you?" Gebru asks.

Timnit Gebru grew up in Addis Ababa (Timnit means "wish" in Tigrinya). She was 15 when Ethiopia went to war with Eritrea, forcing her into exile, first in Ireland and then in the US, where she first experienced casual racism. A temp agency boss told her mother to get a job as a security guard, because "who knows whatever degree you got from Africa". A teacher refused to place Gebru in an advanced class because "people like you always fail".

But Gebru didn't fail. Her academic record got her into Stanford, one of the world's most prestigious universities, where she hung out with her friends in the African Students Association and studied electrical engineering. It was here that both her technical ability and her political consciousness grew.

She worked at Apple for a stint, and then returned to the university, where she developed a growing fascination with artificial intelligence. "So then I started going to these conferences in AI or machine learning, and I noticed that there were almost no black people. These conferences would have 5 000 or 6 000 people from all over the world but one or two black people."

Gebru co-founded Black in AI for black professionals in the industry to come together and figure out ways to increase representation. By that stage, her research had already proved how this racial inequality was being replicated in the digital world. A landmark paper she co-authored with the Ghanaian-American-Canadian computer scientist Joy Buolamwini found that facial recognition software is less accurate at identifying women and people of colour: a big problem if law enforcement is using this software to identify suspects.

Gebru got her job at Google a couple of years later. It was a chance to fix what was broken from inside one of the biggest tech companies in the world. But, according to Gebru, the company did not want to hear about the environmental costs of processing vast data sets, or the baked-in biases that come with them, or the exploitation of workers in the Global South. It was too busy focusing on all the good it was going to do in the distant future to worry about the harm it might cause in the present.

This, she says, is part of a pernicious philosophy known as long-termism, which holds that lives in the future are worth just as much as lives in the present. "It's taken a really big hold in Silicon Valley," Gebru says. This philosophy is used by tech companies and engineers to justify decisions in product design and software development that do not prioritise immediate crises such as poverty, racism and climate change or take other parts of the world into consideration.

Abeba Birhane, a senior fellow in Trustworthy AI at the Mozilla Foundation, says: "The way things are happening right now is predicated on the exploitation of people on the African continent. That model has to change. Not only is long-termism taking up so much of the AI narrative, it is something that is preoccupied with first-world problems."

"It's taking up a lot of air, attention, funding, from the kind of work Timnit is doing, the groundwork that specialist scholars of colour are doing on auditing data sets, auditing algorithms, exposing biases and toxic data sets."

In the wake of Gebru's departure from Google, some 2 000 employees signed a petition protesting against her dismissal. Although not acknowledging any culpability, Sundar Pichai, the chief executive of Alphabet, Google's parent company, said: "We need to assess the circumstances that led to Dr Gebru's departure, examining where we could have improved and led a more respectful process. We will begin a review of what happened to identify all the points where we can learn."

In November 2020, a civil war broke out in Ethiopia and once again Gebru's personal and professional worlds collided. As an Ethiopian, she has been vocal in raising the alarm about atrocities being committed, including running a fundraiser for victims of the conflict. As a computer scientist, she has watched in despair as artificial intelligence has enabled and exacerbated these atrocities.

On Facebook, hate speech and incitements to violence related to the Ethiopian conflict have spread with deadly consequences, with the company's algorithms and content moderators entirely unable or unwilling to stop it. For example, an investigation by The Continent last year, based on a trove of leaked Facebook documents, showed how the social media giant's integrity team flagged a network of problematic accounts calling for a massacre in a specific village. But no action was taken against the accounts. Shortly afterwards, a massacre took place.

The tide of the war was turned when the Ethiopian government procured combat drones powered by artificial intelligence. The drones targeted the rebel Tigray forces with devastating efficacy and have been implicated in targeting civilians too, including in the small town of Dedebit, where 59 people were killed when a drone attacked a camp for internally displaced people.

"That's why all of us need to be concerned about AI," says Gebru. "It is used to consolidate power for the powerful. A lot of people talk about AI for the social good. But to me, when you think of the current way it is developed, it is always used for warfare. It's being used in a lot of different ways by law enforcement, by governments to spy on their citizens, by governments to be at war with their citizens, and by corporations to maximise profit."

Once again, Gebru is doing something about it. Earlier this year, she launched the Distributed Artificial Intelligence Research Institute (Dair). The clue that Dair operates a little differently is in the word "distributed".

Instead of setting up in Silicon Valley, Dair's staff and fellows will be distributed all around the world, rooted in the places they are researching.

"How do we ring the alarm about the bad things that we see, and how can we develop this research in a way that benefits our community?" Raesetje Sefala, Dair's Johannesburg-based research fellow, puts it like this: "At the moment, it is people in the Global North making decisions that will affect the Global South."

As she explains it, Dair's mission is to convince Silicon Valley to take its ethical responsibilities more seriously, but also to persuade leaders in the Global South to make better decisions and to implement proper regulatory frameworks. For instance, Gmail passively scans all emails in Africa for the purposes of targeted advertising, but the European Union has outlawed this to protect its citizens.

"Our governments need to ask better questions," says Sefala. "If it is about AI for Johannesburg, they should be talking to the researchers here."

So far, Dair's team is small: just seven people in four countries. So, too, is the budget.

"What we're up against is so huge, the resources, the money that is being spent, the unity with which they just charge ahead. It's daunting sometimes if you think about it too much, so I try not to," says Gebru.

And yet, as Gebru's Time magazine nod underscored, sometimes it is less about the money and more about the strength of the argument. On that score, Gebru and Dair are well ahead of Big Tech and their not quite all-powerful algorithms.

This article first appeared in The Continent, the pan-African weekly newspaper produced in partnership with the Mail & Guardian. It's designed to be read and shared on WhatsApp. Download your free copy here.

More here:
Timnit Gebru and the fight to make artificial intelligence work for Africa - Mail and Guardian

Artificial Intelligence and Sexual Wellness: The Future is Looking (And Feeling) Good – Gizmodo Australia

What does artificial intelligence have to do with sex? No, it's not a setup for a dirty joke. It's actually a question we recently asked the man in charge of tech at the world's largest sexual wellness company.

When you think of technology and innovation while talking about sexual wellness devices (the term we prefer to use for sex toys), it's likely you think of the speeds of a vibrator, or an app that controls something you use in the bedroom. But it goes much deeper than that. And the possibilities of where it can go in the future, thanks to tech such as artificial intelligence (AI), are as mind-blowing as an orgasm (at least for tech nerds like us).

The Lovehoney Group is on a mission to promote sexual happiness and empowerment through design, innovation and research and development. And after chatting with The Lovehoney Group's chief engineering and production officer Tobias Zegenhagen, it's easy to see just how much tech is actually involved in the sexual wellness industry.

But what if it could go one step further? What if a device just knew what felt good? Enter AI.

Currently, the user or their partner is the one controlling certain buttons, either on the device or a remote control. But what if the device could be the one doing the controlling?

"Algorithms, AI sensing your responses, then using that data in order to intelligently drive the toy the way you want it," Zegenhagen said, describing a future that isn't all that far away: an AI controlling a toy based on your movements and reactions, and learning from the previous data it's pulled from you.

"You are getting information and you use that information intelligently in order to fulfil a user need."

It's pretty straightforward when it's broken down like that.

Lovehoney Group has a product in the market already, the We-Vibe Chorus, which allows you to, via an app, share vibrations during sex. Chorus matches its vibration intensity to the strength of your grip, with the idea being that it's completely in tune with you. The Chorus has a capacitive sensor that senses the act of sexual intercourse. During PIV sex, it senses the touching of the two bodies, and according to these touches, it controls the toy.

"It is a straightforward algorithm," Zegenhagen said.
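
As a rough illustration of the sensor-to-motor loop Zegenhagen describes, here is a minimal Python sketch: a capacitive reading is mapped to a discrete vibration level, with light smoothing so the intensity tracks grip strength without jitter. Every name, range, and constant is invented for the example; this is not We-Vibe's firmware or API.

```python
# Illustrative only: maps a raw capacitive-sensor reading onto a motor
# intensity level, the way Chorus matches vibration to grip strength.
def grip_to_level(raw: int, raw_max: int = 1023, levels: int = 10) -> float:
    """Clamp a raw reading to [0, raw_max] and scale it to 0..levels."""
    raw = max(0, min(raw, raw_max))
    return raw / raw_max * levels

class ChorusLikeController:
    def __init__(self, smoothing: float = 0.3):
        self.smoothing = smoothing  # low-pass factor so intensity doesn't jitter
        self.level = 0.0

    def update(self, raw_reading: int) -> int:
        """Move the current level part-way toward the new target and return it."""
        target = grip_to_level(raw_reading)
        self.level += self.smoothing * (target - self.level)
        return round(self.level)

controller = ChorusLikeController()
for reading in [120, 480, 900, 650]:  # simulated sensor samples
    print(controller.update(reading))
```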

It actually makes a lot of sense. If you think about each of the sexual partners you've had throughout your life, no one's body is the same.

"How you move is individual and changes all the time from person to person, from day to day," Zegenhagen said, adding that what you want during sex is also individual.

"Controlling the toy in general, and then individualising it to the person. That is where I see AI coming in."

There's an immense amount of promise. But it's important Lovehoney Group (and their peers, of course) use technology for the right purpose. That is, not using tech like AI just for the sake of it, but making sure it offers something of benefit to the sexual experience. And that data privacy is front and centre.

"It is definitely in our core to try to innovate, and we need to research in order to better understand user needs, and to use technology in order to advance and to innovate," Zegenhagen explained. But it isn't that straightforward. There's an insane amount of people at Lovehoney Group in the R&D (research and development) space.

"If you compare it with other technological fields or areas, what is real particular in this case is that the requirements that you formulate are very blurry and very individual," he said. "If you ask somebody, 'What does sexual fulfillment mean for you?', 'What is a perfect orgasm?', you could ask a hundred people and you get 500 answers."

Unlike with, say, a phone, when it comes to sexual wellness, it's very difficult for a user to state the actual need. But as Zegenhagen explained, it is also very difficult to then verify that the need is actually being fulfilled by the technology. That's without even taking into consideration any biological and neurological factors.

"We have a rough understanding of how touch works and how we perceive stimulation," Zegenhagen said. "But do we know all the mechanisms behind it? Absolutely not. What happens when I touch a rough surface with my hand? How do my mechanical receptors perceive that? How is that being transferred to the brain? All this is pretty much unclear."

While a sexual wellness device isn't the same as medication, the closest comparison is probably with developing a new drug. You answer a need, test it, tweak it, test it on a broader audience, but everyone's response to that medication will be different.

"The human being is too complex to fully understand," he added.

"I think that the easiest technical solution to meet a user need is the best technical solution, not the most complex one."

"You don't have to be technically complex to be innovative. You don't have to be technically complex to meet a user need; it has to be as simple as possible."

Well, yes, that's true. It would definitely kill the mood if you had to read a 30-page user manual or learn something needed to be charged, paired, updated, etc., the moment you're about to use it.

"There is a huge playground for technology in our field," Zegenhagen said.

With AI offering all sorts of benefits to our sexual wellness, the future sure is looking (and feeling) good.

Read more:
Artificial Intelligence and Sexual Wellness: The Future is Looking (And Feeling) Good - Gizmodo Australia

Dress Codes | The First Amendment Encyclopedia

In this 2013 photo, Mary Beth Tinker, 61, shows an old photograph of herself with her brother John Tinker to the Associated Press during an interview in Washington. In Tinker v. Des Moines Independent Community School District (1969), the Court affirmed students' First Amendment rights to free speech. (AP Photo by Manuel Balce Ceneta, used with permission from the Associated Press)

Dress codes are typically implemented by school districts and employers to promote learning, safety, and image. Although such regulations face First Amendment challenges by students, parents, and employees, the courts generally support the schools and employers.

School dress codes that merely exclude types of clothing, such as gang colors or provocative attire, tend to be enacted without controversy. When codes require uniform-like attire, however, many parents and children object.

The Supreme Court has never directly addressed school dress codes. In Tinker v. Des Moines Independent Community School District (1969), which involved high school students wearing black armbands to protest the Vietnam War, the Court affirmed students' First Amendment rights to free speech. Although the Court's decision upheld students' right to express themselves through certain items they wear, the Court has never specified whether that right bars uniforms, dress codes, or grooming requirements.

Faced with increasing student-discipline problems, particularly from gang violence (involving gangs whose members often identified themselves through items of clothing) and a rise in more prurient clothing in the 1980s and 1990s, school systems in the 1990s began to introduce dress codes, school uniforms, and uniform-like dress codes.

In two State of the Union addresses, President Bill Clinton advocated public school uniforms, similar to those in parochial schools and many public schools overseas. The number of schools that adopted uniforms is not known, but in California, where they were first mandated, at least 50 schools abandoned their uniform requirements between 2000 and 2002.

Short of restricting pure political expression that does not disrupt learning, school officials have much constitutional latitude. The law in this area is far from settled, and the courts frequently side with the schools when dress requirements are challenged by students and parents.

In practice, however, the bitterness and the cost of litigation have narrowed the practical options available to school administrators and school boards.

If school officials attempt to punish students who exercise their expressive rights by wearing buttons, writing on fingernails, or displaying protest messages on shirts, they could find themselves slapped with protected-speech or petition-action lawsuits. In addition, in districts that have imposed incentives to increase participation in voluntary uniform and uniform-like dress codes, threats of or actual lawsuits have quickly emerged to halt this allegedly coercive practice.

Opponents of dress codes and uniforms fall into a few categories.

Similarly, the motives of advocates of mandatory uniforms or uniform-like dress codes vary from those who want to de-emphasize clothing and promote the egalitarianism implicit in similar clothing to those who primarily wish to avoid fights with their children over what to wear.

School administrators and teachers are divided on the issue. Some, particularly those in underperforming or less disciplined school environments, welcome uniforms and uniform-like dress codes. Supporters also argue that uniforms help identify intruders on school property.

Opponents contend, however, that uniforms also make it more difficult to identify distressed students, who may reveal symptoms of psychological disorders by wearing unusual clothing. They also point out that teachers often waste the first minutes of class trying to determine which of their students who are not in uniform have waivers and which are violating the code.

In addition, friction and discipline problems may worsen as rule breakers crowd the principals office. Over time, students may simply stop wearing the uniform or uniform-like dress, or they may mock the policy by wearing the uniform in a revealing way.

Scholars have studied the effects of uniforms and dress codes on discipline and academic performance, but their findings have been mixed: Researchers, including sociologist David L. Brunsma at the University of Alabama at Huntsville, have concluded that no relationship exists, that the uniform or dress code is much less important than most other factors, or even that uniforms lower test scores.

Employers are entitled to enact dress codes, including uniforms, if there is a rational basis for the requirement, such as fostering a particular business image, encouraging harder work, or complying with public safety and health standards. They can ban anything reasonably deemed to be distracting from work, including body art.

Employers may also offer alternative dress codes, such as minimum requirements for casual Fridays. Although employees do not have a First Amendment right to dress in any way they choose to express themselves, they do have rights under the First Amendment to contest a dress code in a civil manner without fear of employer retribution.

The courts generally defer to employer judgments and have thus upheld prohibitions of torn clothing, sweat pants, short skirts or blouses, and hats.

Provided that the dress code is written clearly, is not excessive or onerous, is applied in a consistent fashion, and does not obviously discriminate on the basis of race, sex, religion, and perhaps ethnicity, the code is constitutional and does not violate Title VII of the Civil Rights Act of 1964.

A dress code that discriminated on the basis of gender would be struck down. However, dress codes that are consistent with social customs can be upheld. Thus, in Harper v. Blockbuster Entertainment (11th Cir., 1998), the 11th U.S. Circuit Court of Appeals upheld a rule requiring shorter haircuts for male employees.

This article was originally published in 2009. Henry F. Carey is Associate Professor of Political Science at Georgia State University.

Link:

Dress Codes | The First Amendment Encyclopedia

First Amendment audits – Wikipedia

First Amendment audits are a largely American social movement that usually involves photographing or filming from a public space. It is often categorized by its practitioners, known as auditors, as activism and citizen journalism that tests constitutional rights,[1] in particular the right to photograph and video record in a public space.[2][3] Auditors believe that the movement promotes transparency and open government.[4] However, critics argue that audits are often confrontational, as auditors often refuse to self-identify or explain their activities.[5][6] Some auditors have also been known to enter public buildings asserting that they have a legal right to openly carry firearms, leading to accusations that auditors are engaged in intimidation, terrorism, and the sovereign citizen movement.[7][8][9]

Auditors tend to film or photograph government buildings, equipment, access control points and sensitive areas, as well as recording law enforcement or military personnel present.[10] Auditors have been detained, arrested, assaulted, had camera equipment confiscated, had weapons aimed at them, had their homes raided by a SWAT team, and been shot for video recording in a public place.[11][12][13][14][15][16] Such events have prompted police officials to release information on the proper methods of handling such activity.[17][18] For example, a document sponsored by the International Association of Chiefs of Police states that the use of a recording device alone is not grounds for arrest, unless other laws are violated.[19]

The practice is predominantly an American concept, but it has also been seen in other countries including the United Kingdom,[20][21] Canada, and India.[citation needed]

Auditors typically travel to a place that is considered public property, such as a sidewalk or public right-of-way, or a place open to the public, such as a post office or government building, and visibly and openly photograph and record buildings and persons in their view.[22]

In the case of sidewalk or easement audits, the conflict arises when a property owner or manager states, in substance, that photography of their property is not allowed. Sometimes, auditors will tell property owners upon questioning that they are photographing or recording for a story, or for their "personal use"; sometimes auditors do not answer questions at all.[23][24] Frequently, local law enforcement is called, and the auditor is sometimes reported as a suspicious person or alleged to have been on private property. Some officers will approach the auditors and request their identification and an explanation of their conduct. Almost universally, auditors will invoke the Fourth Amendment, with the belief that they are not required to identify themselves unless witnessed having just committed a crime, and quote the relevant law to the officer as the basis for their refusal to identify.[6][25] This sometimes results in officers arresting auditors for failing to identify themselves, obstruction of justice, disorderly conduct, or any potential or perceived crime that could be justified by the occasion.[26][27]

The legality of recording in public was first clearly established in the United States following the case of Glik v. Cunniffe,[28] which confirmed that restricting a person's right to film in public would violate their First and Fourth amendment rights. As the 7th Circuit Federal Court of Appeals explained in ACLU v. Alvarez, "[t]he act of making an audio or audiovisual recording is necessarily included within the First Amendment's guarantee of speech and press rights as a corollary of the right to disseminate the resulting recording. The right to publish or broadcast an audio or audiovisual recording would be insecure, or largely ineffective, if the antecedent act of making the recording is wholly unprotected."[29][30] However, the legality of the auditors' actions beyond mere filming are frequently subject to debate. As long as the auditor remains in a public place where they are legally allowed to be, they have the right to record anything in plain view, subject to very limited time, place, and manner restrictions.[31][32]

Some auditors occasionally yell insults, derogatory language, and vulgarities at police officers who attempt to stop them from recording or improperly demand identification.[10] Police will sometimes charge auditors with disorderly conduct when they engage in behavior that could be considered unlawful. For example, an auditor in San Antonio was prosecuted and convicted of disorderly conduct after an audit.[33] After the trial, the Chief of Police for the City of San Antonio stated "[the verdict] puts a dagger in the heart of their First Amendment excuse for insulting police officers..."[34] Despite the San Antonio Police Chief's statement, insulting the police is consistently treated as constitutionally protected speech.[35][36][37] In State of Washington v. Marc D. Montgomery, a 15-year-old won an appeal overturning his convictions for disorderly conduct and possession of marijuana on the grounds of free speech. Montgomery was arrested after shouting obscenities, such as "fucking pigs, fucking pig ass hole" at two police officers passing in their patrol car. Citing Cohen v. California, the Court ruled that Montgomery's words could not be classified as fighting words, and restricting speech based merely on its offensiveness would result in a "substantial risk of suppressing ideas in the process."[38]

The rights exercised in a typical audit are freedom of speech and freedom of the press in the First Amendment, freedom from unreasonable searches and seizures in the Fourth Amendment, and the right to remain silent in the Fifth Amendment of the United States Constitution.

Auditors attempt to exercise their First Amendment right to photograph and record in public while avoiding committing any crime. The reason for this stems from the Supreme Court's decision in Terry v. Ohio, which held that it was not a violation of the Fourth Amendment to detain someone when the officer has reasonable articulable suspicion that crime is "afoot". Further, in Hiibel v. Sixth Judicial District Court of Nevada, the Court held that in states that have stop-and-identify statutes, a person may be required to provide their name to an officer who has reasonable articulable suspicion that the person has committed, is committing, or is about to commit a crime.

The conflict with law enforcement officers generally arises because officers sometimes deem photography, in and of itself, "suspicious behavior" and use that as a reason to detain an Auditor and demand identification. Universally, Courts that have reviewed this specific issue have held that the fact that a person takes a photograph or makes an audio or video recording in a public place or in a place he or she has the right to be, does not constitute, in and of itself, a reasonable suspicion to detain the person, probable cause to arrest the person, or a sufficient justification to demand identification. Some states have even revised their penal code to reflect that issue.[39] Nonetheless, officers frequently detain or arrest auditors for "suspicious behavior".[40][41]

One of the main problems that auditors face in subsequent lawsuits is the pair of Supreme Court decisions in Harlow v. Fitzgerald and Anderson v. Creighton, which held that government officials, including officers, are shielded from liability and damages as long as their conduct does not violate "clearly established statutory or constitutional rights".[42] Therefore, while a Fourth Amendment seizure claim might exist for an auditor who stood on a public sidewalk and took pictures of a police station only to be handcuffed and placed in the back of a patrol car, a First Amendment claim would be dismissed because although a violation occurred, the right was not "clearly established".[43] Qualified immunity allows "all but the plainly incompetent or those who knowingly violate the law" to escape liability for egregious and obvious violations of civil rights.[44] So far the 1st, 3rd,[45] 5th, 7th,[46] 9th,[47] and 11th[48] Circuits have held that recording the police in the course of their official duties is a clearly established right.

Auditing can be controversial due to the confrontational tactics of some auditors, which some may see as intimidation or harassment.[49] In addition, many public employees are not familiar with handling people walking around silently recording their interactions. While the conduct is generally legal, such activity may cause some people to feel alarmed. Some auditors respond by citing independent research into relevant laws, by pointing out that they are already being recorded by cameras in the building, or by stating that there is no expectation of privacy in public.

Audits are even more confrontational when aggressive auditors engage in verbal disputes with government employees. Some auditors may use profane language during an audit. Some may confuse obscenity with profanity, and while the latter is generally protected by the First Amendment, the right to engage in a verbal dispute depends highly on the circumstances. While on public streets, parks, or sidewalks, the right to free speech is at its highest, as one is within a traditional public forum. However, in limited public forums, such as public buildings, meeting rooms, and other public lobbies, the right to free speech may be more limited.

One auditor stated that the goal of an audit is to "put yourself in places where you know chances are the cops are going to be called. Are they going to uphold the constitution, uphold the law ... or break the law?"[50] Auditors state that they seek to educate the public that photography is not a crime, while publicizing cases where officers illegally stop what is perceived as illegal conduct.[51][52]

An auditor selects a public facility and then films the entire encounter with staff and customers alike. If no confrontation or attempt to stop the filming occurs, then the facility passes the audit;[53] if an employee attempts to stop a filming event, it fails the audit.[54]

Some auditors are concerned that if officers are willing to harass, detain, and arrest auditors, who intentionally avoid doing anything that might be considered a crime, normal citizens might shy away from recording officers for fear of retaliation.[55][56] In 2017, Justice Jacques Wiener of the U.S. Court of Appeals for the 5th Circuit wrote a federal appeals decision in favor of an auditor who was detained for filming police officers: "Filming the police contributes to the public's ability to hold the police accountable, ensure that police officers are not abusing their power, and make informed decisions about police policy."

Original post:

First Amendment audits - Wikipedia

Is It Time to Set the First Amendment on FIRE? | Opinion – Newsweek

I miss the old ACLU.

You know the one I'm talking about: The American Civil Liberties Union that defended the First Amendment right of Nazis to march at Skokie, Illinois. The one that sided with homophobic pastor Fred Phelps and his church when it protested the funerals of dead American servicemen.

The ACLU's cases have sometimes involved terrible people with terrible causes saying terrible things. Nobody with good taste or decent morals and certainly no one on the left side of America's political spectrum would ordinarily choose to associate themselves with the infamous scoundrels and bigots the organization has occasionally aided over the years. Even so, it has usually been comforting to know that the ACLU is on the case. If Fred Phelps is protected by the Constitution, after all, then the rest of us are, too.

It's not always like that anymore.

Oh, the ACLU still takes on free speech cases and unpopular clients: Last month it argued an appeals case on behalf of a high school student who made a Holocaust joke. "In doing so, we were only doing what we have always done: defending speech rights for all, even those with whom we disagree," David Cole, the group's national legal director, wrote recently in The Nation.

But reporting in recent years suggests the ACLU has drifted away from its moorings as the nation's premier defender of the First Amendment, struggling instead to balance its commitment to free expression with progressive stances on behalf of racial and sexual minorities. That would reflect a growing notion on the left that perhaps the Trumpist Age of Disinformation has revealed the limits of unfettered expression as a democratic virtue.

The ACLU's old guard worries something is being lost. Take David Goldberger, the attorney who argued on behalf of the Skokie Nazis. "Liberals," he warned last year, "are leaving the First Amendment behind."

So it's interesting and maybe even encouraging to see another group step forward to claim the mantle. The Foundation for Individual Rights in Education (FIRE), a group that's waged free speech battles on university campuses around the country, announced this week that it is rebranding itself. FIRE is now the Foundation for Individual Rights and Expression, a name change that brings with it a broader mandate and a plan to spend $75 million over the next three years on free speech education and litigation.

"Once the ACLU backs off its traditional role, who else is there?" said Ira Glasser, who ran the organization for more than two decades and now sits on FIRE's advisory board. (Former ACLU president Nadine Strossen is also on FIRE's board.)

Let's backtrack a bit, and acknowledge that the progressive reconsideration of free speech is nothing if not understandable. The ACLU's own evolution was sparked by its 2017 efforts on behalf of neo-Nazis whose angry "Unite the Right" protests at Charlottesville culminated in the death of Heather Heyer and gave us then-President Donald Trump's ugly "very fine people on both sides" equivocation between the racist and anti-racist demonstrators. Maybe there's something to the idea that "First Amendment protections are disproportionately enjoyed by people of power and privilege," as one former ACLU staffer put it. And maybe there's something to the idea that the Internet-fueled explosion of lies and conspiracy theories means we're no longer competing in a "marketplace of ideas," but instead collectively being forced to slog through an exhausting swamp of falsehood. Even for its most committed adherents, there can be days when the First Amendment doesn't look so wonderful.

That's not the whole story, though.

Yes, the First Amendment protected the marchers at Skokie in 1978, but it was also a "crucial tool" for protesters during the Civil Rights Era. Maybe Westboro Baptist Church was protected in shouting its vile anti-gay slurs in public, but so were gay and lesbian demonstrations and newspapers that were the targets of would-be censors. In America, marginalized groups have been able to advance their cause because of our country's legal commitment to free speech.

"Especially for groups that are minorities, whether political dissidents or racial or other demographic minorities, (they) absolutely depend on robust free speech and are smothered by censorship," Strossen told me last year.

Indeed, the latest government-sponsored efforts to stifle speech (the "Don't Say Gay" bill in Florida, and any number of state bills intended to limit young people's access to books about racism and sexual identity) are aimed directly at the ability of those minority groups to tell their story. If those laws are defeated in court, it probably will happen because the First Amendment doesn't just protect people of power and privilege.

That makes free expression an idea worth continued defense by progressives, even in these confusing and dangerous times. David Goldberger worries the left is leaving the First Amendment behind. It's not too late to come back.

Joel Mathis is a writer based in Lawrence, Kansas. His work has appeared in The Week, Philadelphia Magazine, the Kansas City Star, Vice and other publications. His honors include awards for best online commentary from the Online News Association and (twice) from the City and Regional Magazine Association.

The views expressed in this article are the writer's own.

Continued here:

Is It Time to Set the First Amendment on FIRE? | Opinion - Newsweek

Jeff Lee: Reflect on the First Amendment – Amherst Bulletin

Last week's front-page story, "Advocacy for site of new Amherst school called unwarranted," reported that leaders of the political action committee Amherst Forward are arguing that Amherst citizens should refrain from offering an opinion on the best site for the roughly $100 million new elementary school and let the Elementary School Building Committee do its job.

The PAC doesn't seem to understand just how anti-democratic this message sounds. Perhaps Americans should continue to allow Congress to do its job and kick the gun-control can down the road for another 10 years. Maybe women should keep silent and let the Supreme Court do its job of overturning Roe v. Wade. And similarly, Russians should stay out of Putin's way while he does his insane job of barbarizing Eastern Ukraine.

Citizen engagement in the government decision-making process is a pillar of healthy democracies, and the right to voice one's opinion is enshrined in the U.S. Constitution. Perhaps Amherst Forward should reflect on whether its authoritarian narrative is really serving the Town of Amherst well.

Jeff Lee

Amherst

Continued here:

Jeff Lee: Reflect on the First Amendment - Amherst Bulletin