This Startup Is Lowering Companies' Healthcare Costs With AI – Entrepreneur

Healthcare costs are rapidly increasing. Companies that provide health insurance for their employees have been hit with higher and higher premiums every year, with no end in sight.

One Chicago-based startup experiencing explosive growth has been tackling this very problem. This company leverages artificial intelligence and chatbot technology to help employees navigate their health insurance and use less costly services. As a result, both the employee and employer end up saving money.

Justin Holland, CEO and co-founder of HealthJoy, has a strong grasp on how chatbots are going to change healthcare and save companies money in the process. I spoke with Holland to get his take on what CEOs need to know about their health benefits and how to contain costs.

What's the biggest problem with employer-sponsored health insurance? Why have costs gone up year after year faster than the rate of inflation?

One of the biggest issues for companies is that health insurance is kind of like giving your employees a credit card to go to a restaurant that doesn't have any prices. They are going to order whatever the waiter suggests to them that sounds good. They'll order the steak and lobster, a bottle of wine and dessert. Employees have no connection to the actual cost of any of the medical services they are ordering. Several studies show that the majority of employees don't understand basic insurance terms needed to navigate insurance correctly. And it's not their fault. The system is unnecessarily complex. Companies have finally started to realize that if they want to start lowering their healthcare costs, they need to start lowering their claims. The only way they are going to start doing that is by educating their employees and helping them to navigate the healthcare system. They need to provide advocates and other services that are always available to help.

I've had an advocacy service previously that was just a phone number and I never used it. I actually forgot to use it all year and only remembered I had it when they changed my insurance plan and I saw the paperwork again. How is HealthJoy different? Is this where chatbots come in?

Phone-based advocacy services are great but you've identified their biggest problem: no one uses them. They are cheap to provide, so a lot of companies will bundle them in with their employee benefits packages, but they have zero ROI or utilization. Our chatbot JOY is the hub for a lot of different employee benefits including advocacy. JOY's main job is to route people to higher quality, less expensive care. She is fully supported by our concierge staff here in Chicago. They do things like call doctors' offices to book appointments, verify network participation and much more. Our app is extremely easy to use and has been refined over the last three years to get the maximum engagement and utilization for our members.

I've played around with your app. You offer a lot more than just an advocacy service. I see that you can also speak with a doctor in the app.

Yes, advocacy through JOY and our concierge team really is just the glue that binds our cost-saving strategies. We also integrate telemedicine within the app so an employee can speak with a doctor 24/7 for free. This is another way we save companies money. We avoid those cases where someone needs to speak with a doctor in the middle of the night for a non-emergency and ends up at the emergency room or urgent care. Avoiding one trip to the emergency room can save thousands of dollars. Telemedicine has been around for a few years but, like advocacy, getting employees to use it has always been the big issue. Since we are the first stop for employees' healthcare needs, we can redirect them to telemedicine when it fits. We actually get over 50% of our telemedicine consults from when a member is trying to do something else. For example, they might be trying to verify if a dermatologist is within their insurance plan. We'll ask them if they want to take a photo of an issue and have an instant consultation with one of our doctors. This is one of the reasons that employers are now seeing utilization rates that are sometimes 18X the industry standard. Redirecting all these consultations online is a huge savings to companies.

What other services do you provide within the app?

We actually offer a lot of services, and the list is constantly growing. Employers can even integrate their existing offerings as well. Healthcare is best delivered as a conversation, and that's why our AI-powered chatbot is perfect for servicing such a wide variety of offerings. The great thing is that it's all delivered within an app that looks no more complex than Facebook Messenger or iMessage.

Right now we do medical bill reviews and prescription drug optimization. We'll find the lowest prices for a procedure, help people with their health savings account and push wellness information. Our platform is like an operating system for healthcare engagement. The more we can engage with a company's employees for their healthcare needs, the more we can save both the employer and employees money.

It sounds like you're trying to build the Siri of healthcare, no?

In a way, yes. Basically, we are trying to help employers reduce their healthcare costs by providing their employees with an all-in-one mobile app that promotes smart healthcare decisions. JOY will proactively engage employees, connect them with our benefits concierge team and redirect to lower-cost care options like telemedicine. We integrate each client's benefits package and wellness programs to deliver a highly personalized experience that drives real ROI and improves workplace health.

So if a company wants to launch HealthJoy to their employees, do they need to just tell them to download your app?

We distribute HealthJoy to companies exclusively through benefits advisors, who are experts in developing plan designs and benefits strategies that work, both for employees and the bottom line. We always want HealthJoy to be integrated within a thoughtful strategy that leverages the expertise the benefits advisor provides, and we rely on them to upload current benefits and plan information.

Marsha is a growth marketing expert, business advisor and speaker specializing in international marketing.

AI sale plan details sought from Centre – The Hindu

A Parliamentary Standing Committee has sought details from the government on its strategic disinvestment plans for national carrier Air India.

The department-related Parliamentary Standing Committee on Transport, Tourism and Culture, chaired by Rajya Sabha Member of Parliament Mukul Roy, is set to meet the Central government officials on Friday.

"To hear the views of the Ministry of Civil Aviation, Department of Investment and Public Asset Management (Ministry of Finance) and Air India on Disinvestment of Air India," the agenda of the meeting said.

The Cabinet Committee on Economic Affairs (CCEA), chaired by Prime Minister Narendra Modi, on June 28 gave its in-principle approval for the strategic disinvestment of Air India and its subsidiaries.

The CCEA also set up a group of ministers under Finance Minister Arun Jaitley to examine the modalities of the national carrier's stake sale. The ministerial group will decide upon the treatment of Air India's unsustainable debt, the hiving off of certain assets to a shell company, the de-merger and strategic disinvestment of three profit-making subsidiaries, the quantum of disinvestment and the universe of bidders.

Minister of State for Civil Aviation Jayant Sinha told the Rajya Sabha on Tuesday that the decision to divest a stake in Air India was based on government think-tank NITI Aayog's recommendations in May this year.

"In its recommendations, the Aayog had given the rationale for the disinvestment of Air India and has attributed the main reason as fragile finances of the company. AI has been incurring continuous losses and has huge accumulated losses," Mr. Sinha said in a written reply.

"Further, NITI Aayog in its report on Air India says that further support to an unviable non-priority company in a matured and competitive aviation sector would not be the best use of scarce financial resources of the Government," Mr. Sinha added.

Mr. Sinha said in the Lok Sabha on Thursday that Air India's market share on domestic routes has reduced from 17.9% in 2014-15 to 14.2% in 2016-17.

Air India has accumulated total debt of ₹48,876 crore till March 2017. The national carrier has been reporting continuous losses due to its high debt, with its net loss at ₹3,728 crore in 2016-17 compared with ₹3,836 crore in 2015-16.

Hours after the Union Cabinet gave its nod to Air India's strategic disinvestment, India's largest low-cost carrier IndiGo expressed interest in acquiring a stake in its airline business, mainly related to its international operations. Tata Sons was also reportedly in talks with the government seeking details on the national carrier's strategic disinvestment.

Google Researchers Create AI-ception with an AI Chip That Speeds Up AI – Interesting Engineering

Reinforcement learning algorithms may be the next best thing since sliced bread for engineers looking to improve chip placement.

Researchers from Google have created a new algorithm that has learned how to optimize the placement of the components in a computer chip, so as to make it more efficient and less power-hungry.

Typically, engineers can spend up to 30 hours producing a single floor plan for chip placement, a task known as chip floor planning. This complicated 3D design problem requires configuring hundreds, or even thousands, of components across a number of layers in a constrained area. Engineers manually design configurations that minimize the number of wires used between components, as a proxy for efficiency.

Because this is time-consuming, these chips are designed to only last between two and five years. However, as machine-learning algorithms keep improving year upon year, a need for new chip architectures has also arisen.

Facing these challenges, Google researchers Anna Goldie and Azalia Mirhoseini looked into reinforcement learning. These types of algorithms use positive and negative feedback to learn new and complicated tasks: the algorithm is "rewarded" or "punished" depending on how well it performs a task. It then generates tens to hundreds of thousands of new designs and ultimately learns an optimal strategy for placing the chip components.
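To make that idea concrete, here is a minimal, hypothetical sketch in the spirit of the description above, not the researchers' actual method: a REINFORCE-style policy gradient learns to place a handful of chained components on a tiny grid so that total wire length shrinks. The grid size, netlist, reward and hyperparameters are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
GRID = 4                      # 4x4 grid of candidate slots (toy example)
N_COMP = 5                    # toy netlist: component i is wired to component i+1
LR = 0.1
logits = np.zeros((N_COMP, GRID * GRID))   # one softmax placement policy per component

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def sample_placement():
    """Sample a distinct grid slot for each component from its current policy."""
    placement, used = [], set()
    for c in range(N_COMP):
        p = softmax(logits[c])
        for s in used:                       # mask slots that are already occupied
            p[s] = 0.0
        p /= p.sum()
        slot = int(rng.choice(GRID * GRID, p=p))
        used.add(slot)
        placement.append(slot)
    return placement

def wirelength(placement):
    """Total Manhattan distance along the chain of wired components (the cost)."""
    xy = [(s // GRID, s % GRID) for s in placement]
    return sum(abs(a[0] - b[0]) + abs(a[1] - b[1]) for a, b in zip(xy, xy[1:]))

baseline = 0.0
for step in range(3000):
    placement = sample_placement()
    reward = -wirelength(placement)              # shorter wires -> higher reward
    baseline = 0.95 * baseline + 0.05 * reward   # running baseline reduces variance
    advantage = reward - baseline
    for c, slot in enumerate(placement):
        grad = -softmax(logits[c])               # d log pi / d logits = onehot - pi
        grad[slot] += 1.0                        # (masking during sampling ignored here)
        logits[c] += LR * advantage * grad       # REINFORCE update

final = sample_placement()
print("learned placement:", final, "wirelength:", wirelength(final))
```

On a real chip the state, action space and reward are vastly richer (wirelength, congestion, density and more), but the loop has the same shape: sample a placement, score it, and nudge the policy toward higher-scoring placements.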

After their tests, the researchers checked their designs with electronic design automation software and discovered that their method's floor planning was much more effective than the layouts human engineers had designed. Moreover, the system was able to teach its human counterparts a new trick or two.

Progress in AI has been largely interlinked with progress in computer chip design. The researchers' hope is that their new algorithm will help speed up the chip design process and pave the way for new and improved architectures, which would ultimately accelerate AI.

John Lennox: 2084 and AI – mapping out the territory – The Irish News

WE humans are insatiably curious. We have been asking questions since the dawn of history. We've especially been asking the big questions about origin and destiny: Where do I come from and where am I going?

Their importance is obvious. Our answer to the first shapes our concepts of who we are, and our answer to the second gives us goals to live for.

Taken together, our responses to these questions help frame our worldview, the narrative that gives our lives their meaning.

The problem is that these are not easy questions, as we see from the fact that many and contradictory answers are on offer.

Yet, by and large, we have not let that hinder us. Over the centuries, humans have proposed some answers given by science, some by philosophy, some based on religion, others on politics, etc.

Two of the most famous futuristic scenarios are Aldous Huxley's 1932 novel Brave New World and George Orwell's novel 1984, published in 1949.

Both of them have, at various times, been given very high ranking as influential English novels. For instance, Orwell's was chosen in 2005 by Time magazine as one of the 100 best English-language novels from 1923 to 2005.

Both novels are dystopian: that is, according to the Oxford English Dictionary, "they describe an imaginary place or condition that is as bad as possible".

However, the really bad places that they describe are very different, and their differences, which give us helpful insights that will be useful to us later, were succinctly explained by sociologist Neil Postman in his highly regarded work Amusing Ourselves to Death: "Orwell warns that we will be overcome by an externally imposed oppression.

"But in Huxley's vision, no Big Brother is required to deprive people of their autonomy, maturity and history. As he saw it, people will come to love their oppression, to adore the technologies that undo their capacities to think.

"What Orwell feared were those who would ban books. What Huxley feared was there would be no reason to ban a book, for there would be no-one who wanted to read one.

"Orwell feared those who would deprive us of information. Huxley feared those who would give us so much that we would be reduced to passivity and egoism.

"Orwell feared that the truth would be concealed from us. Huxley feared that the truth would be drowned in a sea of irrelevance.

"Orwell feared we would become a captive culture. Huxley feared we would become a trivial culture... In short, Orwell feared that what we hate will ruin us. Huxley feared that what we love will ruin us."

Orwell introduced ideas of blanket surveillance in a totalitarian state, of "thought control" and "newspeak", ideas that nowadays increasingly come up in connection with developments in artificial intelligence (AI), particularly the attempt to build computer technology that can do the sorts of things that a human mind can do - in short, the production of an imitation mind.

Billions of dollars are now being invested in the development of AI systems, and not surprisingly, there is a great deal of interest in where it is all going to lead...

BSC Participates in State Observatory Initiative Leveraging Big Data and AI That Aims to Detect, Prevent Epidemics – HPCwire

July 17, 2020 – Catalonia has created the first State Epidemiological Observatory, which will use Big Data and artificial intelligence techniques to generate a new collection of innovative epidemiological models for public health institutions that help them prevent, detect early and mitigate the spread of epidemics.

This public-private initiative, which is part of the Catalonia.AI strategy, joins the efforts of the Catalan Government, medical and health institutions (Germans Trias i Pujol Hospital and Fundación Lucha contra el Sida), leading technological research centres (BSC, CIDA, Eurecat, URV and CSIC), mobile phone operators (Telefónica, Orange and GSMA) and Mobile World Capital Barcelona.

The Barcelona Supercomputing Center (BSC) is one of the participating centres, and its task will be to collaborate in the development of a pandemic model for future prevention, including all data sources, as well as in data storage, computing, health data management and meteorological data computing.

In the BSC Life Science department, an integrated geographic information system is being developed that includes COVID-19 case data, hospital situations, population data, weather data, and mobility patterns between regions.

Given the heterogeneity and complexity of the data, tools are being developed for the analysis and visualization of information based on complex network analysis and time series. In a complementary way, and in collaboration with the group led by Prof. Alex Arenas of Rovira i Virgili University, the information system is being used to calibrate and validate predictive epidemiological models.
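As a rough illustration of what calibrating a predictive epidemiological model against such data can look like, here is a minimal sketch, not the Observatory's actual models: a simple discrete-time SEIR model whose transmission rate is fitted to a short series of daily case counts by grid search. The case numbers and parameter values below are made up; a real system would ingest the case, mobility and weather data described above.

```python
import numpy as np

def seir(beta, sigma=1/5.2, gamma=1/7, N=1e6, I0=10, days=30):
    """Simulate daily new symptom onsets with a discrete-time SEIR model."""
    S, E, I, R = N - I0, 0.0, float(I0), 0.0
    new_cases = []
    for _ in range(days):
        infections = beta * S * I / N     # S -> E
        onsets = sigma * E                # E -> I
        recoveries = gamma * I            # I -> R
        S = S - infections
        E = E + infections - onsets
        I = I + onsets - recoveries
        R = R + recoveries
        new_cases.append(onsets)
    return np.array(new_cases)

# Hypothetical observed daily case counts used only for this example
observed = np.array([3, 4, 6, 8, 11, 15, 21, 28, 38, 52])

best_beta, best_err = None, np.inf
for beta in np.linspace(0.1, 1.0, 91):    # grid search over the transmission rate
    err = np.sum((seir(beta, days=len(observed)) - observed) ** 2)
    if err < best_err:
        best_beta, best_err = beta, err

print(f"calibrated beta ~ {best_beta:.2f}, implied R0 ~ {best_beta * 7:.1f}")
```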

The objective of the initiative is to develop and provide an integrated information system that, on the one hand, generates periodic reports to monitor the health situation and, on the other, supports the development and application of epidemiological models as a tool to assist decision-making by health authorities.

Big Data for the prevention of epidemics

The creation of the Epidemiological Observatory consists of two phases (corresponding to the years 2020 and 2021, respectively), the first of which will consist of creating and analyzing a mathematical model to compare and predict specific patterns of epidemics such as influenza and COVID-19. This is the objective of the Observatory's first research project, "Big Data for the prevention of epidemics," which will apply Big Data technology and artificial intelligence to clinical data, mobile phone data, census data and meteorological data. All data will be processed in anonymized form at all times and will in no case allow users to be traced. The first results of this phase are expected in the fall.

This project aims, on the one hand, to massively improve the model of pandemic spread thanks to the inclusion of clinical, mobile, census and climatological data and, on the other hand, to provide public health organizations with a decision-support system based on innovative epidemiological models that allow them to anticipate and draw up plans to face epidemics, as well as to improve the management of public resources in areas such as the health system, mobility and education, adapting them to real needs.

Treatment of data and privacy of people

One of the priorities of this Observatory is to define and implement a data processing model that fully guarantees the privacy of individuals. In this sense, in order to study the transmission of the virus, we will work with aggregate mobility data, and not individual data, following the recommendations and best practices of the European Commission regarding the use of mobile data to combat COVID-19.

The budget associated with the Observatory is €600,000 for the two planned years of the project, to which must be added the costs of transferring data from mobile operators in phase 2 (in phase 1 the data are transferred free of charge). The Catalan Government will finance 50% of the project and the rest will be provided by participating partners and external funds through competitive calls.

The Observatory will be located at the Germans Trias i Pujol Hospital (HGTiP) and will have the scientific coordination of Dr. Bonaventura Clotet, head of the HGTiP Infectious Diseases Service and president of the Fight Against AIDS Foundation (FLS).

About BSC

Barcelona Supercomputing Center-Centro Nacional de Supercomputación (BSC-CNS) is the national supercomputing centre in Spain. The centre specialises in high performance computing (HPC) and manages MareNostrum, one of the most powerful supercomputers in Europe, located in the Torre Girona chapel. BSC is involved in a number of projects to design and develop energy-efficient, high-performance chips, based on open architectures like RISC-V, for use within future exascale supercomputers and other high performance domains. The centre leads the pillar of the European Processor Initiative (EPI) that is creating a high performance accelerator based on RISC-V. More information: www.bsc.es

Source: Barcelona Supercomputing Center

WIMI Holographic AR+AI Vision Drives a New Wave of 5G Applications – Yahoo Finance

NEW YORK, NY / ACCESSWIRE / July 10, 2020 / As a leader in holographic vision, WIMI Hologram Cloud (WIMI) specializes in computer vision holographic cloud services. Its coverage spans holographic AI computer vision synthesis, holographic visual presentation, holographic interactive software development, holographic AR online and offline advertising, holographic AR SDK payment, 5G holographic communication software development, holographic face recognition development and holographic AI face-swapping technology, making WIMI a comprehensive holographic cloud technology solutions provider. Its business applications are concentrated in five professional fields: home entertainment, light-field cinema, performing arts systems, commercial publishing systems and advertising display systems.

As the bandwidth of 5G holographic communication networks improves, the 5G holographic application market is expected to explode, with high-end applications such as holographic interactive entertainment and holographic conferencing gradually spreading into holographic social networking, holographic communication, holographic navigation, holographic family applications and other directions. WIMI plans to use holographic AI face recognition technology and holographic AI face-swapping technology as the core technologies supporting its holographic cloud platform services and 5G holographic communication applications across multiple innovative systems.

The World Artificial Intelligence Conference (WAIC) Cloud Summit 2020 was held in Shanghai. Affected by the COVID-19 epidemic, WAIC held a "cloud exhibition" for the first time this year, including human holographic projection and a real-time 3D cloud guest experience. Notably, with the conference held under the shadow of the epidemic, Internet leaders participated in different forms. Jack Ma, founder of Alibaba, and Elon Musk, co-founder and CEO of Tesla, addressed the conference via video link and holographic projection.

Ma said the epidemic had given him three points to reflect on, the first being that mankind cannot leave the Earth, but the Earth can do without mankind.

Ma said the epidemic has made us understand how estranged we are from the Earth, and that the Earth might be better off without us. Because of the outbreak, deer in Nara, Japan, are eating fewer snacks and becoming healthier.

Holography is not the latest technology; it dates back to the 1960s. Holographic imaging technology uses the principles of interference and diffraction to record and reproduce a real image of an object, which requires processing more than 100 times the information handled by an ordinary camera and places high demands on the capture, processing and transmission platforms. The earliest holographic technology was therefore only used to process static photos. With ultra-low-latency transmission of HD images, holographic imaging is expected to be commercialized with 5G.

Apple is also seen by some as critical to the future of augmented reality, despite limited traction for ARKit so far and its absence from smartglasses (again so far). Yet Facebook, Microsoft and others are arguably more important to where the market is today. While there are more AR platforms than just these companies, they represent the top of the pyramid for three different types of AR roadmap. And while startup insurgents could make a huge difference, big platforms can exert disproportionate influence on the future of tech markets. Facebook has talked about its long term potential to launch smartglasses, but in 2020 its primary presence in the AR market is as a mobile AR platform (note: Facebook is also a VR market leader with Oculus). Although there are other ways to define them, mobile AR platforms can be thought of as three broad types:

Mobile AR software's installed base and commercial dynamics look like a variant of mobile, which plays to Facebook's strengths. Advertising could be mobile AR's biggest revenue stream both short and long term, making it critical to an advertising driven company like Facebook. It's worth noting that a lot of this adspend is going towards traditional ad units viewed around user generated mobile AR content (i.e. filters and lenses on messaging platforms), rather than just mobile AR ad units. This does not mean that sponsored mobile AR filters and lenses are not a significant part of the mix going forward.

As Digi-Capital has said since 2016, only Tim Cook and his inner circle really know what Apple is going to do in AR before they do it. This was proven in 2017, when Apple caught many (including us) by surprise with the launch of ARKit. The same thing happened in 2019, when Apple added a triple camera system to the back of the iPhone 11 Pro, instead of the rear facing depth sensor we had anticipated. So where in 2017 we fundamentally revised our forecasts post-ARKit, in 2020 we've done the same thing based on a revised view of Apple's potential roadmap.

As an emerging industry, the global holographic AR market has great growth potential and has attracted significant investment since 2016, contributing greatly to the growth of the industry. Several organizations are investing heavily in research and development of the technology to develop solutions for businesses and consumer groups. Over the years, holographic augmented reality has been widely used in games, media and marketing. Its growing use in different sectors such as advertising, entertainment, education and retail is expected to drive demand during the forecast period.

Chart: Global holographic AR market size by revenue, 2016-2025

WIMI Hologram Cloud (WIMI) has built a set of complementary systems. A real-time multi-angle shooting and modeling system performs full-dimensional image scanning of collected objects and synthesizes them into a three-dimensional model in real time. A six-degree matrix optical field system uses multiple light sources to construct the imaging field of the holographic virtual image. A binocular parallax intelligent enhancement system dynamically tracks object trajectories and adjusts the lighting during acquisition to keep binocular disparity balanced. A multi-image dynamic fusion system applies wide-angle, multi-dimensional image acquisition in narrow spaces for a miniaturized cloud-vision holographic warehouse. A high-speed holographic image processing algorithm handles image information at up to 10 GB/s while preserving rendering quality. A stealth polyester optical imaging film, the key component of holographic imaging, allows the holographic image to be displayed clearly. Holographic virtual figure and sound reconstruction technology presents virtual humans using human skeletal motion capture, real-time image rendering, speech recognition and sound simulation. Finally, the holographic cloud platform is an interactive platform covering the whole country, with data storage, image restoration and holographic social features. Through the combination of these systems, WIMI builds a complete 5G holographic communication application platform to support various online terminals and personal devices, while expanding into mainstream 5G holographic applications such as holographic social communication, holographic family interaction, holographic celebrity interaction, holographic online education and holographic online conferencing.

With the arrival of 5G, early scene applications will accelerate the development of VR/AR, and the growth rate of the Chinese market will be higher than that of the world. With 5G, the communication and transmission shortcomings of VR/AR and other immersive scenarios will be addressed, and the commercial use of immersive VR/AR gaming is expected to accelerate. According to research by the China Academy of Information and Communications Technology, the global virtual reality industry is worth close to 100 billion yuan, and the average annual compound growth rate from 2017 to 2022 is expected to exceed 70%. According to Greenlight, the global virtual reality industry will exceed 200 billion yuan in 2020, including a VR market of 160 billion yuan and an AR market of 45 billion yuan. For the Chinese market, according to IDC's latest "IDC Global Expenditure Guide for Augmented and Virtual Reality," spending on the AR/VR market in China will reach US$65.21 billion by 2023, a significant increase from the US$6.53 billion forecast for 2019. Meanwhile, the CAGR for 2018-2023 will reach 84.6%, higher than the global market growth rate of 78.3%.

The holographic cloud business will be deeply integrated with 5G. With 5G's high data rates and low latency, the transmission delay from terminal to business server will average about 6 ms, well below 4G network delays, ensuring that holographic AR remote communication and data transmission run without stuttering, with low latency and with richer, more diverse multi-terminal interaction, making device-plus-cloud collaboration more efficient. Enhanced mobile broadband (eMBB) and Internet of Things (IoT) applications will allow WIMI's holographic cloud holographic AR advertising and entertainment businesses, as well as holographic interactive entertainment, holographic meetings, holographic social networking, holographic communication and holographic family applications, to grow effectively on the basis of its core 5G + AI holographic facial recognition and AI face-swapping technologies.

Due to the changes in 5G communication network bandwidth, high-end holographic applications are increasingly applied to social media, communication, navigation, home applications and other scenarios. WIMI's plan is to provide holographic cloud platform services over 5G communication networks based on two core technologies: holographic artificial intelligence facial recognition and holographic artificial intelligence facial modification.

Humanity moved from agricultural civilization, in which labor and tools met the demands of production and development, step by step into the era of industrial civilization, in which a variety of machines helped solve the problems of life and production. The WIMI team believes the development of artificial intelligence means that, in the future, more labor will be handed over to artificial intelligence, freeing human creativity and imagination and putting AI at humanity's service.

Nowadays, with the development of the Internet era, the smart-life scenes we used to see only in movies and TV shows are gradually appearing in real family life. The development of Internet platforms and technology companies has brought us closer and closer to the smart era. Smart devices and the smart home are just the beginning.

In the field of artificial intelligence, there has been a great deal of investment in recent years, amounting to tens of billions of dollars. Many players in the AI industry, such as WIMI, Alibaba's Dharma Institute, Huawei's 5G business and various basic research platforms, are exploring the future business opportunities of AI. Technology is making life better and bringing that future closer to reality.

Media Contact:

Company: WIMI
Name: Tim Wong
Tel: +86 10 89913328
Email: bjoverseasnews@gmail.com

SOURCE: WIMI

View source version on accesswire.com: https://www.accesswire.com/597017/WIMI-Holographic-ARAI-Vision-Drives-a-New-Wave-of-5G-Applications

Is AI More Threatening Than North Korean Missiles? – NPR

In this April 30, 2015, file photo, Tesla Motors CEO Elon Musk unveils the company's newest products in Hawthorne, Calif. (Ringo H.W. Chiu/AP)

One of Tesla CEO Elon Musk's companies, the nonprofit start-up OpenAI, makes an AI system that last week defeated some of the world's top gamers in an international video game (e-sport) tournament with a multimillion-dollar pot of prize money.

We're getting very good, it seems, at making machines that can outplay us at our favorite pastimes. Machines dominate Go, Jeopardy, Chess and as of now at least some video games.

Instead of crowing over the win, though, Musk is sounding the alarm. Artificial Intelligence, or AI, he argued last week, poses a far greater risk to us now than even North Korean warheads.

No doubt Musk's latest pronouncements make for good advertising copy. What better way to drum up interest in a product than to announce that, well, it has the power to destroy the world.

But is it true? Is AI a greater threat to mankind than the threat posed to us today by an openly hostile, well-armed and manifestly unstable enemy?

AI means, at least, three things.

First, it means machines that are faster, stronger and smarter than us, machines that may one day soon, HAL-like, come to make their own decisions and make up their own values and, so, even to rule over us, just as we rule over the cows. This is a very scary thought, not the least when you consider how we have ruled over the cows.

Second, AI means really good machines for doing stuff. I used to have a coffee machine that I'd set with a timer before going to bed; in the morning I'd wake up to the smell of fresh coffee. My coffee maker was a smart, or at least smart-ish, device. Most of the smart technologies, the AIs, in our phones, and airplanes, and cars, and software programs including the ones winning tournaments are pretty much like this. Only more so. They are vastly more complicated and reliable but they are, finally, only smart-ish. The fact that some of these new systems "learn," and that they come to be able to do things that their makers cannot do like win at Go or Dota is really beside the point. A steam hammer can do what John Henry can't but, in the end, the steam hammer doesn't really do anything.

Third, AI is a research program. I don't mean a program in high-tech engineering. I mean, rather, a program investigating the nature of the mind itself. In 1950, the great mathematician Alan Turing published a paper in a philosophy journal in which he argued that by the year 2000 we would find it entirely natural to speak of machines as intelligent. But more significantly, working as a mathematician, he had devised a formal system for investigating the nature of computation that showed, as philosopher Daniel Dennett puts it in his recent book, that you can get competence (the ability to solve problems) without comprehension (by merely following blind rules mechanically). It was not long before philosopher Hilary Putnam would hypothesize the mind is a Turing Machine (and a Turing Machine just is, for all intents and purposes, what we call a computer today). And, thus, the circle closes. To study computational minds is to study our minds, and to build an AI is, finally, to try to reverse engineer ourselves.

Now, Type 3 AI, this research program, is alive and well and a continuing chapter in our intellectual history that is of genuine excitement and importance. This, even though the original hypothesis of Putnam is wildly implausible (and was given up by Putnam decades ago). To give just one example: the problem of the inputs and the outputs. A Turing Machine works by performing operations on inputs. For example, it might erase a 1 on a cell of its tape and replace it with a 0. The whole method depends on being able to give a formal specification of a finite number of inputs and outputs. We can see how that goes for 1s and 0s. But what are the inputs, and what are the outputs, for a living animal, let alone a human being? Can we give a finite list, and specify its items in formal terms, of everything we can perceive, let alone, do?
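For readers who have never seen one, here is a minimal sketch of the kind of machine being described, with an invented rule table rather than anything from Turing's paper: a finite table of rules reads one tape cell at a time, writes a symbol, moves the head and changes state. This toy machine simply flips every bit on its tape (the "erase a 1 and replace it with a 0" step from the text) and halts at the first blank, following blind rules with no comprehension of what the bits mean.

```python
from collections import defaultdict

# (state, symbol_read) -> (symbol_to_write, head_move, next_state)
RULES = {
    ("flip", "1"): ("0", +1, "flip"),
    ("flip", "0"): ("1", +1, "flip"),
    ("flip", " "): (" ", 0, "halt"),   # blank cell: stop
}

def run(tape_string):
    """Run the toy machine on a finite input, using a tape that is blank elsewhere."""
    tape = defaultdict(lambda: " ", enumerate(tape_string))
    state, head = "flip", 0
    while state != "halt":
        write, move, state = RULES[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in range(min(tape), max(tape) + 1)).strip()

print(run("10110"))   # -> "01001"
```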

And there are other problems, too. To mention only one: We don't understand how the brain works. And this means that we don't know that the brain functions, in any sense other than metaphorical, like a computer.

Type 1 AI, the nightmare of machine dominance, is just that, a nightmare, or maybe (for the capitalists making the gizmos) a fantasy. Depending on what we learn pursuing the philosophy of AI, and as luminaries like John Searle and the late Hubert Dreyfus have long argued, it may be an impossible fiction.

Whatever our view on this, there can be no doubt that the advent of smart, rather than smart-ish, machines, the sort of machines that might actually do something intelligent on their own initiative, is a long way off. Centuries off. The threat of nuclear war with North Korea is both more likely and more immediate than this.

Which does not mean, though, that there is not in fact real cause for alarm posed by AI. But if so, we need to turn our attention to Type 2 AI: the smart-ish technologies that are everywhere in our world today. The danger here is not posed by the technologies themselves. They aren't out to get us. They are not going to be out to get us any time soon. The danger, rather, is our increasing dependence on them. We have created a technosphere in which we are beholden to technologies and processes that we do not understand. I don't mean you and me, that we don't understand: No one person can understand. It's all gotten too complicated. It takes a whole team or maybe a university to understand adequately all the mechanisms, for example, that enable air traffic control, or drug manufacture, or the successful production and maintenance of satellites, or the electricity grid, not to mention your car.

Now this is not a bad thing in itself. We are not isolated individuals all alone and we never have been. We are a social animal and it is fine and good that we should depend on each other and on our collective.

But are we rising to the occasion? Are we tending our collective? Are we educating our children and organizing our means of production to keep ourselves safe and self-reliant and moving forward? Are we taking on the challenges that, to some degree, are of our own making? How to feed 7 billion people in a rapidly warming world?

Or have we settled? Too many of us, I fear, have taken up a "user" attitude to the gear of our world. We are passive consumers. Like the child who thinks chickens come from supermarkets, we are hopelessly alienated from how things work.

And if we are, then what are we going to do if some clever young person somewhere, maybe a young lady in North Korea, writes a program to turn things off? This is a serious and immediately pressing danger.

Alva Noë is a philosopher at the University of California, Berkeley, where he writes and teaches about perception, consciousness and art. He is the author of several books, including his latest, Strange Tools: Art and Human Nature (Farrar, Straus and Giroux, 2015). You can keep up with more of what Alva is thinking on Facebook and on Twitter: @alvanoe

Analytics Insight Magazine Names ‘The 10 Most Innovative Global AI Executives’ – Business Wire

SAN JOSE, Calif. & HYDERABAD, India--(BUSINESS WIRE)--Analytics Insight Magazine, a brand of Stravium Intelligence, has named 'The 10 Most Innovative Global AI Executives' in its January issue.

The magazine issue features ten seasoned disruptors who have significantly contributed towards the AI-driven transformation of their respective organizations and industries. These foresighted innovators are driving the next-generation of intelligent offerings across current business landscapes globally. Here are the AI Executives who made the list:

Featuring as the Cover Story is Gary Fowler, who serves as the CEO, President, and Co-founder of GSD Venture Studios. Previously, he co-founded the top CIS accelerators GVA and SKOLKOVO Startup Academy, where the majority of these companies achieved success soon after their launch. Most recently, Gary co-founded Yva.ai with David Yang, one of Russia's most famous entrepreneurs.

The issue further includes:

Lane Mendelsohn: President of Vantagepoint AI, Lane is an experienced executive with a demonstrated history of working in the computer software industry. He is skilled in Business Planning, Analytical Skills, Sales, Enterprise Software, and E-commerce.

Chethan KR: Chethan is the CEO of SynctacticAI. He is an entrepreneur with 13+ years of experience in the IT and software development industry. He sets the vision for his company's platform, derives growth strategies and establishes partnerships with industries.

Christopher Rudolf: Christopher is the Founder and CEO of Volv Global and has over 30 years of experience, as a technology entrepreneur and business advisor, working with many blue-chip organisations to solve their critical global scale data problems.

Kalyan Sridhar: Kalyan Sridhar is the Managing Director at PTC, responsible for managing its operations in India, Sri Lanka and Bangladesh. He has 28 years of experience in senior executive roles spanning Sales, Business Development, Business Operations and Channel Sales in the IT industry.

Kashyap Kompella: Kashyap serves as the CEO and Chief Analyst of rpa2ai Research, and has 20 years of experience as an Industry Analyst, Hands-on Technologist, Management Consultant and M&A Advisor to leading companies and startups across sectors.

Kumardev Chatterjee: Kumardev is the Co-founder and CEO of Unmanned Life and also serves as Founder and Chairman of the European Young Innovators Forum. He holds an MSc. in Computer Science from University College London.

Niladri Dutta: Niladri, CEO at Tardid Technologies, has a background in protocol stacks, large transactional applications, and analytics. He ensures that the company has a long-term strategy in place that keeps the team and customers excited all the time.

Sarath SSVS: Sarath serves as the CEO and Founder of two AI-driven companies, SeeknShop.IO and IntentBI. He is a seasoned innovator with over 13 years of experience in machine learning, data science, and product management.

Prithvijit Roy: Prithvijit is the Founder and CEO of BRIDGEi2i Analytics Solutions. His specialties are business analytics, big data, data mining, shared services, knowledge process outsourcing (KPO), analytics consulting services, managed analytics services and more.

The disruptive wave of AI has made a significant impact across multiple industries. AI breakthroughs have influenced the vision and practices of top industry executives and pushed them towards becoming innovative AI pioneers in their own spaces. Ushering in the new era, more and more leaders are spurring innovation and spearheading the transformation journey to translate their revamped vision into best AI practices.

Read the detailed coverage here. For more information, please visit https://www.analyticsinsight.net.

About Analytics Insight

Analytics Insight is an influential platform dedicated to insights, trends, and opinions from the world of data-driven technologies. It monitors developments, recognition, and achievements made by AI, big data and analytics companies across the globe. The Analytics Insight Magazine features opinions and views from top leaders and executives in the industry who share their journey, experiences, success stories, and knowledge to grow profitable businesses.

To set up an interview or advertise your brand, contact info@analyticsinsight.net.

Service that uses AI to identify gender based on names looks incredibly biased – The Verge

Some tech companies make a splash when they launch, others seem to bellyflop.

Genderify, a new service that promised to identify someone's gender by analyzing their name, email address, or username with the help of AI, looks firmly to be in the latter camp. The company launched on Product Hunt last week, but picked up a lot of attention on social media as users discovered biases and inaccuracies in its algorithms.

Type the name "Meghan Smith" into Genderify, for example, and the service offers the assessment: "Male: 39.60%, Female: 60.40%." Change that name to "Dr. Meghan Smith," however, and the assessment changes to: "Male: 75.90%, Female: 24.10%." Other names prefixed with "Dr" produce similar results, while inputs seem to generally skew male. "Test@test.com" is said to be 96.90 percent male, for example, while "Mrs Joan smith" is 94.10 percent male.

The outcry against the service has been so great that Genderify tells The Verge it's shutting down altogether. "If the community don't want it, maybe it was fair," said a representative via email. "Genderify.com has been taken offline and its free API is no longer accessible."

Although these sorts of biases appear regularly in machine learning systems, the thoughtlessness of Genderify seems to have surprised many experts in the field. The response from Meredith Whittaker, co-founder of the AI Now Institute, which studies the impact of AI on society, was somewhat typical. "Are we being trolled?" she asked. "Is this a psyop meant to distract the tech+justice world? Is it cringey tech April fools day already?"

The problem is not that Genderify made assumptions about someone's gender based on their name. People do this all the time, and sometimes make mistakes in the process. That's why it's polite to find out how people self-identify and how they want to be addressed. The problem with Genderify is that it automated these assumptions, applying them at scale while sorting individuals into a male/female binary (and so ignoring individuals who identify as non-binary) and reinforcing gender stereotypes in the process (such as: if you're a doctor, you're probably a man).

The potential harm of this depends on how and where Genderify was applied. If the service was integrated into a medical chatbot, for example, its assumptions about users' genders might have led to the chatbot issuing misleading medical advice.

Thankfully, Genderify didn't seem to be aiming to automate this sort of system, but was primarily designed to be a marketing tool. As Genderify's creator, Arevik Gasparyan, said on Product Hunt: "Genderify can obtain data that will help you with analytics, enhancing your customer data, segmenting your marketing database, demographic statistics, etc."

In the same comment section, Gasparyan acknowledged the concerns of some users about bias and ignoring non-binary individuals, but didn't offer any concrete answers.

One user asked: "Let's say I choose to identify as neither Male or Female, how do you approach this? How do you avoid gender discrimination? How are you tackling gender bias?" To which Gasparyan replied that the service makes its decisions based on already existing binary name/gender databases, and that the company was actively looking into ways of improving the experience for transgender and non-binary visitors by separating the concepts of name/username/email from gender identity. It's a confusing answer given that the entire premise of Genderify is that this data is a reliable proxy for gender identity.

The company told The Verge that the service was very similar to existing companies who use databases of names to guess an individual's gender, though none of them use AI.

"We understand that our model will never provide ideal results, and the algorithm needs significant improvements, but our goal was to build a self-learning AI that will not be biased as any existing solutions," said a representative via email. "And to make it work, we very much relied on the feedback of transgender and non-binary visitors to help us improve our gender detection algorithms as best as possible for the LGBTQ+ community."

Update Wednesday July 29, 12:42PM ET: Story has been updated to confirm that Genderify has been shut down and to add additional comment from a representative of the firm.

Can a Crowdsourced AI Medical Diagnosis App Outperform Your Doctor? – Scientific American

Shantanu Nundy recognized the symptoms of rheumatoid arthritis when his 31-year-old patient suffering from crippling hand pain checked into Mary's Center in Washington, D.C. Instead of immediately starting treatment, though, Nundy decided first to double-check his diagnosis using a smartphone app that helps with difficult medical cases by soliciting advice from doctors worldwide. Within a day, Nundy's hunch was confirmed. The app had used artificial intelligence (AI) to analyze and filter advice from several medical specialists into an overall ranking of the most likely diagnoses. Created by the Human Diagnosis Project (Human Dx), an organization that Nundy directs, the app is one of the latest examples of growing interest in human-AI collaboration to improve health care.

Human Dx advocates the use of machine learning, a popular AI technique that automatically learns from classifying patterns in data, to crowdsource and build on the best medical knowledge from thousands of physicians across 70 countries. Physicians at several major medical research centers have shown early interest in the app. Human Dx on Thursday announced a new partnership with top medical profession organizations, including the American Medical Association and the Association of American Medical Colleges, to promote and scale up Human Dx's system. The goal is to provide timely and affordable specialist advice to general practitioners serving millions of people worldwide, in particular so-called "safety net" hospitals and clinics throughout the U.S. that offer access to care regardless of a patient's ability to pay.

"We need to find solutions that scale the capacity of existing doctors to serve more patients at the same or cheaper cost," says Jay Komarneni, founder and chair of Human Dx. Roughly 30 million uninsured Americans rely on safety net facilities, which generally have limited or no access to medical specialists. Those patients often face the stark choice of either paying out of pocket for an expensive in-person consultation or waiting for months to be seen by the few specialists working at public hospitals, which receive government funding to help pay for patient care, Komarneni says. Meanwhile studies have shown that between 25 percent and 30 percent (pdf) of such expensive specialist visits could be conducted by online consultations between physicians while sparing patients the additional costs or long wait times.

Komarneni envisions augmenting or extending physician capacity with AI to close this specialist gap. Within five years Human Dx aims to become available to all 1,300 safety net community health centers and free clinics in the U.S. The same remote consultation services could also be made available to millions of people around the world who lack access to medical specialists, Komarneni says.

When a physician needs help diagnosing or treating a patient, they open the Human Dx smartphone app or visit the project's Web page and type in their clinical question as well as their working diagnosis. The physician can also upload images and test results related to the case and add details such as any medication the patient takes regularly. The physician then requests help, either from specific colleagues or from the network of doctors who have joined the Human Dx community. Over the next day or so, Human Dx's AI program aggregates all of the responses into a single report. It is the new digital equivalent of a "curbside consult," where a physician might ask a friend or colleague for quick input on a medical case without setting up a formal, expensive consultation, says Ateev Mehrotra, an associate professor of health care policy and medicine at Harvard Medical School and a physician at Beth Israel Deaconess Medical Center. "It makes intuitive sense that [crowdsourced advice] would be better advice," he says, but how much better is an open scientific question. Still, he adds, "I think it's also important to acknowledge that physician diagnostic errors are fairly common." One of Mehrotra's Harvard colleagues has been studying how the AI-boosted Human Dx system performs in comparison with individual medical specialists, but has yet to publish the results.
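As a purely hypothetical sketch of the aggregation step described above, not Human Dx's published algorithm, the report-building stage can be thought of as combining several physicians' ranked differential diagnoses into one overall ranking, for example with a simple Borda-style count. The diagnoses and rankings below are invented for illustration.

```python
from collections import Counter

def aggregate(responses):
    """Combine ranked diagnosis lists into one overall ranking.

    Each response is a list ordered from most to least likely; a diagnosis
    earns more points the higher it appears in a given list.
    """
    scores = Counter()
    for ranking in responses:
        for position, diagnosis in enumerate(ranking):
            scores[diagnosis] += len(ranking) - position
    return [dx for dx, _ in scores.most_common()]

responses = [
    ["rheumatoid arthritis", "lupus", "osteoarthritis"],
    ["rheumatoid arthritis", "psoriatic arthritis"],
    ["lupus", "rheumatoid arthritis"],
]
print(aggregate(responses))
# ['rheumatoid arthritis', 'lupus', 'osteoarthritis', 'psoriatic arthritis']
```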

Mehrotra's cautionary note comes from research that he and Nundy published last year in JAMA Internal Medicine. That study used the Human Dx service as a neutral platform to compare the diagnostic accuracy of human physicians with third-party symptom checker Web sites and apps used by patients for self-diagnosis. In this case, the humans handily outperformed the symptom checkers computer algorithms. But even physicians provided incorrect diagnoses about 15 percent of the time, which is comparable with past estimates of physician diagnostic error.

Human Dx could eventually help improve the medical education and training of human physicians, says Sanjay Desai, a physician and director of the Osler Medical Training Program at Johns Hopkins University. As a first step in checking the service's capabilities, he and his colleagues ran a study whose preliminary results showed the app could tell the difference between the diagnostic abilities of medical residents and fully trained physicians. Desai wants to see the service become a system that could track the clinical performance of individual physicians and provide targeted recommendations for improving specific skills. Such objective assessments could be an improvement over the current method of human physicians qualitatively judging their less experienced colleagues. The open question, Desai says, is whether "the algorithms can be created to provide finer insights into an [individual] doctor's strengths and weaknesses in clinical reasoning."

Human Dx is one of many AI systems being tested in health care. The IBM Watson Health unit is perhaps the most prominent, with the company for the past several years claiming that its AI is assisting major medical centers and hospitals in tasks such as genetically sequencing brain tumors and matching cancer patients to clinical trials. Studies have shown AI can help predict which patients will suffer from heart attacks or strokes in 10 years or even forecast which will die within five. Tech giants such as Google have joined start-ups in developing AI that can diagnose cancer from medical images. Still, AI in medicine is in its early days and its true value remains to be seen. Watson appears to have been a success at Memorial Sloan Kettering Cancer Center, yet it floundered at The University of Texas M. D. Anderson Cancer Center, although it is unclear whether the problems resulted from the technology or its implementation and management.

The Human Dx Project also faces questions in achieving widespread adoption, according to Mehrotra and Desai. One prominent challenge involves getting enough physicians to volunteer their time and free labor to meet the potential rise in demand for remote consultations. Another possible issue is how Human Dx's AI quality control will address users who consistently deliver wildly incorrect diagnoses. The service will also require a sizable user base of medical specialists to help solve those trickier cases where general physicians may be at a loss.

In any case, the Human Dx leaders and the physicians helping to validate the platform's usefulness seem to agree that AI alone will not take over medical care in the near future. Instead, Human Dx seeks to harness both machine learning and the crowdsourced wisdom of human physicians to make the most of limited medical resources, even as the demands for medical care continue to rise. "The complexity of practicing medicine in real life will require both humans and machines to solve problems," Komarneni says, "as opposed to pure machine learning."

Twitter Working on Plan to Charge Users to Watch Videos

According to an internal email obtained by The Washington Post, Musk wants to have Twitter charge users to view videos posted by content creators.

Now that Tesla CEO Elon Musk has taken over Twitter, the billionaire has been frantically shuffling through ambitious plans to turn the ailing social media platform into a revenue-driving business.

Case in point: according to an internal email obtained by The Washington Post, Musk is plotting for Twitter to charge users to view videos posted by content creators and take a cut of the proceeds — a highly controversial idea that's already been met with internal skepticism.

In the email, a team of Twitter engineers "identified the risk as high," citing "risks related to copyrighted content, creator/user trust issues, and legal compliance."

In short, Musk is blazing ahead with his infamously ambitious timelines — a "move fast and break things" approach that could signify a sea change for Twitter's historically sluggish approach to launching new features.

Musk has already made some big structural changes to Twitter, having fired several of the company's top executives and dissolved its board of directors.

The company will also likely be facing mass layoffs, according to The Washington Post.

The feature detailed in the email, referred to as "Paywalled Video," allows creators to "enable the paywall once a video has been added to the tweet" and choose from a preset list of prices ranging from $1 to $10.

"This will also give Twitter a revenue stream to reward content creators," Musk tweeted on Tuesday, adding that "creators need to make a living!"

But whether Twitter users will be willing to pay for stuff that was previously free remains anything but certain.

Musk has already announced that he is planning to charge $8 a month for Twitter users to stay verified, which has been met with derision.

The billionaire CEO is facing an uphill battle. Now that the company is private, he has to pay around $1 billion in annual interest payments, a result of his $44 billion buyout, according to the WaPo.
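As a rough sanity check on that figure (our estimate, not the Post's): the buyout was reported at the time to include roughly $13 billion in bank debt, and at a blended interest rate in the high single digits the arithmetic lands right around the reported number:

$\$13\ \text{billion} \times 0.08 \approx \$1\ \text{billion per year in interest.}$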

Compounding the trouble, Reuters reported last week that Twitter is bleeding some of its most active users.

Meanwhile, Musk's chaotic moves are likely to alienate advertisers, with Interpublic Group, one of the world's largest advertising holding companies, recommending that its clients suspend all paid advertising on the platform for at least a week.

That doesn't bode well. It's not out of the question that a paywalled video feature may facilitate the monetization of pornographic content, which may end up scaring off advertisers even further — but Twitter's exact intentions for the feature are still unclear.

According to Reuters, around 13 percent of the site's content is currently marked not safe for work (NSFW).

The paywall push is part of Musk's attempt to shift the platform's revenue away from advertising. In a tweet last week, he promised advertisers that Twitter wouldn't become a "free-for-all hellscape."

But that hasn't stopped advertisers from already leaving in droves.

All in all, a paywalled video feature could mark a significant departure for Twitter, a platform still primarily known for short snippets of text.

For now, all we can do is watch.

READ MORE: Elon Musk’s Twitter is working on paid-video feature with ‘high’ risk [The Washington Post]

More on Twitter: Elon Musk Pleads With Stephen King to Pay for Blue Checkmark

This Deepfake AI Singing Dolly Parton’s "Jolene" Is Worryingly Good

Holly Herndon uses her AI twin Holly+ to sing a cover of Dolly Parton's "Jolene."

AI-lands in the Stream

Sorry, but not even Dolly Parton is sacred amid the encroachment of AI into art.

Holly Herndon, an avant-garde pop musician, has released a cover of Dolly Parton's beloved and frequently covered hit single, "Jolene." Except it's not really Herndon singing, but her digital deepfake twin known as Holly+.

The music video features a 3D avatar of Holly+ frolicking in what looks like a decaying digital world.

And honestly, it's not bad — dare we say, almost kind of good? Herndon's rendition croons with a big, round sound, soaked in reverb and backed by a bouncy, acoustic riff and a chorus of plaintive wailing. And she has a nice voice. Or, well, Holly+ does. Maybe predictably indie-folk, but it's certainly an effective demonstration of AI with a hint of creative flair, or at least effective curation.

Checking the Boxes

But the performance is also a little unsettling. For one, the giant inhales between verses are too long to be real and come off as theatrically overdone. The vocals themselves are strangely even and, despite the somber tone affected by the AI, lack Parton's iconic vulnerability.

Overall, it feels like the AI is simply checking the boxes of what makes a good, swooning cover after listening to Jeff Buckley's "Hallelujah" a million times — which, to be fair, is a pretty good starting point.

Still, it would be remiss to downplay what Herndon has managed to pull off here, and the criticisms reflect the AI's limited capabilities more than her chops as a musician. The AI's seams are likely intentional, if her previous work is anything to go by.

Either way, if you didn't know you were listening to an AI from the get-go, you'd probably be fooled. And that alone is striking.

The Digital Self

Despite AI's usually ominous implications for art, Herndon views her experiment as a "way for artists to take control of their digital selves," according to a statement on her website.

"Vocal deepfakes are here to stay," Herndon was quoted saying. "A balance needs to be found between protecting artists, and encouraging people to experiment with a new and exciting technology."

Whether Herndon's views are fatalistic or prudently pragmatic remains to be seen. But even if her intentions are good for artists, it's still worrying that an AI could pull off such a convincing performance.

More on AI music: AI That Generates Music from Prompts Should Probably Scare Musicians

Manslaughter Case Has a Strange Twist: Tesla That Killed Couple Was on Autopilot

A court case is about to kick off in Los Angeles later this month, involving a fatal crash caused by a Tesla vehicle, which was on Autopilot.

A provocative manslaughter case is about to kick off in Los Angeles later this month, involving a fatal crash caused by a Tesla vehicle that had the company's controversial Autopilot feature turned on.

It's the first case of its kind, and one that could set a precedent for future crashes involving cars and driver-assistance software, Reuters reports.

We won't know the exact defense until the case gets under way, but the crux is this: the man who was behind the wheel of the Tesla is facing manslaughter charges and has pleaded not guilty, setting up potentially novel legal arguments about culpability in a deadly collision when, technically speaking, it wasn't a human driving the car.

"Who's at fault, man or machine?" asked Edward Walters, an adjunct professor at the Georgetown University, in an interview with Reuters. "The state will have a hard time proving the guilt of the human driver because some parts of the task are being handled by Tesla."

The upcoming trial concerns a fatal collision that took place in 2019, when Kevin George Aziz Riad ran a red light in his Tesla Model S and collided with a Honda Civic, killing a couple who were reportedly on their first date.

According to vehicle data, Riad did not apply the brakes but had a hand on the steering wheel. Perhaps most critically, though, the Tesla's Autopilot feature was turned on in the moments leading up to the crash.

Riad is facing manslaughter charges, with prosecutors arguing his actions were reckless.

Meanwhile, Riad's lawyers have argued that he shouldn't be charged with a crime, but have so far stopped short of publicly placing blame on Tesla's Autopilot software.

Tesla is not directly implicated in the upcoming trial and isn't facing charges in the case, according to Reuters.

A separate trial, however, involving the family of one of the deceased is already scheduled for next year — but this time, Tesla is the defendant.

"I can't say that the driver was not at fault, but the Tesla system, Autopilot, and Tesla spokespeople encourage drivers to be less attentive," the family's attorney Donald Slavik told Reuters.

"Tesla knows people are going to use Autopilot and use it in dangerous situations," he added.

Tesla is already under heavy scrutiny over its Autopilot and so-called Full Self-Driving software, even though the company concedes that the features "do not make the vehicle autonomous" and that drivers must remain attentive to the road at all times.

Critics argue that Tesla's marketing is misleading and that it's only leading to more accidents — not making the roads safer, as Tesla CEO Elon Musk has argued in the past.

In fact, a recent survey found that 42 percent of Tesla Autopilot drivers said they feel "comfortable treating their vehicles as fully self-driving."

Regulators are certainly already paying attention. The news comes a week after Reuters revealed that the Department of Justice is investigating Tesla over Autopilot.

Last year, the National Highway Traffic Safety Administration (NHTSA) announced an investigation of accidents in which Teslas have smashed into emergency response vehicles that were pulled over with sirens or flares.

This month's trial certainly stands the chance of setting a precedent. Was Riad fully at fault or was Tesla's Autopilot at least partially to blame as well?

The answer now lies in the hands of a jury.

READ MORE: Tesla crash trial in California hinges on question of 'man vs machine' [Reuters]

More on Autopilot: Survey: 42% of Tesla Autopilot Drivers Think Their Cars Can Drive Themselves

Greta Thunberg Says UN Climate Conference Is a Scam and She’s Not Attending

The UN's upcoming COP27 climate conference in Egypt is basically a greenwashing exercise, Greta Thunberg says, and she won't be attending.

COP Out

Ever since she lambasted world leaders at a UN conference in 2018 when she was only 15 years old, Swedish environmental activist Greta Thunberg has had the ear of the international community.

Now, Thunberg says she's skipping out on next week's COP27 UN climate summit in Egypt. Why? Because it's rife with "greenwashing."

"I'm not going to COP27 for many reasons, but the space for civil society this year is extremely limited," Thunberg said at a press event for her book, "The Climate Book," as quoted by The Guardian. "The COPs are mainly used as an opportunity for leaders and people in power to get attention, using many different kinds of greenwashing."

Ultimately, in Thunberg's view, the COP conferences "are not really meant to change the whole system" and instead only promote incremental change. Bluntly put, they're feel-good events that don't accomplish much, so she's bowing out.

Wasted Breath

It's not an unfair assessment. For all the pledges made to drastically cut emissions and reach net-zero carbon by 2050, very few nations have followed through in the short term. And in Europe, the energy crisis in the wake of the war in Ukraine has further sidelined those climate commitments.

So we can't blame her for not going. But it's a bit disheartening that even a tenacious young spokesperson like Thunberg has given up on convincing world leaders at the biggest climate summit in the world.

Maybe it's indicative of the frustrations of her generation at large. When Thunberg was asked what she thought about the recent wave of Just Stop Oil protests, including activists throwing soup on a Van Gogh painting, she said that what many detractors dismissed as a dumb stunt was, in her view, symptomatic of the world's failure to effect meaningful environmental change.

"People are trying to find new methods because we realize that what we have been doing up until now has not done the trick," she replied, as quoted by Reuters. "It's only reasonable to expect these kinds of different actions."

Maybe the real question is: if even a UN climate conference isn't the place to get the message out and change hearts, where's the right place, and what's the right way? If the headlines are any indication, zoomers are struggling to figure that out.

More on Greta Thunberg: Greta Thunberg Thinks Germany Shutting Down Its Nuclear Plants Is a Bad Idea

There’s Something Strange About How These Stars Are Moving, Scientists Say

Astronomers are puzzled by the strange behavior of a crooked cluster of stars, which appears to be following an alternative theory of gravity.

Astronomers are puzzled by the strange behavior of certain crooked clusters of stars, which appear to be violating our conventional understanding of gravity.

Massive clusters of stars usually are bound together in spirals at the center of galaxies. Some of these clusters fall under a category astrophysicists call open star clusters, which are created in a relatively short period of time as they ignite in a huge cloud of gas.

During this process, loose stars accumulate in a pair of "tidal tails," one of which is being pulled behind, while the other moves ahead.

"According to Newton’s laws of gravity, it’s a matter of chance in which of the tails a lost star ends up," Jan Pflamm-Altenburg of the University of Bonn in Germany, co-author of a new paper published in the Monthly Notices of the Royal Astronomical Society, in a statement. "So both tails should contain about the same number of stars."

But some of their recent observations seemingly defy conventional physics.

"However, in our work we were able to prove for the first time that this is not true," Pflamm-Altenburg added. "In the clusters we studied, the front tail always contains significantly more stars nearby to the cluster than the rear tail."

In fact, their new findings are far more in line with a different theory called "Modified Newtonian Dynamics" (MOND).
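For context, the standard formulation of MOND (a textbook statement of the theory, not something specific to this paper) modifies Newtonian dynamics only at accelerations far below a characteristic scale $a_0$:

$\mu\!\left(\frac{a}{a_0}\right) a = a_N, \qquad a_0 \approx 1.2 \times 10^{-10}\ \mathrm{m\,s^{-2}},$

where $a_N$ is the usual Newtonian acceleration and the interpolation function satisfies $\mu(x) \to 1$ for $x \gg 1$ (recovering Newton) and $\mu(x) \to x$ for $x \ll 1$, so that $a \approx \sqrt{a_N a_0}$ in the deep-MOND regime. The loosely bound outskirts of an open star cluster sit in this low-acceleration regime, which is why the escape of stars can differ between the two theories.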

"Put simply, according to MOND, stars can leave a cluster through two different doors," Pavel Kroupa, Pflamm-Altenburg's colleague at the University of Bonn and lead author, explained in the statement. "One leads to the rear tidal tail, the other to the front."

"However, the first is much narrower than the second — so it’s less likely that a star will leave the cluster through it," he added. "Newton’s theory of gravity, on the other hand, predicts that both doors should be the same width."

The researchers' simulations, which take MOND into consideration, could explain a lot. For one, they suggest that open star clusters survive for a much shorter time than Newton's laws would predict.

"This explains a mystery that has been known for a long time," Kroupa explained. "Namely, star clusters in nearby galaxies seem to be disappearing faster than they should."

But not everybody agrees that Newton's laws should be replaced with MOND, something that could shake the foundations of physics.

"It’s somewhat promising, but it does not provide completely definitive evidence for MOND," University of Saint Andrews research fellow Indranil Banik told New Scientist. "This asymmetry does make more sense in MOND, but in any individual cluster there could be other effects that are causing it — it’s a bit unlikely that would happen in all of them, though."

The researchers are now trying to home in on an even more accurate picture by further refining their simulations, which could either support their MOND interpretation — or conclude that Newton was, in fact, correct the first time around.

More on star clusters: Something Is Ripping Apart the Nearest Star Cluster to Earth

NASA Sets Launch Date for Mission to $10 Quintillion Asteroid

After disappointing setbacks and delays, NASA has finally got its mission to an invaluable asteroid made of precious metals back on track.

Rock of Riches

After disappointing setbacks and a delay over the summer, NASA says it's finally reviving its mission to explore a tantalizing and giant space rock lurking deep in the Asteroid Belt.

Known as 16 Psyche, the NASA-targeted asteroid comprises a full one percent of the mass of the Asteroid Belt and is speculated to be the core of an ancient planet. But Psyche's size isn't what intrigues scientists so much as its metal-rich composition, believed to harbor a wealth of iron, nickel, and gold worth an estimated $10 quintillion — easily exceeding the worth of the Earth's entire economy. To be clear, though, researchers aren't interested in the metals' monetary value but rather in the asteroid's possible planetary origins.
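For a rough sense of scale (our back-of-the-envelope comparison, not the mission team's): annual global economic output is on the order of $100 trillion, so the quoted valuation exceeds it by a factor of roughly 100,000:

$\$10\ \text{quintillion} = \$10^{19}, \qquad \$100\ \text{trillion} = \$10^{14}, \qquad 10^{19} / 10^{14} = 10^{5}.$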

Back On Track

Initially slated to launch in August 2022, NASA's aptly named Psyche spacecraft was plagued by a persistent flight software issue that caused the space agency to miss its launch window, which closed on October 11.

But after the mission survived an independent review to determine whether it should be scrapped, NASA has formally announced that the spacecraft's journey to Psyche will go ahead, with launch aboard a SpaceX Falcon Heavy rocket planned for as early as October 10, 2023.

"I'm extremely proud of the Psyche team," said Laurie Leshin, director of NASA's Jet Propulsion Laboratory, in a statement. "During this review, they have demonstrated significant progress already made toward the future launch date. I am confident in the plan moving forward and excited by the unique and important science this mission will return."

Although the new launch date is only a little over a year late, the expected arrival at the asteroid is set back by more than three years — 2029 instead of 2026 — because the spacecraft will have to wait for another opportunity to slingshot off Mars' gravity.

Peering Into a Planet

Once it arrives, the NASA spacecraft will orbit around the asteroid and probe it with an array of instruments, including a multispectral imager, gamma ray and neutron spectrometers, and a magnetometer, according to the agency.

In doing so, scientists hope to determine if the asteroid is indeed the core of a nascent planet known as a planetesimal. If it is, it could prove to be an invaluable opportunity to understand the interior of terrestrial planets like our own.

More on NASA: NASA Announces Plan to Fix Moon Rocket, and Maybe Launch It Eventually

China Plans to Send Monkeys to Space Station to Have Sex With Each Other

Chinese scientists are reportedly planning to send monkeys to the country's brand-new space station to see whether the animals can mate and reproduce in orbit.

Chinese scientists are reportedly planning to send monkeys to the country's new Tiangong space station for experiments that will involve the animals mating and potentially reproducing, the South China Morning Post reports.

It's a fascinating and potentially controversial experiment that could have major implications for our efforts to colonize space: can mammals, let alone humans, successfully reproduce beyond the Earth?

According to the report, the experiment would take place in the station's largest capsule, called Wentian, inside two biological test cabinets that can be expanded.

After examining the behavior of smaller creatures, "some studies involving mice and macaques will be carried out to see how they grow or even reproduce in space," Zhang Lu, a researcher at the Chinese Academy of Sciences in Beijing, said during a speech posted to social media earlier this week, as quoted by the SCMP.

"These experiments will help improve our understanding of an organism’s adaptation to microgravity and other space environments," he added.

Some simpler organisms, including nematodes and Japanese rice fish, have been observed reproducing in space.

But more complex life forms have struggled. In 2014, a Russian experiment to see whether geckos could produce offspring in space failed when all the critters died.

And the failure rate for mammals, so far, has been total. Soviet scientists got mice to mate during a space flight in 1979, but none of them gave birth after being returned to Earth.

In other words, getting monkeys to reproduce on board a space station will be anything but easy. For one, just dealing with living creatures in space can pose immense challenges. The astronauts will "need to feed them and deal with the waste," Kehkooi Kee, a professor with the school of medicine at Tsinghua University, told the SCMP.

Then there's the fact that astronauts will have to keep the macaques happy and comfortable, something that experts say will be challenging since long-term confinement in the spartan environments of space habitats could cause immense stress for the simians.

And even if astronauts successfully set the mood for the monkeys, the physics of sex in space are predicted to be challenging.

"Firstly, just staying in close contact with each other under zero gravity is hard," Adam Watkins, an associate professor of reproductive physiology at University of Nottingham, wrote in a 2020 open letter highlighted by the SCMP. "Secondly, as astronauts experience lower blood pressure while in space, maintaining erections and arousal are more problematic than here on Earth."

With its new space station in nearly full operation, China isn't shying away from asking some big questions — but whether these experiments will play out as expected is anything but certain.

READ MORE: Chinese scientists plan monkey reproduction experiment in space station [South China Morning Post]

More on sex in space: Scientists Say We Really Have to Talk About Boning in Space

Elon Musk Meeting With Advertisers, Begging Them Not to Leave Twitter

Advertisers are fleeing Twitter in droves now that Tesla CEO Elon Musk has taken over control. Now, he's trying to pick up the pieces and begging them to return.

Advertisers are fleeing Twitter in droves now that Tesla and SpaceX CEO Elon Musk has taken over control.

Ever since officially closing the $44 billion deal, Musk has been busy gutting the company's executive suite and dissolving its board. Senior executives, as well as Twitter's advertising chief Sarah Personette, have departed as well.

After all, Musk has been very clear about his disdain for advertising for years now.

The resulting uncertainty has advertisers spooked — major advertising holding company IPG has already advised clients to pull out temporarily — and the billionaire CEO is in serious damage-control mode.

Now, Reuters reports, Musk is spending most of this week meeting with advertisers in New York, trying to reassure them that Twitter won't turn into a "free-for-all hellscape."

According to one of Reuters' sources, the meetings have been "very productive" — but plenty of other marketers are far from satisfied.

Advertisers are reportedly grilling Musk over his plans to address the rampant misinformation being spread on the platform, a trend that Musk himself has been actively contributing to since the acquisition.

And if he's succeeding in placating advertisers in private, he's antagonizing them publicly. On Wednesday, Musk posted a poll asking users whether advertisers should support either "freedom of speech" or "political 'correctness'" — a false dichotomy that echoes the rhetoric of far-right conspiracy theorists and conservative pundits.

"Those type of provocations are not helping to calm the waters," an unnamed media buyer told Reuters.

Some are going public with the same sentiment.

"Unless Elon hires new leaders committed to keeping this 'free' platform safe from hate speech, it's not a platform brands can/should advertise on," Allie Wassum, global media director for the Nike-owned shoe brand Jordan, wrote in a LinkedIn post.

So far, Musk's plans for the social media platform remain strikingly muddy. In addition to the behind-the-scenes advertising plays, he's also announced that users will have to pay to retain their verification badge, though he's engaged in a comically public negotiation as to what the cost might be.

He's also hinted that previously banned users — former US president Donald Trump chief among them — might eventually get a chance to return, but only once "we have a clear process for doing so, which will take at least a few more weeks."

The move was seen by many as a way to wait out the impending midterm elections. After all, Twitter has played a huge role in disseminating misinformation and swaying elections in the past.

While advertisers are running for the hills, to Musk, advertising is clearly only a small part of the picture — even though, historically, social giants like Twitter have struggled to diversify their revenue sources much beyond display ads.

Musk nodded to that reality in a vague open letter posted last week.

"Low relevancy ads are spam, but highly relevant ads are actually content!" he wrote in the note, addressed to "Twitter advertisers."

Big picture, Twitter's operations are in free fall right now and Musk has yet to provide advertisers with a cohesive plan to pick up the pieces.

While he's hinted at the creation of a new content moderation council made up of both "people from all viewpoints" and "wildly divergent views," advertisers are clearly going to be thinking twice about continuing their business with Twitter.

With or without advertising, Twitter's finances are reportedly in a very deep hole. The billions of dollars Musk had to borrow to finance his mega acquisition will cost Twitter around $1 billion a year in interest alone.

The company also wasn't anywhere near profitable before Musk took over, losing hundreds of millions of dollars in a single quarter.

Whether that picture will change any time soon is as unclear as ever, especially in the face of a wintry economy.

But, of course, Musk has proved his critics wrong before. So anything's possible.

READ MORE: Advertisers begin to grill Elon Musk over Twitter 'free-for-all' [Reuters]

More on the saga: Elon Musk Pulling Engineers From Tesla Autopilot to Work on Twitter

Scientists Use Actual Lunar Soil Sample to Create Rocket Fuel

A team of Chinese researchers claim to have turned lunar regolith samples brought back by the country's Chang'e 5 mission into a source of fuel.

Fill 'Er Up

A team of Chinese researchers say they managed to convert actual lunar regolith samples into a source of rocket fuel and oxygen — a potential gamechanger for future space explorers hoping to make use of in-situ resources to fuel up for their return journey.

The researchers found that the lunar soil samples can act as a catalyst to convert carbon dioxide and water from astronauts' bodies and environment into methane and oxygen, as detailed in a paper published in the National Science Review.

"In situ resource utilization of lunar soil to achieve extraterrestrial fuel and oxygen production is vital for the human to carry out Moon exploitation missions," lead author Yujie Xiong said in a new statement about the work. "Considering that there are limited human resources at extraterrestrial sites, we proposed to employ the robotic system to perform the whole electrocatalytic CO2 conversion system setup."

That means we could have a much better shot at carrying out longer duration explorations of the lunar surface in the near future.

Set It, Forget It

According to the paper, which builds on previous research suggesting lunar soil can generate oxygen and fuel, this process can be completed using uncrewed systems, even in the absence of astronauts.

In an experiment, the team used samples from China's Chang'e-5 mission, whose return capsule landed in Inner Mongolia back in December 2020 carrying the first lunar soil returned to Earth since 1976.

The Moon soil effectively acted as a catalyst, enabling the electrocatalytic conversion of carbon dioxide into methane and oxygen.
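In terms of net stoichiometry (the overall reaction implied by the description above, not necessarily the exact catalytic pathway the team reports), the process amounts to combining carbon dioxide reduction with water splitting:

$\mathrm{CO_2 + 2\,H_2O \;\longrightarrow\; CH_4 + 2\,O_2},$

with the methane usable as fuel and the oxygen as oxidizer or breathing gas.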

"No significant difference can be observed between the manned and unmanned systems, which further suggests the high possibility of imitating our proposed system in extraterrestrial sites and proves the feasibility of further optimizing catalyst recipes on the Moon," the researchers conclude in their paper.

Liquified

But there's one big hurdle still to overcome: liquefying carbon dioxide is anything but easy under the frigid conditions on the Moon, since condensing the gas requires a significant amount of heat, as New Scientist reported earlier this year.

Still, it's a tantalizing prospect: an autonomous machine chugging away, pumping out oxygen and fuel for future visitors. But for now, it's not much more than a proof of concept.

READ MORE: Scientists investigate using lunar soils to sustainably supply oxygen and fuels on the moon [Science China Press]

More on lunar soil: Bad News! The Plants Grown in Moon Soil Turned Out Wretched

Cats May Be Tampering With Crime Scenes, Scientists Say

Cats, ever the mischievous and frisky pets, may be harboring a lot more human DNA than once thought, possibly contaminating crime scenes, a new study says.

Cat Burglar

Cats are known for not really minding their own business, getting their furry paws on just about anything they can.

And it turns out, this makes them effective vectors for DNA evidence, according to a study published last month in the journal Forensic Science International: Genetic Supplement Series.

Researchers collaborating with the Victoria Police Forensic Services Department in Australia found detectable human DNA in 80 percent of the samples collected from 20 pet cats, with 70 percent of the samples strong enough that they could be linked to a person of interest in a crime scene investigation.

"Collection of human DNA needs to become very important in crime scene investigations, but there is a lack of data on companion animals such as cats and dogs in their relationship to human DNA transfer," said study lead author Heidi Monkman, a forensic scientist at Flinders University, in a statement.

"These companion animals can be highly relevant in assessing the presence and activities of the inhabitants of the household, or any recent visitors to the scene."

Here Kitty

One possible takeaway is that cats — and other companion pets like dogs — could be harboring DNA that could help solve a case.

The bigger issue, though, is that pets could introduce foreign DNA that muddles a crime scene, possibly leading to an innocent person being implicated. A pet could be carrying the DNA of a complete stranger, or it might bring the DNA of its owner into a crime scene that they had nothing to do with.

Monkman's colleague and co-author of the paper, Maria Goray, is an experienced crime scene investigator and an expert in DNA transfer. She believes their findings could help clear up how pets might contaminate a crime scene by carrying outside DNA.

"Are these DNA findings a result of a criminal activity or could they have been transferred and deposited at the scene via a pet?" Goray asked.

It's a question worth asking — especially because innocent people have been jailed on the basis of botched DNA science far too often.

More on DNA evidence: Cops Upload Image of Suspect Generated From DNA, Then Delete After Mass Criticism
