This Startup Is Lowering Companies Healthcare Costs With AI – Entrepreneur

Healthcare costs are rapidly increasing. Companies that provide health insurance for their employees have been hit with higher and higher premiums every year, with no end in sight.

One Chicago-based startup experiencing explosive growth has been tackling this very problem. This company leverages artificial intelligence and chatbot technology to help employees navigate their health insurance and use less costly services. As a result, both the employee and employer end up saving money.

Justin Holland, CEO and co-founder of HealthJoy, has a strong grasp on how chatbots are going to change healthcare and save companies money in the process. I spoke with Holland to get his take on what CEOs need to know about their health benefits and how to contain costs.


What's the biggest problem with employer-sponsored health insurance? Why have costs gone up year after year, faster than the rate of inflation?

One of the biggest issues for companies is that health insurance is kind of like giving your employees a credit card to go to a restaurant that doesn't have any prices. They are going to order whatever the waiter suggests that sounds good. They'll order the steak and lobster, a bottle of wine and dessert. Employees have no connection to the actual cost of any of the medical services they are ordering. Several studies show that the majority of employees don't understand the basic insurance terms needed to navigate insurance correctly. And it's not their fault. The system is unnecessarily complex. Companies have finally started to realize that if they want to lower their healthcare costs, they need to lower their claims. The only way they are going to do that is by educating their employees and helping them navigate the healthcare system. They need to provide advocates and other services that are always available to help.


I've had an advocacy service previously that was just a phone number, and I never used it. I actually forgot to use it all year and only remembered I had it when they changed my insurance plan and I saw the paperwork again. How is HealthJoy different? Is this where chatbots come in?

Phone-based advocacy services are great, but you've identified their biggest problem: no one uses them. They are cheap to provide, so a lot of companies will bundle them into their employee benefits packages, but they have zero ROI or utilization. Our chatbot JOY is the hub for a lot of different employee benefits, including advocacy. JOY's main job is to route people to higher-quality, less expensive care. She is fully supported by our concierge staff here in Chicago. They do things like call doctors' offices to book appointments, verify network participation and much more. Our app is extremely easy to use and has been refined over the last three years to get maximum engagement and utilization from our members.


I've played around with your app. You offer a lot more than just an advocacy service. I see that you can also speak with a doctor in the app.

Yes, advocacy through JOY and our concierge team really is just the glue that binds our cost-saving strategies. We also integrate telemedicine within the app so an employee can speak with a doctor 24/7 for free. This is another way we save companies money. We avoid those cases where someone needs to speak with a doctor in the middle of the night for a non-emergency and ends up at the emergency room or urgent care. Avoiding one trip to the emergency room can save thousands of dollars. Telemedicine has been around for a few years but, like advocacy, getting employees to use it has always been the big issue. Since we are the first stop for employees' healthcare needs, we can redirect them to telemedicine when it fits. We actually get over 50% of our telemedicine consults when a member is trying to do something else. For example, they might be trying to verify whether a dermatologist is within their insurance plan. We'll ask them if they want to take a photo of an issue and have an instant consultation with one of our doctors. This is one of the reasons that employers are now seeing utilization rates that are sometimes 18X the industry standard. Redirecting all these consultations online is a huge saving for companies.


What other services do you provide within the app?

We actually offer a lot of services, and the list is constantly growing. Employers can even integrate their existing offerings as well. Healthcare is best delivered as a conversation, and that's why our AI-powered chatbot is perfect for servicing such a wide variety of offerings. The great thing is that it's all delivered within an app that looks no more complex than Facebook Messenger or iMessage.

Right now we do medical bill reviews and prescription drug optimization. We'll find the lowest prices for a procedure, help people with their health savings accounts and push wellness information. Our platform is like an operating system for healthcare engagement. The more we can engage with a company's employees on their healthcare needs, the more money we can save both the employer and the employees.


It sounds like you're trying to build the Siri of healthcare, no?

In a way, yes. Basically, we are trying to help employers reduce their healthcare costs by providing their employees with an all-in-one mobile app that promotes smart healthcare decisions. JOY will proactively engage employees, connect them with our benefits concierge team and redirect them to lower-cost care options like telemedicine. We integrate each client's benefits package and wellness programs to deliver a highly personalized experience that drives real ROI and improves workplace health.

So if a company wants to launch HealthJoy to their employees, do they need to just tell them to download your app?

We distribute HealthJoy to companies exclusively through benefits advisors, who are experts in developing plan designs and benefits strategies that work, both for employees and the bottom line. We always want HealthJoy to be integrated within a thoughtful strategy that leverages the expertise the benefits advisor provides, and we rely on them to upload current benefits and plan information.

Marsha is a growth marketing expert, business advisor and speaker specializing in international marketing.


AI sale plan details sought from Centre – The Hindu

A Parliamentary Standing Committee has sought details from the government on its strategic disinvestment plans for national carrier Air India.

The department-related Parliamentary Standing Committee on Transport, Tourism and Culture, chaired by Rajya Sabha Member of Parliament Mukul Roy, is set to meet the Central government officials on Friday.

"To hear the views of the Ministry of Civil Aviation, Department of Investment and Public Asset Management (Ministry of Finance) and Air India on Disinvestment of Air India," the agenda of the meeting said.

The Cabinet Committee on Economic Affairs (CCEA), chaired by Prime Minister Narendra Modi, on June 28 gave its in-principle approval for the strategic disinvestment of Air India and its subsidiaries.

The CCEA also set up a group of ministers under Finance Minister Arun Jaitley to examine the modalities of the national carrier's stake sale. The ministerial group will decide on the treatment of Air India's unsustainable debt, the hiving off of certain assets to a shell company, the de-merger and strategic disinvestment of three profit-making subsidiaries, the quantum of disinvestment and the universe of bidders.

Minister of State for Civil Aviation Jayant Sinha told the Rajya Sabha on Tuesday that the decision to divest a stake in Air India was based on government think-tank NITI Aayog's recommendations in May this year.

In its recommendations, the Aayog had given the rationale for the disinvestment of Air India, attributing the main reason to the fragile finances of the company. "AI has been incurring continuous losses and has huge accumulated losses," Mr. Sinha said in a written reply.

"Further, NITI Aayog in its report on Air India says that further support to an unviable non-priority company in a matured and competitive aviation sector would not be the best use of scarce financial resources of the Government," Mr. Sinha added.

Mr. Sinha said in the Lok Sabha on Thursday that Air India's market share on domestic routes has fallen from 17.9% in 2014-15 to 14.2% in 2016-17.

Air India has accumulated total debt of ₹48,876 crore till March 2017. The national carrier has been reporting continuous losses due to its high debt, with its net loss at ₹3,728 crore in 2016-17 compared with ₹3,836 crore in 2015-16.

Hours after the Union Cabinet gave its nod to Air India's strategic disinvestment, India's largest low-cost carrier, IndiGo, expressed interest in acquiring a stake in its airline business, mainly related to its international operations. Tata Sons was also reportedly in talks with the government seeking details on the national carrier's strategic disinvestment.


Google Researchers Create AI-ception with an AI Chip That Speeds Up AI – Interesting Engineering

Reinforcement learning algorithms may be the next best thing since sliced bread for engineers looking to improve chip placement.

Researchers from Google have created a new algorithm that has learned how to optimize the placement of the components in a computer chip, so as to make it more efficient and less power-hungry.


Typically, engineers can spend up to 30 hours configuring a single chip floor plan, a process known as chip floorplanning. This complicated design problem requires configuring hundreds or even thousands of components across a number of layers in a constrained area. Engineers manually design configurations that minimize the amount of wire used between components, as a proxy for efficiency.

Because the process is so time-consuming, chips are designed to last between two and five years. However, as machine-learning algorithms improve year after year, a need for new chip architectures has also arisen.

Facing these challenges, Google researchers Anna Goldie and Azalia Mirhoseini looked to reinforcement learning. Algorithms of this type use positive and negative feedback to learn complicated tasks: the algorithm is "rewarded" or "punished" depending on how well it performs. It then generates tens of thousands to hundreds of thousands of new designs, and ultimately learns an optimal strategy for placing chip components.
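The reward loop described here can be illustrated with a toy sketch. This is not Google's actual method, which trains a policy network over real netlists; the tiny netlist, grid size and function names below are invented for illustration. The placement's total Manhattan wirelength acts as the negative reward: proposed moves that shrink it are kept, moves that grow it are reverted.

```python
import random

# Hypothetical netlist: pairs of components that must be wired together.
NETS = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
NUM_COMPONENTS = 4
GRID = 8  # components are placed on an 8x8 grid

def wirelength(placement):
    """Total Manhattan distance over all nets -- the proxy engineers minimize."""
    return sum(abs(placement[a][0] - placement[b][0]) +
               abs(placement[a][1] - placement[b][1])
               for a, b in NETS)

def random_placement(rng):
    """Drop each component onto a distinct random grid cell."""
    return rng.sample([(x, y) for x in range(GRID) for y in range(GRID)],
                      NUM_COMPONENTS)

def optimize(steps=3000, seed=0):
    """Reward-driven search: propose a move, keep it only if reward improves.

    Real systems replace the random proposals with a learned policy network,
    but the feedback signal (negative wirelength) plays the same role.
    """
    rng = random.Random(seed)
    placement = random_placement(rng)
    best = wirelength(placement)
    for _ in range(steps):
        i = rng.randrange(NUM_COMPONENTS)
        old = placement[i]
        candidate = (rng.randrange(GRID), rng.randrange(GRID))
        if candidate in placement:
            continue  # enforce one component per cell
        placement[i] = candidate
        reward = -wirelength(placement)
        if -reward <= best:      # positive feedback: keep the improvement
            best = -reward
        else:                    # negative feedback: revert the move
            placement[i] = old
    return placement, best

if __name__ == "__main__":
    final, length = optimize()
    print("final wirelength:", length)
```

For this netlist the best achievable wirelength is 6 (the four components arranged in a unit square), and the greedy loop converges close to it within a few thousand proposals.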

After their tests, the researchers checked their designs with electronic design automation software and found that their method's floor plans were much more effective than those designed by human engineers. Moreover, the system was able to teach its human colleagues a new trick or two.

Progress in AI has been largely interlinked with progress in computer chip design. The researchers hope that their new algorithm will help speed up the chip design process and pave the way for new and improved architectures, ultimately accelerating AI.


John Lennox: 2084 and AI – mapping out the territory – The Irish News

WE humans are insatiably curious. We have been asking questions since the dawn of history. We've especially been asking the big questions about origin and destiny: Where do I come from and where am I going?

Their importance is obvious. Our answer to the first shapes our concepts of who we are, and our answer to the second gives us goals to live for.

Taken together, our responses to these questions help frame our worldview, the narrative that gives our lives their meaning.

The problem is that these are not easy questions, as we see from the fact that many and contradictory answers are on offer.

Yet, by and large, we have not let that hinder us. Over the centuries, humans have proposed some answers given by science, some by philosophy, some based on religion, others on politics, etc.

Two of the most famous futuristic scenarios are Aldous Huxley's novel Brave New World, published in 1932, and George Orwell's novel 1984, published in 1949.

Both of them have, at various times, been given very high ranking as influential English novels. For instance, Orwell's was chosen in 2005 by Time magazine as one of the 100 best English-language novels from 1923 to 2005.

Both novels are dystopian: that is, according to the Oxford English Dictionary, "they describe an imaginary place or condition that is as bad as possible".

However, the really bad places that they describe are very different, and their differences, which give us helpful insights that will be useful to us later, were succinctly explained by sociologist Neil Postman in his highly regarded work Amusing Ourselves to Death: "Orwell warns that we will be overcome by an externally imposed oppression.

"But in Huxley's vision, no Big Brother is required to deprive people of their autonomy, maturity and history. As he saw it, people will come to love their oppression, to adore the technologies that undo their capacities to think.

"What Orwell feared were those who would ban books. What Huxley feared was that there would be no reason to ban a book, for there would be no-one who wanted to read one.

"Orwell feared those who would deprive us of information. Huxley feared those who would give us so much that we would be reduced to passivity and egoism.

"Orwell feared that the truth would be concealed from us. Huxley feared that the truth would be drowned in a sea of irrelevance.

"Orwell feared we would become a captive culture. Huxley feared we would become a trivial culture... In short, Orwell feared that what we hate will ruin us. Huxley feared that what we love will ruin us."

Orwell introduced ideas of blanket surveillance in a totalitarian state, of "thought control" and "newspeak", ideas that nowadays increasingly come up in connection with developments in artificial intelligence (AI), particularly the attempt to build computer technology that can do the sorts of things that a human mind can do - in short, the production of an imitation mind.

Billions of dollars are now being invested in the development of AI systems, and not surprisingly, there is a great deal of interest in where it is all going to lead...


BSC Participates in State Observatory Initiative Leveraging Big Data and AI That Aims to Detect, Prevent Epidemics – HPCwire

July 17, 2020 - Catalonia has created the first State Epidemiological Observatory, which will use Big Data and artificial intelligence techniques to generate a new collection of innovative epidemiological models that help public health institutions prevent, detect early and mitigate the spread of epidemics.

This public-private initiative, which is part of the Catalonia.AI strategy, joins the efforts of the Catalan Government, medical and health institutions (Germans Trias i Pujol Hospital and Fundación Lucha contra el Sida), leading technological research centres (BSC, CIDA, Eurecat, URV and CSIC), mobile phone operators (Telefónica, Orange and GSMA) and Mobile World Capital Barcelona.

The Barcelona Supercomputing Center (BSC) is one of the participating centres. Its task will be to collaborate in the development of a pandemic model for future prevention, including all data sources, as well as in data storage, computing, health data management and meteorological data processing.

In the BSC Life Science department, an integrated geographic information system is being developed that includes COVID-19 case data, hospital situations, population data, weather data, and mobility patterns between regions.

Given the heterogeneity and complexity of the data, tools are being developed for the analysis and visualization of information based on complex-network and time-series analysis. In a complementary way, and in collaboration with the group led by Prof. Alex Arenas of Rovira i Virgili University, the information system is being used to calibrate and validate predictive epidemiological models.

The objective of the initiative is to develop and provide an integrated information system that, on the one hand, generates periodic reports to monitor the health situation and, on the other, supports the development and application of epidemiological models as a decision-making tool for health authorities.

Big Data for the prevention of epidemics

The creation of the Epidemiological Observatory consists of two phases (corresponding to the years 2020 and 2021, respectively). The first will consist of creating and analyzing a mathematical model to compare and predict specific patterns of epidemics such as influenza and COVID-19. This is the objective of the Observatory's first research project, "Big Data for the prevention of epidemics", which will apply Big Data technology and artificial intelligence to clinical data, mobile phone data, census data and meteorological data. All data will be processed in anonymized form and will in no case allow users to be traced. The first results of this phase are expected in the fall.
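As a rough illustration of the kind of compartmental model such a project might calibrate, here is a minimal SIR (susceptible-infected-recovered) sketch. The Observatory's actual models, which fold in clinical, mobility, census and weather data, are far richer; the parameter values below are arbitrary placeholders, not fitted numbers.

```python
def sir_step(s, i, r, beta, gamma, dt=1.0):
    """One Euler step of the classic SIR model.

    s, i, r: susceptible, infected and recovered fractions (sum to 1).
    beta: transmission rate per day; gamma: recovery rate per day.
    """
    new_infections = beta * s * i * dt
    new_recoveries = gamma * i * dt
    return (s - new_infections,
            i + new_infections - new_recoveries,
            r + new_recoveries)

def simulate(s0=0.99, i0=0.01, r0=0.0, beta=0.3, gamma=0.1, days=160):
    """Run the model forward and return the full (s, i, r) trajectory."""
    s, i, r = s0, i0, r0
    history = [(s, i, r)]
    for _ in range(days):
        s, i, r = sir_step(s, i, r, beta, gamma)
        history.append((s, i, r))
    return history

if __name__ == "__main__":
    hist = simulate()
    peak_day = max(range(len(hist)), key=lambda t: hist[t][1])
    print(f"infection peak on day {peak_day}: {hist[peak_day][1]:.3f}")
```

Calibration then amounts to fitting beta and gamma (or their richer, region- and mobility-dependent counterparts) so the simulated curve matches observed case data.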

This project aims, on the one hand, at a major improvement in the pandemic-spread model thanks to the inclusion of clinical, mobile, census and climatological data and, on the other, to provide public health organizations with a decision-support system based on innovative epidemiological models that allow them to anticipate and plan for epidemics, as well as to improve the management of public resources in areas such as the health system, mobility and education, adapting them to real needs.

Treatment of data and privacy of people

One of the priorities of this Observatory is to define and implement a data processing model that fully guarantees the privacy of individuals. To study the transmission of the virus, the project will work with aggregate mobility data rather than individual data, following the European Commission's recommendations and best practices on the use of mobile data to combat COVID-19.

The budget associated with the Observatory is €600,000 for the two planned years of the project, to which must be added the costs of transferring data from mobile operators in phase 2 (in phase 1 the data are transferred free of charge). The Catalan Government will finance 50% of the project, and the rest will be provided by participating partners and external funds through competitive calls.

The Observatory will be located at the Germans Trias i Pujol Hospital (HGTiP) and will have the scientific coordination of Dr. Bonaventura Clotet, head of the HGTiP Infectious Diseases Service and president of the Fight Against AIDS Foundation (FLS).

About BSC

Barcelona Supercomputing Center-Centro Nacional de Supercomputación (BSC-CNS) is the national supercomputing centre in Spain. The centre specialises in high performance computing (HPC) and manages MareNostrum, one of the most powerful supercomputers in Europe, located in the Torre Girona chapel. BSC is involved in a number of projects to design and develop energy-efficient, high-performance chips based on open architectures like RISC-V, for use in future exascale supercomputers and other high performance domains. The centre leads the accelerator pillar of the European Processor Initiative (EPI), creating a high performance accelerator based on RISC-V. More information: www.bsc.es

Source: Barcelona Supercomputing Center


WIMI Holographic AR+AI Vision Drives a New Wave of 5G Applications – Yahoo Finance

NEW YORK, NY / ACCESSWIRE / July 10, 2020 / As a leader in holographic vision, WIMI Hologram Cloud (WIMI) specializes in computer vision holographic cloud services. The company is a comprehensive holographic cloud technology solutions provider whose offerings span holographic AI computer vision synthesis, holographic visual presentation, holographic interactive software development, holographic AR online and offline advertising, paid holographic AR SDK services, 5G holographic communication software development, and holographic AI facial recognition and face-swapping technologies. Its business application scenarios are mainly concentrated in five professional fields: home entertainment, light-field cinema, performing arts systems, commercial release systems and advertising display systems.

As 5G network bandwidth improves, the 5G holographic application market will see explosive growth. High-end applications such as holographic interactive entertainment and holographic conferencing are gradually spreading toward holographic social networking, holographic communication, holographic navigation, holographic family applications and other directions. WIMI plans to use holographic AI facial recognition technology and holographic AI face-swapping technology as its core technologies to support holographic cloud platform services and 5G holographic communication applications across multiple innovative systems.

The World Artificial Intelligence Conference (WAIC) Cloud Summit 2020 was held in Shanghai. Affected by the COVID-19 epidemic, WAIC ran a "Cloud Exhibition" for the first time this year, including human holographic projection and real-time 3D cloud guest experiences. Notably, Internet leaders participated in the conference in different forms during the epidemic: Jack Ma, founder of Alibaba, and Elon Musk, co-founder and CEO of Tesla, addressed the conference by video link via holographic projection.

Ma said the epidemic gave him three things to think about. The first is that mankind cannot live without the earth, but the earth can live without mankind.

Ma said the epidemic has made us understand how estranged we are from the earth. The earth might be better off without us. Because of the outbreak, deer in Nara, Japan, are eating fewer snacks and becoming healthier.

Holography is not a new technology; it dates back to the 1960s. Holographic imaging uses the principles of interference and diffraction to record and reproduce a real image of an object, which requires more than 100 times the information an ordinary camera processes and places high demands on the shooting, processing and transmission platforms. For this reason, the earliest holographic technology was used only to process static photos. With 5G's ultra-low-latency transmission of HD images, holographic imaging is expected to become commercially practical.

Apple is also seen by some as critical to the future of augmented reality, despite limited traction for ARKit so far and its absence from smartglasses (again, so far). Yet Facebook, Microsoft and others are arguably more important to where the market is today. While there are more AR platforms than just these companies, they represent the top of the pyramid for three different types of AR roadmap. And while startup insurgents could make a huge difference, big platforms can exert disproportionate influence on the future of tech markets. Facebook has talked about its long-term potential to launch smartglasses, but in 2020 its primary presence in the AR market is as a mobile AR platform (note: Facebook is also a VR market leader with Oculus). Although there are other ways to define them, mobile AR platforms can be thought of as three broad types:


Mobile AR software's installed base and commercial dynamics look like a variant of mobile, which plays to Facebook's strengths. Advertising could be mobile AR's biggest revenue stream both short and long term, making it critical to an advertising-driven company like Facebook. It's worth noting that a lot of this ad spend is going towards traditional ad units viewed around user-generated mobile AR content (i.e. filters and lenses on messaging platforms), rather than mobile AR ad units themselves. This does not mean that sponsored mobile AR filters and lenses will not be a significant part of the mix going forward.

As Digi-Capital has said since 2016, only Tim Cook and his inner circle really know what Apple is going to do in AR before they do it. This was proven in 2017, when Apple caught many (including us) by surprise with the launch of ARKit. The same thing happened in 2019, when Apple added a triple camera system to the back of the iPhone 11 Pro, instead of the rear facing depth sensor we had anticipated. So where in 2017 we fundamentally revised our forecasts post-ARKit, in 2020 we've done the same thing based on a revised view of Apple's potential roadmap.

As an emerging industry, the global holographic AR market has great growth potential and has attracted substantial investment since 2016, contributing greatly to the industry's growth. Several organizations are investing heavily in research and development of the technology to build solutions for businesses and consumers. Over the years, holographic augmented reality has been widely used in games, media and marketing. Its growing use in sectors such as advertising, entertainment, education and retail is expected to drive demand during the forecast period.

Global holographic AR market size by revenue, 2016-2025

WIMI Hologram Cloud (WIMI) has built a real-time, multi-angle capture and modeling system: collected objects are scanned in full dimensions and synthesized into a three-dimensional model in real time. Its six-degree matrix optical field system uses multiple light sources to construct the imaging field of the holographic virtual image. A binocular parallax intelligent enhancement system dynamically tracks object trajectories and adjusts lighting during acquisition to keep binocular disparity balanced. A multi-image dynamic fusion system applies wide-angle, multi-dimensional image acquisition in narrow spaces to the company's miniaturized cloud-vision holographic warehouse. A high-speed holographic image processing algorithm handles image data at up to 10 GB/s while preserving rendering quality. A stealth polyester optical imaging film, the key component of holographic imaging, allows the holographic image to be displayed cleanly. Holographic virtual-figure reconstruction combines human skeletal motion capture, real-time image rendering, speech recognition and sound simulation to present a virtual human. Finally, a holographic cloud platform provides data storage, image restoration and holographic social features nationwide. Through this combination of systems, WIMI builds a complete 5G holographic communication application platform that supports a range of online terminals and personal devices, while expanding into mainstream 5G holographic applications such as holographic social communication, holographic family interaction, holographic celebrity interaction, holographic online education and holographic online conferencing.

After 5G lands, early scene applications will accelerate the development of VR/AR, and the Chinese market will grow faster than the world as a whole. With 5G, the communication and transmission shortcomings of VR/AR and other immersive scenarios will be remedied, and commercial use of immersive VR/AR games is expected to accelerate. According to research by the China Academy of Information and Communications Technology, the global virtual reality industry is close to 100 billion yuan in scale, with a compound annual growth rate from 2017 to 2022 expected to exceed 70%. According to Greenlight, the global virtual reality industry will exceed 200 billion yuan in 2020, including a 160 billion yuan VR market and a 45 billion yuan AR market. For the Chinese market, IDC's latest Worldwide Augmented and Virtual Reality Spending Guide projects that China's AR/VR market spending will reach US$65.21 billion by 2023, up significantly from the US$6.53 billion forecast for 2019, with a 2018-2023 CAGR of 84.6%, higher than the global growth rate of 78.3%.

The holographic cloud business will be deeply integrated with 5G. With 5G's high data rates and low latency, transmission delay from terminal to business server will average about 6 ms, well below 4G network latency, ensuring that holographic AR remote communication and data transmission run smoothly without stuttering, with richer and more varied interaction across a greater number of collaborating terminals. This makes end-plus-cloud collaboration more efficient. Enhanced mobile broadband (eMBB) and Internet of Things (IoT) applications mean that WIMI's holographic AR advertising and entertainment businesses, along with holographic interactive entertainment, holographic conferencing, holographic social networking, holographic communication and holographic family applications, will see effective growth based on its core 5G + AI facial recognition and AI face-swapping technologies.

Due to changes in 5G network bandwidth, high-end holographic applications are increasingly being applied to social media, communication, navigation, home applications and other scenarios. WIMI's plan is to provide holographic cloud platform services over 5G networks based on two core technologies: holographic artificial intelligence facial recognition and holographic artificial intelligence face-swapping.

The WIMI team observes that humanity grew prosperous through agricultural civilization and labor, meeting the needs of its own production and development through the use of tools, and then stepped into the era of industrial civilization, in which machines helped solve the problems of life and production. The development of artificial intelligence means that in the future more labor will be handed over to AI, freeing human creativity and imagination and putting artificial intelligence at humanity's service.

Nowadays, with the development of the Internet era, the smart-life scenes the WIMI team used to see only in movies and TV shows are gradually appearing in real family life. The development of Internet platforms and technology companies has brought us closer and closer to the smart era. Smart devices and the smart home are just the beginning.

In the field of artificial intelligence, investment in recent years has amounted to tens of billions of dollars. Many companies are active in the AI industry, such as WIMI, Alibaba's DAMO Academy and Huawei with its 5G and various basic research platforms, all exploring AI's future business opportunities. Technology is making life better and bringing the future closer to reality.

Media Contact:

Company: WIMI
Name: Tim Wong
Tel: +86 10 89913328
Email: bjoverseasnews@gmail.com

SOURCE: WIMI


WIMI Holographic AR+AI Vision Drives a New Wave of 5G Applications - Yahoo Finance

Is AI More Threatening Than North Korean Missiles? – NPR

In this April 30, 2015, file photo, Tesla Motors CEO Elon Musk unveils the company's newest products in Hawthorne, Calif. (Ringo H.W. Chiu/AP)


One of Tesla CEO Elon Musk's companies, the nonprofit start-up OpenAI, built a bot that last week defeated some of the world's top gamers in an international video game (e-sport) tournament with a multi-million-dollar pot of prize money.

We're getting very good, it seems, at making machines that can outplay us at our favorite pastimes. Machines now dominate Go, Jeopardy!, chess, and at least some video games.

Instead of crowing over the win, though, Musk is sounding the alarm. Artificial Intelligence, or AI, he argued last week, poses a far greater risk to us now than even North Korean warheads.

No doubt Musk's latest pronouncements make for good advertising copy. What better way to drum up interest in a product than to announce that, well, it has the power to destroy the world.

But is it true? Is AI a greater threat to mankind than the threat posed to us today by an openly hostile, well-armed and manifestly unstable enemy?

AI means, at least, three things.

First, it means machines that are faster, stronger and smarter than us, machines that may one day soon, HAL-like, come to make their own decisions and make up their own values and, so, even to rule over us, just as we rule over the cows. This is a very scary thought, not least when you consider how we have ruled over the cows.

Second, AI means really good machines for doing stuff. I used to have a coffee machine that I'd set with a timer before going to bed; in the morning I'd wake up to the smell of fresh coffee. My coffee maker was a smart, or at least smart-ish, device. Most of the smart technologies, the AIs, in our phones, and airplanes, and cars, and software programs including the ones winning tournaments are pretty much like this. Only more so. They are vastly more complicated and reliable but they are, finally, only smart-ish. The fact that some of these new systems "learn," and that they come to be able to do things that their makers cannot do like win at Go or Dota is really beside the point. A steam hammer can do what John Henry can't but, in the end, the steam hammer doesn't really do anything.

Third, AI is a research program. I don't mean a program in high-tech engineering. I mean, rather, a program investigating the nature of the mind itself. In 1950, the great mathematician Alan Turing published a paper in a philosophy journal in which he argued that by the year 2000 we would find it entirely natural to speak of machines as intelligent. But more significantly, working as a mathematician, he had devised a formal system for investigating the nature of computation that showed, as philosopher Daniel Dennett puts it in his recent book, that you can get competence (the ability to solve problems) without comprehension (by merely following blind rules mechanically). It was not long before philosopher Hilary Putnam would hypothesize the mind is a Turing Machine (and a Turing Machine just is, for all intents and purposes, what we call a computer today). And, thus, the circle closes. To study computational minds is to study our minds, and to build an AI is, finally, to try to reverse engineer ourselves.

Now, Type 3 AI, this research program, is alive and well and a continuing chapter in our intellectual history that is of genuine excitement and importance. This, even though the original hypothesis of Putnam is wildly implausible (and was given up by Putnam decades ago). To give just one example: the problem of the inputs and the outputs. A Turing Machine works by performing operations on inputs. For example, it might erase a 1 on a cell of its tape and replace it with a 0. The whole method depends on being able to give a formal specification of a finite number of inputs and outputs. We can see how that goes for 1s and 0s. But what are the inputs, and what are the outputs, for a living animal, let alone a human being? Can we give a finite list, and specify its items in formal terms, of everything we can perceive, let alone do?
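Turing's point, competence without comprehension, is easy to make concrete: a machine that blindly follows a rule table can nonetheless solve a task. A minimal sketch (the states, symbols, and task here are invented for this example):

```python
# A minimal Turing machine that inverts a binary string: competence
# (it solves the task) without comprehension (it only follows a rule table).

def run_turing_machine(tape):
    # Rule table: (state, symbol) -> (symbol to write, head move, next state)
    rules = {
        ("invert", "0"): ("1", +1, "invert"),
        ("invert", "1"): ("0", +1, "invert"),
        ("invert", "_"): ("_", 0, "halt"),  # blank cell: stop
    }
    tape = list(tape) + ["_"]  # append a blank end-marker cell
    head, state = 0, "invert"
    while state != "halt":
        write, move, state = rules[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape).rstrip("_")

print(run_turing_machine("1011"))  # -> 0100
```

The machine never "knows" it is inverting bits; it only looks up one rule at a time, which is exactly the competence-without-comprehension Dennett describes.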

And there are other problems, too. To mention only one: We don't understand how the brain works. And this means that we don't know that the brain functions, in any sense other than metaphorical, like a computer.

Type 1 AI, the nightmare of machine dominance, is just that, a nightmare, or maybe (for the capitalists making the gizmos) a fantasy. Depending on what we learn pursuing the philosophy of AI, and as luminaries like John Searle and the late Hubert Dreyfus have long argued, it may be an impossible fiction.

Whatever our view on this, there can be no doubt that the advent of smart, rather than smart-ish, machines, the sort of machines that might actually do something intelligent on their own initiative, is a long way off. Centuries off. The threat of nuclear war with North Korea is both more likely and more immediate than this.

Which does not mean, though, that there is not in fact real cause for alarm posed by AI. But if so, we need to turn our attention to Type 2 AI: the smart-ish technologies that are everywhere in our world today. The danger here is not posed by the technologies themselves. They aren't out to get us. They are not going to be out to get us any time soon. The danger, rather, is our increasing dependence on them. We have created a technosphere in which we are beholden to technologies and processes that we do not understand. I don't mean you and me, that we don't understand: No one person can understand. It's all gotten too complicated. It takes a whole team or maybe a university to understand adequately all the mechanisms, for example, that enable air traffic control, or drug manufacture, or the successful production and maintenance of satellites, or the electricity grid, not to mention your car.

Now this is not a bad thing in itself. We are not isolated individuals all alone and we never have been. We are a social animal and it is fine and good that we should depend on each other and on our collective.

But are we rising to the occasion? Are we tending our collective? Are we educating our children and organizing our means of production to keep ourselves safe and self-reliant and moving forward? Are we taking on the challenges that, to some degree, are of our own making? How to feed 7 billion people in a rapidly warming world?

Or have we settled? Too many of us, I fear, have taken up a "user" attitude to the gear of our world. We are passive consumers. Like the child who thinks chickens come from supermarkets, we are hopelessly alienated from how things work.

And if we are, then what are we going to do if some clever young person somewhere, maybe a young woman in North Korea, writes a program to turn things off? This is a serious, immediate and pressing danger.

Alva Noë is a philosopher at the University of California, Berkeley, where he writes and teaches about perception, consciousness and art. He is the author of several books, including his latest, Strange Tools: Art and Human Nature (Farrar, Straus and Giroux, 2015). You can keep up with more of what Alva is thinking on Facebook and on Twitter: @alvanoe


Analytics Insight Magazine Names ‘The 10 Most Innovative Global AI Executives’ – Business Wire

SAN JOSE, Calif. & HYDERABAD, India--(BUSINESS WIRE)--Analytics Insight Magazine, a brand of Stravium Intelligence, has named "The 10 Most Innovative Global AI Executives" in its January issue.

The magazine issue features ten seasoned disruptors who have significantly contributed towards the AI-driven transformation of their respective organizations and industries. These foresighted innovators are driving the next generation of intelligent offerings across current business landscapes globally. Here are the AI Executives who made the list:

Featuring as the Cover Story is Gary Fowler, who serves as the CEO, President, and Co-founder of GSD Venture Studios. Previously, he co-founded top CIS accelerators GVA and SKOLKOVO Startup Academy, where the majority of participating companies achieved success soon after launch. Most recently, Gary co-founded Yva.ai with David Yang, one of Russia's most famous entrepreneurs.

The issue further includes:

Lane Mendelsohn: President of Vantagepoint AI, Lane is an experienced executive with a demonstrated history of working in the computer software industry. He is skilled in Business Planning, Analytical Skills, Sales, Enterprise Software, and E-commerce.

Chethan KR: Chethan is the CEO of SynctacticAI. He is an entrepreneur with 13+ years of experience in the IT and software development industry. He sets the vision for his company's platform, derives growth strategies, and establishes partnerships across industries.

Christopher Rudolf: Christopher is the Founder and CEO of Volv Global and has over 30 years of experience, as a technology entrepreneur and business advisor, working with many blue-chip organisations to solve their critical global scale data problems.

Kalyan Sridhar: Kalyan Sridhar is the Managing Director at PTC, responsible for managing its operations in India, Sri Lanka and Bangladesh. He has 28 years of experience in senior executive roles spanning Sales, Business Development, Business Operations and Channel Sales in the IT industry.

Kashyap Kompella: Kashyap serves as the CEO and Chief Analyst of rpa2ai Research, and has 20 years of experience as an Industry Analyst, Hands-on Technologist, Management Consultant and M&A Advisor to leading companies and startups across sectors.

Kumardev Chatterjee: Kumardev is the Co-founder and CEO of Unmanned Life and also serves as Founder and Chairman of the European Young Innovators Forum. He holds an MSc. in Computer Science from University College London.

Niladri Dutta: Niladri, CEO at Tardid Technologies, has a background in protocol stacks, large transactional applications, and analytics. He ensures that the company has a long-term strategy in place that keeps the team and customers excited all the time.

Sarath SSVS: Sarath serves as the CEO and Founder of two AI-driven companies, SeeknShop.IO and IntentBI. He is a seasoned innovator with over 13 years of experience in machine learning, data science, and product management.

Prithvijit Roy: Prithvijit is the Founder and CEO of BRIDGEi2i Analytics Solutions. His specialties include business analytics, big data, data mining, shared services, knowledge process outsourcing (KPO), analytics consulting services, and managed analytics services, among others.

The disruptive wave of AI has made significant impacts across multiple industries. AI breakthroughs have even influenced the vision and practices of top industry executives and pushed them towards becoming innovative AI pioneers in their own space. Ushering in the new era, more and more leaders are spurring innovation and spearheading the transformation journey to translate their revamped vision into best AI practices.

Read the detailed coverage here. For more information, please visit https://www.analyticsinsight.net.

About Analytics Insight

Analytics Insight is an influential platform dedicated to insights, trends, and opinions from the world of data-driven technologies. It monitors developments, recognition, and achievements made by AI, big data and analytics companies across the globe. The Analytics Insight Magazine features opinions and views from top leaders and executives in the industry who share their journey, experiences, success stories, and knowledge to grow profitable businesses.

To set up an interview or advertise your brand, contact info@analyticsinsight.net.


Service that uses AI to identify gender based on names looks incredibly biased – The Verge

Some tech companies make a splash when they launch, others seem to bellyflop.

Genderify, a new service that promised to identify someone's gender by analyzing their name, email address, or username with the help of AI, looks firmly to be in the latter camp. The company launched on Product Hunt last week, but picked up a lot of attention on social media as users discovered biases and inaccuracies in its algorithms.

Type the name "Meghan Smith" into Genderify, for example, and the service offers the assessment: "Male: 39.60%, Female: 60.40%." Change that name to "Dr. Meghan Smith," however, and the assessment changes to: "Male: 75.90%, Female: 24.10%." Other names prefixed with "Dr." produce similar results, while inputs seem to generally skew male. "Test@test.com" is said to be 96.90 percent male, for example, while "Mrs Joan smith" is 94.10 percent male.
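The pattern described here, with unseen or titled inputs defaulting toward "male," is exactly what a naive lookup over a binary name/gender table produces. A minimal sketch of that failure mode (the database, counts, default, and function are entirely hypothetical, not Genderify's actual code or data):

```python
# Illustrative sketch of how a lookup-based gender guesser produces the
# skewed results described above. All names, counts, and the fallback
# behavior are hypothetical.

def guess_gender(name, database, default=("male", 0.95)):
    """Return a (label, confidence) pair from name -> count lookups."""
    key = name.lower()
    if key in database:
        male, female = database[key]
        total = male + female
        if male >= female:
            return ("male", male / total)
        return ("female", female / total)
    # No title handling: "dr. meghan smith" is not a database key, so any
    # titled name or email address falls through to the male-skewed default.
    return default

db = {"meghan smith": (40, 60)}  # hypothetical observation counts
print(guess_gender("meghan smith", db))      # ('female', 0.6)
print(guess_gender("dr. meghan smith", db))  # ('male', 0.95)
print(guess_gender("test@test.com", db))     # ('male', 0.95)
```

Nothing here "learns" a stereotype; the bias comes from the training counts and the default, which is why such systems reproduce whatever skew their databases contain.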

The outcry against the service has been so great that Genderify tells The Verge it's shutting down altogether. "If the community don't want it, maybe it was fair," said a representative via email. Genderify.com has been taken offline, and its free API is no longer accessible.

Although these sorts of biases appear regularly in machine learning systems, the thoughtlessness of Genderify seems to have surprised many experts in the field. The response from Meredith Whittaker, co-founder of the AI Now Institute, which studies the impact of AI on society, was somewhat typical. "Are we being trolled?" she asked. "Is this a psyop meant to distract the tech+justice world? Is it cringey tech April Fools' Day already?"

The problem is not that Genderify made assumptions about someone's gender based on their name. People do this all the time, and sometimes make mistakes in the process. That's why it's polite to find out how people self-identify and how they want to be addressed. The problem with Genderify is that it automated these assumptions, applying them at scale while sorting individuals into a male/female binary (and so ignoring individuals who identify as non-binary) and reinforcing gender stereotypes in the process (such as: if you're a doctor, you're probably a man).

The potential harm of this depends on how and where Genderify was applied. If the service was integrated into a medical chatbot, for example, its assumptions about users' genders might have led to the chatbot issuing misleading medical advice.

Thankfully, Genderify didn't seem to be aiming to automate this sort of system; it was primarily designed to be a marketing tool. As Genderify's creator, Arevik Gasparyan, said on Product Hunt: "Genderify can obtain data that will help you with analytics, enhancing your customer data, segmenting your marketing database, demographic statistics, etc."

In the same comment section, Gasparyan acknowledged the concerns of some users about bias and ignoring non-binary individuals, but didn't offer any concrete answers.

One user asked: "Let's say I choose to identify as neither Male or Female, how do you approach this? How do you avoid gender discrimination? How are you tackling gender bias?" To which Gasparyan replied that the service makes its decisions based on "already existing binary name/gender databases," and that the company was "actively looking into ways of improving the experience for transgender and non-binary visitors" by "separating the concepts of name/username/email from gender identity." It's a confusing answer, given that the entire premise of Genderify is that this data is a reliable proxy for gender identity.

The company told The Verge that the service was very similar to existing companies that use databases of names to guess an individual's gender, though none of them use AI.

"We understand that our model will never provide ideal results, and the algorithm needs significant improvements, but our goal was to build a self-learning AI that will not be biased as any existing solutions," said a representative via email. "And to make it work, we very much relied on the feedback of transgender and non-binary visitors to help us improve our gender detection algorithms as best as possible for the LGBTQ+ community."

Update Wednesday July 29, 12:42PM ET: Story has been updated to confirm that Genderify has been shut down and to add additional comment from a representative of the firm.


Can a Crowdsourced AI Medical Diagnosis App Outperform Your Doctor? – Scientific American

Shantanu Nundy recognized the symptoms of rheumatoid arthritis when his 31-year-old patient suffering from crippling hand pain checked into Mary's Center in Washington, D.C. Instead of immediately starting treatment, though, Nundy decided first to double-check his diagnosis using a smartphone app that helps with difficult medical cases by soliciting advice from doctors worldwide. Within a day, Nundy's hunch was confirmed. The app had used artificial intelligence (AI) to analyze and filter advice from several medical specialists into an overall ranking of the most likely diagnoses. Created by the Human Diagnosis Project (Human Dx), an organization that Nundy directs, the app is one of the latest examples of growing interest in human-AI collaboration to improve health care.

Human Dx advocates the use of machine learning, a popular AI technique that automatically learns by classifying patterns in data, to crowdsource and build on the best medical knowledge from thousands of physicians across 70 countries. Physicians at several major medical research centers have shown early interest in the app. Human Dx on Thursday announced a new partnership with top medical professional organizations, including the American Medical Association and the Association of American Medical Colleges, to promote and scale up Human Dx's system. The goal is to provide timely and affordable specialist advice to general practitioners serving millions of people worldwide, in particular at so-called "safety net" hospitals and clinics throughout the U.S. that offer access to care regardless of a patient's ability to pay.

"We need to find solutions that scale the capacity of existing doctors to serve more patients at the same or cheaper cost," says Jay Komarneni, founder and chair of Human Dx. Roughly 30 million uninsured Americans rely on safety net facilities, which generally have limited or no access to medical specialists. Those patients often face the stark choice of either paying out of pocket for an expensive in-person consultation or waiting for months to be seen by the few specialists working at public hospitals, which receive government funding to help pay for patient care, Komarneni says. Meanwhile, studies have shown that between 25 percent and 30 percent of such expensive specialist visits could be conducted as online consultations between physicians, sparing patients the additional costs or long wait times.

Komarneni envisions augmenting or extending physician capacity with AI to close this specialist gap. Within five years Human Dx aims to become available to all 1,300 safety net community health centers and free clinics in the U.S. The same remote consultation services could also be made available to millions of people around the world who lack access to medical specialists, Komarneni says.

When a physician needs help diagnosing or treating a patient, they open the Human Dx smartphone app or visit the project's Web page and type in their clinical question as well as their working diagnosis. The physician can also upload images and test results related to the case and add details such as any medication the patient takes regularly. The physician then requests help, either from specific colleagues or the network of doctors who have joined the Human Dx community. Over the next day or so, Human Dx's AI program aggregates all of the responses into a single report. It is the new digital equivalent of a "curbside consult," where a physician might ask a friend or colleague for quick input on a medical case without setting up a formal, expensive consultation, says Ateev Mehrotra, an associate professor of health care policy and medicine at Harvard Medical School and a physician at Beth Israel Deaconess Medical Center. "It makes intuitive sense that [crowdsourced advice] would be better advice," he says, but how much better is an open scientific question. Still, he adds, "I think it's also important to acknowledge that physician diagnostic errors are fairly common." One of Mehrotra's Harvard colleagues has been studying how the AI-boosted Human Dx system performs in comparison with individual medical specialists, but has yet to publish the results.
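The aggregation step is described only at a high level: responses come in, and the system merges them into a ranking of likely diagnoses. One simple way such aggregation could work is rank-weighted voting; this sketch is an assumption for illustration, not Human Dx's actual (unpublished) algorithm:

```python
from collections import defaultdict

# Hypothetical sketch: aggregate crowdsourced diagnoses into a ranked list
# by weighting each respondent's votes, with earlier ranks counting more.
# The weights, scoring rule, and example data are all invented here.

def rank_diagnoses(responses):
    # responses: list of (respondent_weight, [diagnoses in ranked order])
    scores = defaultdict(float)
    for weight, dx_list in responses:
        for rank, dx in enumerate(dx_list):
            scores[dx] += weight / (rank + 1)  # reciprocal-rank scoring
    total = sum(scores.values())
    ranked = sorted(scores.items(), key=lambda kv: -kv[1])
    return [(dx, score / total) for dx, score in ranked]  # normalized

responses = [
    (1.0, ["rheumatoid arthritis", "lupus"]),
    (0.8, ["rheumatoid arthritis"]),
    (0.5, ["psoriatic arthritis", "rheumatoid arthritis"]),
]
print(rank_diagnoses(responses)[0][0])  # -> rheumatoid arthritis
```

A real system would also need quality control, for instance down-weighting respondents with poor track records, which is the issue Mehrotra and Desai raise later in the article.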

Mehrotra's cautionary note comes from research that he and Nundy published last year in JAMA Internal Medicine. That study used the Human Dx service as a neutral platform to compare the diagnostic accuracy of human physicians with third-party symptom-checker Web sites and apps used by patients for self-diagnosis. In this case, the humans handily outperformed the symptom checkers' computer algorithms. But even physicians provided incorrect diagnoses about 15 percent of the time, which is comparable with past estimates of physician diagnostic error.

Human Dx could eventually help improve the medical education and training of human physicians, says Sanjay Desai, a physician and director of the Osler Medical Training Program at Johns Hopkins University. As a first step in checking the service's capabilities, he and his colleagues ran a study whose preliminary results showed the app could tell the difference between the diagnostic abilities of medical residents and fully trained physicians. Desai wants to see the service become a system that could track the clinical performance of individual physicians and provide targeted recommendations for improving specific skills. Such objective assessments could be an improvement over the current method of human physicians qualitatively judging their less experienced colleagues. The open question, Desai says, is whether the algorithms can be created to provide finer insights into an [individual] doctor's strengths and weaknesses in clinical reasoning.

Human Dx is one of many AI systems being tested in health care. The IBM Watson Health unit is perhaps the most prominent, with the company for the past several years claiming that its AI is assisting major medical centers and hospitals in tasks such as genetically sequencing brain tumors and matching cancer patients to clinical trials. Studies have shown AI can help predict which patients will suffer from heart attacks or strokes in 10 years or even forecast which will die within five. Tech giants such as Google have joined start-ups in developing AI that can diagnose cancer from medical images. Still, AI in medicine is in its early days and its true value remains to be seen. Watson appears to have been a success at Memorial Sloan Kettering Cancer Center, yet it floundered at The University of Texas M. D. Anderson Cancer Center, although it is unclear whether the problems resulted from the technology or its implementation and management.

The Human Dx Project also faces questions in achieving widespread adoption, according to Mehrotra and Desai. One prominent challenge involves getting enough physicians to volunteer their time and free labor to meet the potential rise in demand for remote consultations. Another possible issue is how Human Dx's AI quality control will address users who consistently deliver wildly incorrect diagnoses. The service will also require a sizable user base of medical specialists to help solve those trickier cases where general physicians may be at a loss.

In any case, the Human Dx leaders and the physicians helping to validate the platform's usefulness seem to agree that AI alone will not take over medical care in the near future. Instead, Human Dx seeks to harness both machine learning and the crowdsourced wisdom of human physicians to make the most of limited medical resources, even as the demands for medical care continue to rise. The complexity of practicing medicine in real life will require both humans and machines to solve problems, Komarneni says, as opposed to pure machine learning.


Greta Thunberg Says UN Climate Conference Is a Scam and She’s Not Attending

The UN's upcoming COP27 climate conference in Egypt is basically a scam rife with greenwashing, Greta Thunberg says, and she won't be attending.

COP Out

Ever since she lambasted world leaders at a UN conference in 2018 when she was only 15 years old, Swedish environmental activist Greta Thunberg has had the ear of the international community.

Now, Thunberg says she's skipping out on next week's COP27 UN climate summit in Egypt. Why? Because it's rife with "greenwashing."

"I'm not going to COP27 for many reasons, but the space for civil society this year is extremely limited," Thunberg said at a press event for her book, "The Climate Book," as quoted by The Guardian. "The COPs are mainly used as an opportunity for leaders and people in power to get attention, using many different kinds of greenwashing."

Ultimately, in Thunberg's view, the COP conferences "are not really meant to change the whole system" and instead only promote incremental change. Bluntly put, they're feel-good events that don't accomplish much, so she's bowing out.

Wasted Breath

It's not an unfair assessment. For all the pledges made to drastically cut back emissions and achieve net carbon zero by 2050, very few nations have followed through in the short term. And in Europe, the energy crisis in the wake of the war in Ukraine has further sidelined those climate commitments.

So we can't blame her for not going. But it's a bit disheartening that even a tenacious young spokesperson like Thunberg has given up on convincing world leaders at the biggest climate summit in the world.

Maybe it's indicative of the frustrations of her generation at large. When Thunberg was asked what she thought about the recent wave of Just Stop Oil protests that included activists throwing soup on a Van Gogh painting, she said that she viewed what many detractors perceived as a dumb stunt to be symptomatic of the world's failure to effect meaningful environmental change.

"People are trying to find new methods because we realize that what we have been doing up until now has not done the trick," she replied, as quoted by Reuters. "It's only reasonable to expect these kinds of different actions."

Maybe the real question is: if even a UN climate conference isn't the place to get the message out and change hearts, where's the right place, and what's the right way? If the headlines are any indication, zoomers are struggling to figure that out.

More on Greta Thunberg: Greta Thunberg Thinks Germany Shutting Down Its Nuclear Plants Is a Bad Idea

The post Greta Thunberg Says UN Climate Conference Is a Scam and She's Not Attending appeared first on Futurism.


There’s Something Strange About How These Stars Are Moving, Scientists Say

Astronomers are puzzled by the strange behavior of a crooked cluster of stars, which appears to be following an alternative theory of gravity.

Astronomers are puzzled by the strange behavior of certain crooked clusters of stars, which appear to be violating our conventional understanding of gravity.

Massive clusters of stars usually are bound together in spirals at the center of galaxies. Some of these clusters fall under a category astrophysicists call open star clusters, which are created in a relatively short period of time as they ignite in a huge cloud of gas.

During this process, loose stars accumulate in a pair of "tidal tails," one of which is being pulled behind, while the other moves ahead.

"According to Newton’s laws of gravity, it’s a matter of chance in which of the tails a lost star ends up," said Jan Pflamm-Altenburg of the University of Bonn in Germany, co-author of a new paper published in the Monthly Notices of the Royal Astronomical Society, in a statement. "So both tails should contain about the same number of stars."

But some of their recent observations seemingly defy conventional physics.

"However, in our work we were able to prove for the first time that this is not true," Pflamm-Altenburg added. "In the clusters we studied, the front tail always contains significantly more stars nearby to the cluster than the rear tail."

In fact, their new findings are far more in line with a different theory called "Modified Newtonian Dynamics" (MOND).

"Put simply, according to MOND, stars can leave a cluster through two different doors," Pavel Kroupa, Pflamm-Altenburg's colleague at the University of Bonn and lead author, explained in the statement. "One leads to the rear tidal tail, the other to the front."

"However, the first is much narrower than the second — so it’s less likely that a star will leave the cluster through it," he added. "Newton’s theory of gravity, on the other hand, predicts that both doors should be the same width."
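The article never spells out what MOND actually changes. For context, the standard formulation (an assumption added here, not sourced from the article) modifies Newtonian dynamics only at very low accelerations:

```latex
a\,\mu\!\left(\frac{a}{a_0}\right) = a_N = \frac{G M}{r^2},
\qquad
\mu(x) \approx
\begin{cases}
1, & x \gg 1 \quad \text{(Newtonian regime)} \\
x, & x \ll 1 \quad \text{(deep-MOND regime, so } a \approx \sqrt{a_N a_0}\text{)}
\end{cases}
```

with the empirical constant $a_0 \approx 1.2 \times 10^{-10}\,\mathrm{m\,s^{-2}}$. Stars drifting through a cluster's sparse tidal tails experience accelerations near this threshold, which is why the two theories can predict measurably different escape behavior there.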

The researchers' simulations, taking MOND into consideration, could explain a lot. For one, they suggest that open star clusters survive for a much shorter time than expected from Newton's laws of physics.

"This explains a mystery that has been known for a long time," Kroupa explained. "Namely, star clusters in nearby galaxies seem to be disappearing faster than they should."

But not everybody agrees that Newton's laws should be replaced with MOND, something that could shake the foundations of physics.

"It’s somewhat promising, but it does not provide completely definitive evidence for MOND," University of Saint Andrews research fellow Indranil Banik told New Scientist. "This asymmetry does make more sense in MOND, but in any individual cluster there could be other effects that are causing it — it’s a bit unlikely that would happen in all of them, though."

The researchers are now trying to home in on an even more accurate picture by stepping up the accuracy of their simulations, which could either support their MOND theory or conclude that Newton was, in fact, correct the first time around.

More on star clusters: Something Is Ripping Apart the Nearest Star Cluster to Earth

The post There's Something Strange About How These Stars Are Moving, Scientists Say appeared first on Futurism.


NASA Sets Launch Date for Mission to $10 Quintillion Asteroid

After disappointing setbacks and delays, NASA has finally got its mission to an invaluable asteroid made of precious metals back on track.

Rock of Riches

After disappointing setbacks and a delay over the summer, NASA says it's finally reviving its mission to explore a tantalizing and giant space rock lurking deep in the Asteroid Belt.

Known as 16 Psyche, the NASA-targeted asteroid comprises a full one percent of the mass of the Asteroid Belt and is speculated to be the core of an ancient planet. But Psyche's size isn't what intrigues scientists so much as its metal-rich composition, believed to harbor a wealth of iron, nickel, and gold worth an estimated $10 quintillion, easily exceeding the worth of the Earth's entire economy. Although, to be clear, scientists aren't interested in the metals' monetary value but rather in the asteroid's possibly planetary origins.

Back On Track

Initially slated to launch in August 2022, NASA's aptly named Psyche spacecraft was plagued by a persistent flight software issue that led the space agency to miss its launch window, which closed on October 11.

But after surviving an independent review to determine whether the mission should be scrapped, NASA has formally announced that its spacecraft's journey to Psyche will go ahead, with launch planned aboard a SpaceX Falcon Heavy rocket as early as October 10, 2023.

"I'm extremely proud of the Psyche team," said Laurie Leshin, director of NASA's Jet Propulsion Laboratory, in a statement. "During this review, they have demonstrated significant progress already made toward the future launch date. I am confident in the plan moving forward and excited by the unique and important science this mission will return."

Although the new launch date is only a little over a year late, the expected arrival at the asteroid is set back by more than three years — 2029 instead of 2026 — because the spacecraft must wait for another opportunity to slingshot around Mars using the planet's gravity.

Peering Into a Planet

Once it arrives, the NASA spacecraft will orbit around the asteroid and probe it with an array of instruments, including a multispectral imager, gamma ray and neutron spectrometers, and a magnetometer, according to the agency.

In doing so, scientists hope to determine if the asteroid is indeed the core of a nascent planet known as a planetesimal. If it is, it could prove to be an invaluable opportunity to understand the interior of terrestrial planets like our own.

More on NASA: NASA Announces Plan to Fix Moon Rocket, and Maybe Launch It Eventually

Twitter Working on Plan to Charge Users to Watch Videos

According to an internal email obtained by The Washington Post, Musk wants to have Twitter charge users to view videos posted by content creators.

Now that Tesla CEO Elon Musk has taken over Twitter, the billionaire has been frantically shuffling through ambitious plans to turn the ailing social media platform into a revenue-driving business.

Case in point: according to an internal email obtained by The Washington Post, Musk is plotting for Twitter to charge users to view videos posted by content creators and take a cut of the proceeds — a highly controversial idea that has already been met with internal skepticism.

In the email, a team of Twitter engineers "identified the risk as high," citing "risks related to copyrighted content, creator/user trust issues, and legal compliance."

In short, Musk is blazing ahead with his infamously ambitious timelines — a "move fast and break things" approach that could signify a tidal change for Twitter's historically sluggish approach to launching new features.

Musk has already made some big structural changes to Twitter, having fired high-up positions at the company and dissolved its board of directors.

The company will also likely be facing mass layoffs, according to The Washington Post.

The feature detailed in the email, referred to internally as "Paywalled Video," allows creators to "enable the paywall once a video has been added to the tweet" and choose from a preset list of prices ranging from $1 to $10.

"This will also give Twitter a revenue stream to reward content creators," Musk tweeted on Tuesday, adding that "creators need to make a living!"

But whether Twitter users will be willing to pay for stuff that was previously free remains anything but certain.

Musk has already announced that he is planning to charge $8 a month for Twitter users to stay verified, which has been met with derision.

The billionaire CEO is facing an uphill battle. Now that the company is private, he has to pay around $1 billion in annual interest payments, a result of his $44 billion buyout, according to the WaPo.

Compounding the trouble, Reuters reported last week that Twitter is bleeding some of its most active users.

Meanwhile, Musk's chaotic moves are likely to alienate advertisers: the Interpublic Group, a massive advertising holding company, has recommended that its clients suspend all paid advertising on the platform for at least a week.

That doesn't bode well. It's not out of the question that a paywalled video feature may facilitate the monetization of pornographic content, which may end up scaring off advertisers even further — but Twitter's exact intentions for the feature are still unclear.

According to Reuters, around 13 percent of the site's content is currently marked not safe for work (NSFW).

The paywalled video push is part of Musk's attempt to shift the platform's revenue away from advertising. In a tweet last week, he promised advertisers that Twitter wouldn't become a "free-for-all hellscape."

But that hasn't stopped advertisers from already leaving in droves.

All in all, a paywalled video feature could mark a significant departure for Twitter, a platform still primarily known for short snippets of text.

For now, all we can do is watch.

READ MORE: Elon Musk’s Twitter is working on paid-video feature with ‘high’ risk [The Washington Post]

More on Twitter: Elon Musk Pleads With Stephen King to Pay for Blue Checkmark

This Deepfake AI Singing Dolly Parton’s "Jolene" Is Worryingly Good

Holly Herndon uses her AI twin Holly+ to sing a cover of Dolly Parton's "Jolene."

AI-lands in the Stream

Sorry, but not even Dolly Parton is sacred amid the encroachment of AI into art.

Holly Herndon, an avant-garde pop musician, has released a cover of Dolly Parton's beloved and frequently covered hit single "Jolene." Except it's not really Herndon singing, but her digital deepfake twin, known as Holly+.

The music video features a 3D avatar of Holly+ frolicking in what looks like a decaying digital world.

And honestly, it's not bad — dare we say, almost kind of good? Herndon's rendition croons with a big, round sound, soaked in reverb and backed by a bouncy, acoustic riff and a chorus of plaintive wailing. And she has a nice voice. Or, well, Holly+ does. Maybe predictably indie-folk, but it's certainly an effective demonstration of AI with a hint of creative flair, or at least effective curation.

Checking the Boxes

But the performance is also a little unsettling. For one, the giant inhales between verses are too long to be real and are almost cajolingly dramatic. The vocals themselves are strangely even and, despite the somber tone affected by the AI, lack Parton's iconic vulnerability.

Overall, it feels like the AI is simply checking the boxes of what makes a good, swooning cover after listening to Jeff Buckley's "Hallelujah" a million times — which, to be fair, is a pretty good starting point.

Still, it would be remiss to downplay what Herndon has managed to pull off here; the criticisms mostly reflect the AI's limited capabilities rather than her chops as a musician. The AI's seams are likely intentional, if her previous work is anything to go by.

Either way, if you didn't know you were listening to an AI from the get-go, you'd probably be fooled. And that alone is striking.

The Digital Self

Despite AI's usually ominous implications for art, Herndon views her experiment as a "way for artists to take control of their digital selves," according to a statement on her website.

"Vocal deepfakes are here to stay," Herndon was quoted saying. "A balance needs to be found between protecting artists, and encouraging people to experiment with a new and exciting technology."

Whether Herndon's views are fatalistic or prudently pragmatic remains to be seen. But even if her intentions are meant to be good for artists, it's still worrying that an AI could pull off such a convincing performance.

More on AI music: AI That Generates Music from Prompts Should Probably Scare Musicians

Manslaughter Case Has a Strange Twist: Tesla That Killed Couple Was on Autopilot

A court case is about to kick off in Los Angeles later this month, involving a fatal crash caused by a Tesla vehicle that was on Autopilot.

A provocative manslaughter case is about to kick off in Los Angeles later this month, involving a fatal crash caused by a Tesla vehicle that had the company's controversial Autopilot feature turned on.

It's the first case of its kind, and one that could set a precedent for future crashes involving cars and driver-assistance software, Reuters reports.

We won't know the exact defense until the case gets under way, but the crux is this: the man who was behind the wheel of the Tesla is facing manslaughter charges and has pleaded not guilty, setting up potentially novel legal arguments about culpability in a deadly collision when, technically speaking, it wasn't a human driving the car.

"Who's at fault, man or machine?" asked Edward Walters, an adjunct professor at Georgetown University, in an interview with Reuters. "The state will have a hard time proving the guilt of the human driver because some parts of the task are being handled by Tesla."

The upcoming trial concerns a fatal collision that took place in 2019, when Kevin George Aziz Riad ran a red light in his Tesla Model S and collided with a Honda Civic, killing a couple who were reportedly on their first date.

According to vehicle data, Riad did not apply the brakes but had a hand on the steering wheel. Perhaps most critically, though, the Tesla's Autopilot feature was turned on in the moments leading up to the crash.

Riad is facing manslaughter charges, with prosecutors arguing his actions were reckless.

Meanwhile, Riad's lawyers have argued that he shouldn't be charged with a crime, but have so far stopped short of publicly placing blame on Tesla's Autopilot software.

Tesla is not directly implicated in the upcoming trial and isn't facing charges in the case, according to Reuters.

A separate trial, however, involving the family of one of the deceased is already scheduled for next year — but this time, Tesla is the defendant.

"I can't say that the driver was not at fault, but the Tesla system, Autopilot, and Tesla spokespeople encourage drivers to be less attentive," the family's attorney Donald Slavik told Reuters.

"Tesla knows people are going to use Autopilot and use it in dangerous situations," he added.

Tesla is already under heavy scrutiny over its Autopilot and so-called Full Self-Driving software, even as it concedes that the features "do not make the vehicle autonomous" and that drivers must remain attentive to the road at all times.

Critics argue that Tesla's marketing is misleading and that it's only leading to more accidents — not making the roads safer, as Tesla CEO Elon Musk has argued in the past.

In fact, a recent survey found that 42 percent of Tesla Autopilot users said they feel "comfortable treating their vehicles as fully self-driving."

Regulators are certainly already paying attention. The news comes a week after Reuters revealed that the Department of Justice is investigating Tesla over Autopilot.

Last year, the National Highway Traffic Safety Administration (NHTSA) announced an investigation of accidents in which Teslas have smashed into emergency response vehicles that were pulled over with sirens or flares.

This month's trial certainly stands the chance of setting a precedent. Was Riad fully at fault or was Tesla's Autopilot at least partially to blame as well?

The answer now lies in the hands of a jury.

READ MORE: Tesla crash trial in California hinges on question of 'man vs machine' [Reuters]

More on Autopilot: Survey: 42% of Tesla Autopilot Drivers Think Their Cars Can Drive Themselves

Scientists Found a Way to Control How High Mice Got on Cocaine

A team of neuroscientists at the University of Wisconsin claims to have found a way to control how high mice can get on cocaine.

A team of neuroscientists at the University of Wisconsin claims to have found a way to control how high mice can get on a given amount of cocaine.

And don't worry — while that may sound like a particularly frivolous plot concocted by a team of evil scientists, the goal of the research is well-meaning.

The team, led by University of Wisconsin neuroscientist Santiago Cuesta, was investigating how the gut microbiome can influence how mice and humans react to ingesting the drug.

The research, detailed in a new paper published this week in the journal Cell Host & Microbe, sheds light on a vicious feedback loop that could explain cases of substance abuse disorders — and possibly lay the groundwork for future therapeutic treatments.

In a number of experiments on mice, the researchers found that cocaine was linked to the growth of common gut bacteria, which feed on glycine, a chemical that facilitates basic brain functions.

The lower the levels of glycine in the brain, the more the mice reacted to the cocaine, exhibiting abnormal behaviors.

To test the theory, the scientists injected the mice with a genetically modified amino acid that the gut bacteria cannot break down. As a result, the mice's behavior returned to normal.

In other words, the amino acid could curb cocaine addiction-like behaviors — at least in animal models.

"The gut bacteria are consuming all of the glycine and the levels are decreasing systemically and in the brain," said Vanessa Sperandio, senior author and microbiologist at the University of Wisconsin, in a statement. "It seems changing glycine overall is impacting the glutamatergic synapses that make the animals more prone to develop addiction."

It's an unorthodox approach to treating addiction, but an intriguing one — if it works in people, that is.

"Usually, for neuroscience behaviors, people are not thinking about controlling the microbiota, and microbiota studies usually don't measure behaviors, but here we show they’re connected," Cuesta added. "Our microbiome can actually modulate psychiatric or brain-related behaviors."

In short, the research could lead to new ways of treating psychiatric conditions such as substance use disorder by adjusting the gut microbiome rather than altering brain chemistry.

"I think the bridging of these communities is what's going to move the field forward, advancing beyond correlations towards causations for the different types of psychiatric disorders," Sperandio argued.

READ MORE: How gut bacteria influence the effects of cocaine in mice [Cell Press]

More on addiction: Study: Magic Mushrooms Helped 83% of People Cut Excessive Drinking

Scientists Spot "Stripped, Pulsating Core" of Star Caused By Horrific Accident

Core Dump

Scientists studying a group of stars made an astonishing but "serendipitous" discovery when they realized that Gamma Columbae, a fairly average celestial body, might actually be the "stripped pulsating core of a massive star," according to a study published this week in Nature Astronomy.

If true, that means Gamma Columbae is missing the envelope, or vast shroud of gas, that hides a star's nuclear fusion powered core.

What caused the stripping of this atmospheric envelope isn't definitively known, but the scientists posit that Gamma Columbae, after running out of hydrogen, could have expanded its envelope and swallowed up a nearby star, likely its binary partner. But in the middle of that relatively common process, something appears to have gone horrifically wrong, ejecting the envelope — and possibly even causing the two stars to merge.

Naked Core

Before the disaster, the scientists believe, Gamma Columbae could have been up to 12 times the mass of our Sun. Now it's a comparatively meager five solar masses.

Although a naked stellar core missing its envelope has been theorized to exist, it's never been observed in a star this size.

"Having a naked stellar core of such a mass is unique so far," said study co-author Norbert Przybilla, head of the Institute for Astro- and Particle Physics at the University of Innsbruck, in an interview with Vice.

Astronomers had an idea of what the cores of massive and low-mass stars looked like, Przybilla continued, but there wasn't "much evidence" for cores of intermediate masses.

Star Power

It's an exceedingly rare find because the star is in "a short-lived post-stripping structural re-adjustment phase" that will last only 10,000 years, according to the study.

That's "long for us humans but in astronomical timescales, very, very short," Przybilla told Vice. "It will always stay as a peculiar object."

The opportunity to study such a rarely exposed stellar core could give scientists an invaluable look into the evolution of binary star systems. And whatever astronomers learn from the star, it's a fascinating glimpse of stellar destruction on a nearly incomprehensible scale.

More on stars: Black Hole Spotted Burping Up Material Years After Eating a Star

US Gov to Crack Down on "Bossware" That Spies On Employees’ Computers

In the era of remote work, employers have turned to invasive monitoring software to keep tabs on their employees, and the federal government is taking notice.

Spying @ Home

Ever since the COVID-19 pandemic drove a wave of working from home, companies have been relentless in their efforts to digitally police and spy on remote employees by using what's known as "bossware." That's the pejorative name for software that tracks the websites an employee visits, screenshots their computer screens, and even records their faces and voices.

And now, the National Labor Relations Board (NLRB), an agency of the federal government, is looking to intervene.

"Close, constant surveillance and management through electronic means threaten employees' basic ability to exercise their rights," said NLRB general counsel Jennifer Abruzzo, in a Monday memo. "I plan to urge the Board to apply the Act to protect employees, to the greatest extent possible, from intrusive or abusive electronic monitoring and automated management practices."

Undoing Unions

In particular, Abruzzo is worried about how bossware could infringe on workers' rights to unionize. It's not hard to imagine how such invasive surveillance could be used to bust unionization. Even if the technology isn't explicitly deployed to impede organization efforts, the ominous presence of the surveillance on its own can be a looming deterrent, which Abruzzo argues is illegal.

And now is the perfect moment for the NLRB to step in. The use and abuse of worker surveillance tech in general — not just bossware — has been "growing by the minute," Mark Gaston Pearce, executive director of the Workers' Rights Institute at Georgetown Law School, told CBS.

"Employers are embracing technology because technology helps them run a more efficient business," Pearce explained. "… What comes with that is monitoring a lot of things that employers have no business doing."

Overbearing Overlord

In some ways, surveillance tech like bossware can be worse than having a nosy, actual human boss. Generally speaking, in a physical workplace employees have an understanding of how much privacy they have (unless they work at a place like Amazon or Walmart, that is).

But when bossware spies on you, who knows how much information an employer could be gathering — or even when they're looking in. And if it surveils an employee's personal computer, which more often than not contains plenty of personal information that a boss has no business seeing, that's especially invasive.

Which is why Abruzzo is pushing to require employers to disclose exactly how much they're tracking.

It's a stern message from the NLRB, but at the end of the day, it's just a memo. We'll have to wait and see how enforcing it pans out.

More on surveillance: Casinos to Use Facial Recognition to Keep "Problem Gamblers" Away

Huge Drone Swarm to Form Giant Advertisement Over NYC Skyline

Someone apparently thought it was a great idea to fly 500 drones over NYC as part of an ad experiment without much warning.

Droning On

Someone thinks it's a great idea to fly 500 drones over New York City to create a huge ad in the sky on Thursday evening. Because New Yorkers certainly don't have any historical reason to mistrust unknown aircraft over their skyline, right?

As Gothamist reports, the drone swarm is part of a "surreal takeover of New York City’s skyline" on behalf of — we shit you not — the mobile game Candy Crush.

Fernanda Romano, Candy Crush's chief marketing officer, told Gothamist that the stunt will "turn the sky into the largest screen on the planet" using the small, light-up drones.

Though this is not the first time the Manhattan skyline has been used as ad space — that distinction goes to the National Basketball Association and State Farm, which did a similar stunt this summer during the NBA draft — local lawmakers are ticked off about it nonetheless.

"I think it’s outrageous to be spoiling our city’s skyline for private profit," Brad Hoylman, a state senator who represents Manhattan's West Side in the New York State Legislature, told the local news site. "It’s offensive to New Yorkers, to our local laws, to public safety, and to wildlife."

Freak Out

Indeed, as the NYC Audubon Society noted in a tweet, the Candy Crush crapshoot "could disrupt the flight patterns of thousands of birds flying through NYC, leading to collisions with buildings" as they migrate.

Beyond the harm this will do to birds and the annoyance it will undoubtedly cause the famously grumpy people of New York, the stunt is also going down with very little warning: Gothamist is one of the only news outlets even reporting on it ahead of time.

While most viewers will hopefully be able to figure out what's going on pretty quickly, the concept of seeing unknown aircraft above the skyline is a little too reminiscent of 9/11 for comfort — and if Candy Crush took that into consideration, they haven't let on.

So here's hoping this event shocks and awes Thursday night city-goers in a good way, and not in the way that makes them panic.

More drone warfare: Russia Accused of Pelting Ukraine Capital With "Kamikaze" Drones
