Daily Archives: July 12, 2021

AI Race: Why India is Lagging Behind the US and China in 2021? – Analytics Insight

Posted: July 12, 2021 at 7:52 am

As the second most populated country in the world, India is still lagging behind the US and China in the AI race, even in 2021. The US has held the top position in the AI race for a long time, while China is determined to take it over. These two countries are currently the leaders in artificial intelligence, with proper infrastructure for R&D. It is well known that India is a developing country while the US and China are developed countries. But there are other reasons why India lags behind these two countries in the AI race. This article explores a few of them to give readers a better understanding.

India has a thriving domestic market with skilled workers in 2021. But the market lacks reputed technology companies that invest time in R&D to innovate machines and models with cutting-edge technologies. The US has Google and Microsoft, and China has Baidu and Alibaba, creating new innovations for the welfare of society. Google, IBM, Microsoft, and many other reputed tech companies have extended their markets into India and are recruiting Indian employees for better productivity. But India lags behind in the AI race because the country lacks driven, obsessive entrepreneurs like Elon Musk and Jeff Bezos.

India is one of the most educated countries in the world, with numerous educational institutions. The education sector is gradually integrating technical courses and curricula in artificial intelligence, machine learning, and robotics in the form of mechatronics. Students tend to know only the five traditional engineering courses, but there is a wide array of engineering disciplines in these disruptive technological fields. They are slowly taking an interest in these fields thanks to greater exposure through globalization and digitization. Only a handful of Ph.D. scholars or engineers are highly interested in developing new machines with these cutting-edge technologies. Thus, it will take some time for India to educate students, inspire them to enter the field of artificial intelligence, and innovate new AI models efficiently and effectively.

Another reason India lags behind the US and China in the AI race is the lack of published research papers. China and the US have each published more than 15,000 AI research papers in recent years. It is observed that average US research quality is better than that of China or the EU. The US is becoming the world leader in designing AI chips for smart systems. India has integrated artificial intelligence and machine learning only in the field of computer science. India needs to boost research tax incentives and expand its array of public research institutions working on AI research. This will help create more efficient machine learning algorithms and take a lead in the AI race.

China exercises strict control over its population with specific rules and regulations, allowing it to manage the data explosion efficiently. The country receives sufficient and appropriate volumes of real-time data to train artificial intelligence models. It is a strategic priority to drive Chinese tech companies to create a plethora of potential AI applications with this data. Meanwhile, India has neither such control over its population nor full documentation of its citizens. The rural sector still lacks proper internet connectivity, leading to a digital divide in the country. It is therefore more difficult for India to gather appropriate data from both urban and rural sectors.

The Indian government needs to articulate an ambitious mission on artificial intelligence to drive more innovation. The government must understand that India needs artificial intelligence to drive success and revenue in the near future. Multiple sectors need AI to boost productivity, and certain progressive start-ups are growing in the domestic market to help these industries.

But it is better late than never. India should recognize the power of the useful knowledge contained in the enormous sets of data that digitization has made available. This knowledge will help the country solve its own problems and achieve its five-year plans efficiently and effectively. A key focus should be developing advanced, modern data infrastructure to stay in the AI race with the US and China.


The new world of work: You plus AI – VentureBeat

Posted: at 7:52 am

Join AI & data leaders at Transform 2021 on July 12th for the AI/ML Automation Technology Summit. Register today.

Emerging technologies meet both advocates and resistance as users weigh the potential benefits against the potential risks. To successfully implement new technologies, we must start small, in a few simplified forms, fitting a small number of use cases to establish proof of concept before scaling usage. Artificial intelligence is no exception, but it comes with the added challenge of intruding into the cognitive sphere, which has always been the prerogative of humans. Only a small circle of specialists understands how this technology works; therefore, more education for the broader public is needed as AI becomes more and more integrated into society.

I recently connected with Josh Feast, CEO and cofounder of Boston-based AI company Cogito, to discuss the role of AI in the new era of work. Here's a look into our conversation.

Igor Ikonnikov: Artificial intelligence can be an incredibly powerful tool, as you know from your experience founding and growing an AI-based company. But plenty of people have expressed concerns about its impact on the workforce and whether this new technology will replace them one day. So let's cover that topic first: Do you have any concerns about AI coming for jobs?

Josh Feast: You're right, this question has been asked many times in recent years. I believe it is time to focus on how we can shape the AI-human relationship to ensure we're happy with the outcome, rather than being bystanders to an uncertain future. What I mean is, we're living in a world where humans and machines are, and will continue to be, working alongside each other. So, instead of fighting technological progress, we must embrace and harness it. Our emotionality as humans will always ensure we remain key assets in the workplace, even as companies deploy AI technology to revolutionize the modern enterprise. The idea is not to replace humans but to augment, or simply help, them with technology.

David De Cremer, Provost's Chair and Professor at NUS Business School, and Garry Kasparov, chairman of the Human Rights Foundation and founder of the Renew Democracy Initiative, agree. They previously explained, "The question of whether AI will replace human workers assumes that AI and humans have the same qualities and abilities but, in reality, they don't. AI-based machines are fast, more accurate, and consistently rational, but they aren't intuitive, emotional, or culturally sensitive. It is in combining the strengths of AI and humans that we can be even more effective."

Ikonnikov: The last 15 months have been disruptive in many ways, including the steep increase in both the value of in-person interactions and the need for a higher degree of automation. Is this the opportunity to combine those strengths?

Feast: More than a year in, with remote work now the norm for millions of people, almost everything we do is digitized and mediated by technology. We've seen improvements in efficiency and productivity, but also a growing need to fill the empathy deficit and increase energy and positive interactions. In other words, AI is already working in symbiosis with humans, so it's up to us to define what we want that partnership to look like going forward. This consideration requires an open mind, active optimism, and empathy to see the full potential of the human-AI relationship. I believe this is where human-aware technology can play a big role in shaping the future.

Ikonnikov: Can you elaborate on what human-aware technology is?

Feast: Human-aware technology has the ability to sense what humans need in the moment to better augment our innate skills including the ability to respond to and support our emotional and social intelligence. It opens new doors for technological augmentation in new areas. An example of this today is smart prosthetics, which lean on human-machine interfaces that help prosthetic limbs truly feel like an extension of the body, like the robotic arm being developed at Johns Hopkins Applied Physics Laboratory. Complete with humanlike reflexes and sensations, the robotic arm contains sensors that give feedback on temperature and vibration, as well as collect the data to mimic what human limbs are able to detect. As a result, it responds much like a normal arm.

The same concept applies to humans working at scale in an enterprise, where a significant part of our jobs involves collaborating with other people. Sometimes, in these interactions, we miss cues, get triggered, or fail to see another person's perspective. Technology can support us here as an objective recognizer of patterns and cues.

Ikonnikov: As we continue to leverage this human-aware AI, you've said we must find a balance between machine intelligence and human intelligence. How does that translate to the workplace?

Feast: Finding that balance and optimizing for it to successfully address workplace challenges requires several levers to be pulled.

In order to empower AI to help us, we must actively and thoughtfully shape the AI; the more we do so, the more helpful it will be to individuals and organizations. In fact, a team from Microsoft's Human Understanding and Empathy group believes that, with the right training, AI can better understand its users, more effectively communicate with them, and improve their interactions with technology. We can train the technology through processes similar to those we use to train people: rewarding it for achieving external goals, like completing a task on time, but also for achieving our internal goals, like maximizing our satisfaction. These are otherwise known as extrinsic and intrinsic rewards. By giving AI data about what works for us intrinsically, we increase its ability to support us.
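Feast's extrinsic/intrinsic framing can be sketched as a combined reward signal. This is purely an illustration of the concept; the signals and the 70/30 weighting below are invented for the sketch, not any real system's training objective:

```python
# Illustrative blend of an extrinsic reward (task completed on time)
# with an intrinsic reward (user-reported satisfaction in [0, 1]).
# The 0.7/0.3 weighting is an arbitrary choice for this sketch.

def combined_reward(task_done_on_time: bool, satisfaction: float,
                    w_extrinsic: float = 0.7) -> float:
    """Blend extrinsic task success with intrinsic satisfaction."""
    extrinsic = 1.0 if task_done_on_time else 0.0
    intrinsic = max(0.0, min(1.0, satisfaction))  # clamp to [0, 1]
    return w_extrinsic * extrinsic + (1 - w_extrinsic) * intrinsic

# A finished task with middling satisfaction outranks an unfinished
# task with perfect satisfaction under this weighting.
print(round(combined_reward(True, 0.5), 2))   # 0.85
print(round(combined_reward(False, 1.0), 2))  # 0.3
```

The point of the sketch is only that both terms enter the same objective, so an agent trained on it has a reason to care about user satisfaction, not just task completion.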

Ikonnikov: As the workplace evolves and AI becomes more ingrained in our daily workflows, what would the outcome look like?

Feast: Increased success at work will come when organizations leverage humans, paired with AI, to drive an enhanced experience in the moments that matter most. It is those in-the-moment interactions where the new wave of opportunity arises.

For example, in an in-person conversation, both participants initiate, detect, interpret, and react to each other's social signals in what some may call a conversational dance. This past year, we've all had to communicate over video and voice calls, challenging the nature of that conversational dance. In the absence of other methods of communication, such as eye contact, body language, and shared in-person experiences, voice (and now video) becomes the only way a team member or manager can display emotion in a conversation. Whether it's a conversation between an employee and a customer or between an employee and a manager, these are make-or-break moments for a business. Human-aware AI that is trained by humans, in the same way we train ourselves, can augment our abilities in these scenarios by supporting us when it matters and driving better outcomes.

Ikonnikov: There has been a big shift in AI conversations recently as it relates to regulations. The European Union, for example, unveiled a strict proposal governing the use of AI, a first-of-its-kind policy. Do you think AI needs to be regulated better?

Feast: Collectively, we have an obligation to create technology that is effective and fair for everyone; we're not here to build whatever can be built without limits or constraints when it comes to people's fundamental rights. This means we have a responsibility to regulate AI.

The first step to successful AI regulation is data regulation. Data is a pivotal resource that defines the creation and deployment of AI. We're already seeing unintended consequences of unregulated AI. For example, there isn't a level playing field across organizations when it comes to AI deployment, because there is a stark company-to-company difference in the amount and quality of data they hold. This imbalance will impact the development of technology, the economy, and more. We, as leaders and brands, must actively work with regulatory bodies to create common parameters to level the playing field and increase trust in AI.

Ikonnikov: How can creators of AI technology earn that trust?

Feast: We have to be focused on implementing ethical AI by delivering transparency into the technology and communicating a clear benefit to all users. This extends to supplying education and upskilling opportunities. We also have to actively mitigate the underlying biases of the models and systems deployed. AI leaders and creators must do extensive research on de-biasing approaches for examining gender and racial bias, for example. This is an important step to take on the path to increasing trust in AI and responsibly implementing the technology across organizations and populations.

We also must ensure there is opportunity given to creators of AI who are diverse themselves who have diverse demographics, immigration status, and backgrounds. It is the creators who define what problems we choose to address with AI, and more diverse creators will result in AI addressing a broader range of problems.

Without these parameters, without trust, we can't fully reap all the benefits of AI. On the flip side, if we get this right and, as creators of AI and leaders of related organizations, do the work to earn trust and thoughtfully shape AI, the result will be responsible AI that truly works in symbiosis with us, more effectively supporting us as we forge the future of work.


Sleep apnea AI tool uses 0s and 1s to increase ZZZs – Sanford Health News

Posted: at 7:52 am

Doctors at Sanford Health will soon use augmented intelligence to scan electronic medical records for dozens of factors that could indicate a patient suffers from obstructive sleep apnea.

Those 67 indicators include body mass index (BMI), age, gender, medical history, clinical symptoms, and blood work, among other factors. The physician also administers a sleepiness questionnaire asking people how tired they are throughout the day. Combined, the information yields a score of how likely the patient is to have the condition, which afflicts millions of Americans by reducing their airflow as they sleep.

Max Weaver, the Sanford Health business intelligence analyst who developed the tool, said the goal is simple.

"Helping physicians provide the best quality of care," he said. "This model does that by using mathematical modeling to narrow the population down. It's really from 1 million to perhaps 50,000. That kind of scale."

Kevin Faber, M.D., is chair and medical director of sleep medicine at Sanford Health in Fargo, North Dakota, and the project's provider champion. He said it's a powerful way to reduce unneeded testing, maximize physicians' time, and help patients.

"It's a tool to help identify risk. It's not the diagnosis. It doesn't replace the need for a sleep test. It doesn't replace the need for the sleep consult for many patients. But it's a tool that can help the primary care practitioner be ultra-efficient with his or her time, as they have precious few minutes with their patients and need to do the things that have the biggest impact," he said.

"This tool will allow them to then identify those patients at highest risk, so we can treat them for a condition that they didn't know they had."

Unlike many medical conditions accompanied by pain, discomfort or visible symptoms that prompt people to go to the doctor, most sleep apnea sufferers are unaware of it, Dr. Faber said.

"The problem is we need a way to identify the group of people who don't know they have the condition and therefore don't know to seek care," he said.

"Intrinsic sleep disorders like sleep apnea are typically unknown to the patient because they're happening at a time when the patient can't be aware of them. And the moment they could be aware of it, once they wake up, the problem is instantly gone."

It's especially difficult if the person doesn't have a bed partner or someone who is with them when they're sleeping to let them know they regularly stop breathing or snore loudly, Dr. Faber said.

"There are lots and lots of people who have no clue this is going on," he said of the roughly 1 in 5 patients who have it.

"Which means that we have tens of millions of people in our country alone, including at least hundreds of thousands in the Sanford footprint, who likely have at least mild obstructive sleep apnea."

That's why health care providers need better, more efficient tools to spot possible sleep disorders, so a doctor can quickly and accurately diagnose them and prescribe a treatment, Dr. Faber said.

He likens sleep apnea to being the base, a contributing cause, of a whole pyramid of metabolic, cardiovascular and neurocognitive health problems. When people are finally diagnosed, only then do some realize how much their poor sleep contributed to overall poor health.

"They don't know that untreated sleep apnea is what is causing their blood pressure to be so difficult to control, or their diabetes to be so difficult to manage. That it impacts their depression or their anxiety so much, and that's why they're having a harder time controlling it," Dr. Faber said. "Sleeping pills don't help, because the issue isn't that you need to sedate your brain. The issue is you can't stay asleep because you stop breathing over and over."

That's where big data and tools like this AI project come in, he said.

Each person's electronic medical record already stores countless vital signs, laboratory results, medical history entries, and other data. The artificial intelligence filters through all that information and ranks each person's chance of having sleep apnea as low, medium, or high. Smoking, for example, is a risk factor that applies to some patients but not to those who don't smoke.

"If you have only a couple of those risk factors, you would be at very low risk. If you had 50 out of those 67, you'd imagine, 'Holy smokes, they're at a much higher risk.' There's a weighting of each of those risk factors. Each has a different impact," Dr. Faber said.

Besides showing the overall risk, the tool displays the top five factors driving the score in that patient, which will change over time if the person, for example, stops smoking, loses weight or controls their diabetes.
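The mechanics described above, a weighted sum over many indicators, bucketed into a low/medium/high rank, with the top contributing factors surfaced for the provider, can be sketched in a few lines. The factor names, weights, and thresholds below are invented for illustration; Sanford's actual 67-indicator model is not public:

```python
# Toy sketch of weighted risk scoring with top-factor attribution.
# Factor names, weights, and band thresholds are illustrative
# inventions, not Sanford Health's actual model.

def risk_score(patient, weights):
    """Sum the weight of every risk factor present in the record."""
    contributions = {f: w for f, w in weights.items() if patient.get(f)}
    return sum(contributions.values()), contributions

def risk_band(score):
    """Bucket the raw score into low / medium / high."""
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

def top_factors(contributions, n=5):
    """Return the n factors contributing most to the score."""
    return sorted(contributions, key=contributions.get, reverse=True)[:n]

weights = {"high_bmi": 3.0, "smoker": 2.0, "hypertension": 1.5,
           "age_over_50": 1.0, "loud_snoring": 2.5, "male": 0.5}

patient = {"high_bmi": True, "smoker": True, "loud_snoring": True}
score, contrib = risk_score(patient, weights)
print(risk_band(score), top_factors(contrib))
# high ['high_bmi', 'loud_snoring', 'smoker']
```

Re-running the score monthly against the updated record is what makes the top-factor list shift when a patient stops smoking or loses weight, as described above.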

"This AI algorithm automatically adjusts all of that," Dr. Faber said. "For the primary care provider who wants to know, 'Why is my patient at high risk?' he or she can simply mouse over the icon and there are your top five risk factors for that patient. Six months later those top five might be different. There will be an automatic re-analysis of every patient's risk every month."

Once a patient is identified as being at higher risk, the provider may either refer them to the sleep clinic for evaluation and a traditional sleep study, or they may order a home sleep test conducted through their primary clinic. These tests monitor the patient's sleep to count how many times they stop breathing during the night.

"It's not feasible, for the number of patients we have in the Sanford Health population, to administer a traditional sleep study on every patient," Weaver said. "So that provider, rather than sifting through hundreds of data points about one patient, let alone all the patients they have, can see who's at the highest risk to administer that sleep apnea test. It really narrows it down."

Patients with mild sleep apnea likely won't need an additional sleep study if the home sleep test identifies the problem and the initial treatment is effective and well-tolerated, Dr. Faber said. That saves them time and money. It also prevents unnecessary delays in treatment, along with the additional travel and associated expenses, he said.

The main treatment options were once weight loss and continuous positive airway pressure (CPAP) therapy. Now the options also include oral appliances that move the lower jaw forward, some surgeries, and Inspire therapy, which uses an implanted device for those who don't tolerate CPAP.

"All of this stuff is not to simply identify who's at risk but to find whatever the right treatment for their apnea is, which is going to vary from one person to another," Dr. Faber said.

"I have some amazing stories, tear-jerking stories actually, of the success that getting rid of moderate to severe sleep apnea can have on somebody's life, someone who was previously unable to treat it because they couldn't tolerate having a mask on their face."

Weaver has validated the model and received Sanford Health stakeholder approval, and he is now working with the technology team to add the AI tool to the electronic medical record system. Weaver said he's unaware of anything else like it on the market.

"It's a big population health initiative that has the potential to help not only the health and welfare and quality of life of the people in the Sanford footprint," Dr. Faber said. "But it also helps to lower the cost of care, because fewer people have uncontrolled diabetes, hypertension, heart attacks, and strokes, all those things that cost the health system, the health plan, and therefore individual patients more money."


Artificial Intelligence Is On The Side Of Apes? Tesla-Fame’s AI-Based ETF Sells Facebook, Walmart And Buys AMC – Markets Insider

Posted: at 7:52 am

The Qraft AI-Enhanced US Large Cap Momentum ETF (NYSE:AMOM), an exchange-traded fund driven by artificial intelligence, has sold a majority of its holdings in Facebook Inc. (NASDAQ:FB) and Walmart Inc. (NYSE:WMT), while loading up on shares in AMC Entertainment Inc. (NYSE:AMC).

What Happened: The ETF's latest portfolio, after rebalancing in early July, showed that the fund has also sold major chunks of its holdings in, or entirely divested from, home retailer Home Depot Inc. (NYSE:HD), software company Adobe Inc. (NASDAQ:ADBE), and chipmaker Texas Instruments Inc. (NASDAQ:TXN).

The fund has a history of accurately predicting the price movements of electric vehicle maker Tesla Inc.'s (NASDAQ:TSLA) shares.

The ETF now has online dating services provider Match Group Inc. (NASDAQ:MTCH), cybersecurity solutions company Fortinet Inc. (NASDAQ:FTNT), and auto parts retailer O'Reilly Automotive Inc. (NASDAQ:ORLY) as its three largest investments.

Match Group has a 3.65% weighting in the AMOM portfolio, followed by Fortinet and O'Reilly with a 3.5% weighting each.

The other two stocks that make up the top five holdings in AMOM include auto parts retailer AutoZone Inc. (NYSE:AZO) with a 3.1% weighting and enterprise technology company Zebra Technologies Corp. (NASDAQ:ZBRA) with 2.7%.

AMC Entertainment was added to the portfolio this month with a 2.34% weighting. The movie theater chain's stock is up 2,078% year-to-date thanks to a short squeeze conducted by retail investors who refer to themselves as "apes."

Prior to the rebalancing, the ETF had Facebook, Walmart, Home Depot, Adobe and Texas Instruments as its five largest stock holdings.


Why It Matters: AMOM, a product of South Korea-based fintech group Qraft, tracks 50 large-cap U.S. stocks and reweighs its holdings each month. The fund uses AI technology to automatically search for patterns that have the potential to produce excess returns and construct actively managed portfolios.
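Qraft does not disclose its models, but the monthly momentum-style reweighting described above can be caricatured as ranking holdings by trailing return and weighting them in proportion to positive momentum. The tickers below match the article's top holdings; the return figures and the weighting rule are made up for illustration:

```python
# Toy monthly momentum reweighting: weight each stock in proportion
# to its trailing return, dropping negative-momentum names.
# Return figures and the rule itself are illustrative inventions;
# Qraft's actual AI models are proprietary.

def momentum_weights(trailing_returns):
    """Map trailing returns to portfolio weights that sum to 1.
    Names with negative momentum get weight 0."""
    scores = {t: max(r, 0.0) for t, r in trailing_returns.items()}
    total = sum(scores.values())
    return {t: s / total for t, s in scores.items()}

# Hypothetical 3-month trailing returns; "XYZ" stands in for a
# holding that would be cut at the rebalance.
returns_3m = {"MTCH": 0.20, "FTNT": 0.18, "ORLY": 0.12, "XYZ": -0.05}
weights = momentum_weights(returns_3m)
print(round(weights["MTCH"], 2), weights["XYZ"])  # 0.4 0.0
```

A real system would add pattern-recognition features, risk limits, and turnover constraints on top of a rule like this; the sketch only shows the rank-and-reweight skeleton.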

AMOM has delivered year-to-date returns of almost 15.1%, compared to its benchmark, the Invesco S&P 500 Momentum ETF (NYSE:SPMO), which has returned 14.4% so far this year.

The fund said last week that it has surpassed an important milestone of $50 million in assets under management (AUM), an increase of nearly 1,500% from its $4.22 million total in August last year.

Price Action: Match Group shares closed almost 2.8% higher in Friday's trading session at $162.63, while Fortinet shares closed 1.5% higher at $256.81.

O'Reilly Automotive shares closed 1.7% higher in Friday's trading session at $591.65.


Photo by Samantha Celera on Flickr


Google CEO Sundar Pichai Cautions the Dangers of Open Web; AI, Quantum Computing to be Highlight for the Next Few Years – Tech Times

Posted: at 7:52 am

Google CEO Sundar Pichai warns that the attack on the open internet has been persisting globally. The company CEO also said that in the upcoming years, the power of quantum computing and artificial intelligence would take over the world.

(Photo: Getty Images for Greentech Festival) Sundar Pichai speaks as part of SWITCH GREEN during day 1 of the Greentech Festival at Kraftwerk Mitte, aired on September 16, 2020, in Berlin, Germany.

In a recent interview with the BBC, the CEO said that many countries have been exploiting the internet that is supposed to be free for everyone. In addition, some of them limit the dissemination of information, which is often what is really happening behind the scenes.

Nowadays, the transition from physical activities to online ones has been fast-moving, especially as the digital age of the internet continues to develop. Technological adoption has paved a path for more people to access the web, but sometimes it can endanger users' lives without them knowing.

Internet freedom might be at the tips of our fingers, yet the responsibility in using it is often displaced. Perhaps what Pichai wants is for us to be aware that no one is safe on the internet. Everyone is exposed to the risk of having their data stolen or being subjected to a lot of misinformation on social media platforms.

Besides warning about the threat circling the web, Pichai also tackled issues like data privacy, taxing technology, and more.


Back in 2018, the Google chief said that artificial intelligence is more profound than fire and electricity. On Sunday, July 11, The Telegraph reported on the same topic involving Pichai.

Pichai noted that AI and quantum computing are the two developments that will have a huge impact on everyone in the future.

Many machines are now capable of copying what humans can do. Surprisingly, some tools can even perform tasks better than a normal person. This is what AI does: since many activities are considered complicated, the machine is assigned to handle them.

While AI can do good for humans, it can also produce some bad impacts. In 2017, Elon Musk said that the "biggest risk" people face is artificial intelligence. Since artificial intelligence can yield fake outcomes, the South African-born tycoon suggested it would be better for the government to regulate its usage.

As for quantum computing, some technologists, together with the Google CEO, support the idea that the technology will not work for everything. It may only be applicable to certain methods, but knowledge about quantum computing could bring new solutions to the world.

In a report by The New York Times, via Inc. Magazine, Google boss Sundar Pichai has been receiving a lot of criticism about his style of leading the company.

Additionally, many employees have complained about his slow decision-making, which results in delayed business action. Specifically, it could take as long as a year for Pichai to assign a particular person to a vacant role at Google. The complainants also pointed to the company's failure to acquire Shopify.



Written by Joseph Henry



IIT Madras develops AI-based algorithm to identify cancer-causing alterations – BSI bureau

Posted: at 7:52 am

The technique will tackle the complexity and size of DNA sequencing datasets and can greatly help in pinpointing key alterations in the genomes of cancer patients

Indian Institute of Technology Madras researchers have developed an artificial intelligence-based mathematical model to identify cancer-causing alterations in cells. The algorithm uses a relatively unexplored technique of leveraging DNA composition to pinpoint genetic alterations responsible for cancer progression.

The research was led by Prof B Ravindran, Head of RBCDSAI and Mindtree Faculty Fellow at IIT Madras, and Dr Karthik Raman, Faculty Member at the Robert Bosch Centre for Data Science and AI (RBCDSAI), IIT Madras, and also Coordinator of the Centre for Integrative Biology and Systems Medicine (IBSE), IIT Madras. Shayantan Banerjee, a Master's student at IIT Madras, performed the experiments and analysed the data. The results were recently published in the reputed peer-reviewed international journal Cancers.

Explaining the rationale behind this study, Ravindran said, "One of the major challenges faced by cancer researchers involves the differentiation between the relatively small number of driver mutations that enable the cancer cells to grow and the large number of passenger mutations that do not have any effect on the progression of the disease."

The researchers hope that the driver mutations predicted through their mathematical model will ultimately help discover potentially novel drug targets and will advance the notion of prescribing the right drug to the right person at the right time.

Elaborating on the need for developing this technique, Dr Raman said, "In most previously published techniques, researchers typically analysed DNA sequences from large groups of cancer patients, compared sequences from cancer cells as well as normal cells, and determined whether a particular mutation occurred more often in cancer cells than would be expected at random. However, this frequentist approach often missed out on relatively rare driver mutations."

Dr Raman further said, "Detecting driver mutations, particularly rare ones, is an exceptionally difficult task, and the development of such methods can ultimately accelerate early diagnoses and the development of personalised therapies."

In this study, the researchers decided to look at this problem from a different perspective. The main goal was to discover patterns in the DNA sequences, made up of the four letters, or bases, A, T, G and C, surrounding a particular site of alteration.

The underlying hypothesis was that these patterns would be unique to each of the two types of mutations, drivers and passengers, and therefore could be modelled mathematically to distinguish between the two classes. Using sophisticated AI techniques, the researchers developed a novel prediction algorithm, NBDriver, and tested its performance on several open-source cancer mutation datasets.
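The article does not describe NBDriver's actual features or model, but the core idea, representing the sequence neighbourhood of a mutation numerically and asking which class it most resembles, can be sketched roughly as follows. Everything here (the 2-mer composition features, the nearest-centroid rule, the toy GC-rich versus AT-rich sequences) is an illustrative assumption, not the published method:

```python
from collections import Counter
from itertools import product

BASES = "ATGC"

def kmer_profile(seq, k=2):
    """Normalised k-mer counts for the sequence window around a mutation site."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values()) or 1
    return {"".join(p): counts["".join(p)] / total for p in product(BASES, repeat=k)}

def centroid(profiles):
    """Average k-mer profile of a set of training windows."""
    keys = profiles[0].keys()
    n = len(profiles)
    return {key: sum(p[key] for p in profiles) / n for key in keys}

def classify(window, driver_centroid, passenger_centroid):
    """Assign whichever class has the closer average profile (L1 distance)."""
    prof = kmer_profile(window)
    def dist(c):
        return sum(abs(prof[key] - c[key]) for key in c)
    return "driver" if dist(driver_centroid) <= dist(passenger_centroid) else "passenger"

# Toy training windows (hypothetical): drivers GC-rich, passengers AT-rich.
drivers = [kmer_profile(s) for s in ["GCGCGGCC", "CCGCGGGC"]]
passengers = [kmer_profile(s) for s in ["ATATTAAT", "TTATAATA"]]
d_c, p_c = centroid(drivers), centroid(passengers)

print(classify("GGCCGCGG", d_c, p_c))  # GC-rich window -> "driver"
```

The point of the sketch is only that sequence composition alone, with no mutation-frequency counting across patients, can separate the two classes when their neighbourhoods really do differ.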

Read more:

IIT Madras develops AI-based algorithm to identify cancer-causing alterations - BSI bureau


AI legislation must address bias in algorithmic decision-making systems – VentureBeat



In early June, border officials quietly deployed the mobile app CBP One at the U.S.-Mexico border to streamline the processing of asylum seekers. While the app will reduce manual data entry and speed up the process, it also relies on controversial facial recognition technologies and stores sensitive information on asylum seekers prior to their entry to the U.S. The issue here is not the use of artificial intelligence per se, but what it means in relation to the Biden administration's pre-election promise of civil rights in technology, including AI bias and data privacy.

When the Democrats took control of both House and Senate in January, onlookers were optimistic that there was an appetite for a federal privacy bill and legislation to stem bias in algorithmic decision-making systems. "This is long overdue," said Ben Winters, Equal Justice Works Fellow at the Electronic Privacy Information Center (EPIC), who works on matters related to AI and the criminal justice system. "The current state of AI legislation in the U.S. is disappointing, [with] a majority of AI-related legislation focused almost solely on investment, research, and maintaining competitiveness with other countries, primarily China," Winters said.

But there is some promising legislation waiting in the wings. The Algorithmic Justice and Online Platform Transparency bill, introduced by Sen. Edward Markey and Rep. Doris Matsui in May, clamps down on harmful algorithms, encourages transparency in websites' content amplification and moderation practices, and proposes a cross-government investigation into discriminatory algorithmic processes throughout the economy.

Local bans on facial recognition are also picking up steam across the U.S. So far this year, bills or resolutions related to AI have been introduced in at least 16 states. They include California and Washington (accountability from automated decision-making apps); Massachusetts (data privacy and transparency in AI use in government); Missouri and Nevada (technology task force); and New Jersey (prohibiting certain discrimination by automated decision-making tech). Most of these bills are still pending, though some have already failed, such as Maryland's Algorithmic Decision Systems: Procurement and Discriminatory Acts.

The Wyden Bill from 2019 and more recent proposals, such as the one from Markey and Matsui, provide much-needed direction, said Patrick Lin, director of the Ethics + Emerging Sciences Group at California Polytechnic State University. "Companies are looking to the federal government for guidance and standards-setting," Lin said. "Likewise, AI laws can protect technology developers in the new and tricky cases of liability that will inevitably arise."

Transparency is still a huge challenge in AI, Lin added: "They're black boxes that seem to work OK even if we don't know how, but when they fail, they can fail spectacularly, and real human lives could be at stake."

Though the Wyden Bill is a good starting point to give the Federal Trade Commission broader authority, requiring impact assessments that include considerations about data sources, bias, fairness, privacy, and more, it would help to expand compliance standards and policies, said Winters. "The main benefit to [industry] would be some clarity about what their obligations are and what resources they need to devote to complying with appropriate regulations," he said. But there are drawbacks too, especially for companies that rely on fundamentally flawed or discriminatory data, as it would be hard for them to accurately comply without endangering their business or inviting regulatory intervention, Winters added.

Another drawback, Lin said, is that even if established players support a law to prevent AI bias, it isn't clear what bias looks like in terms of machine learning. "It's not just about treating people differently because of their race, gender, age, or whatever, even if these are legally protected categories," Lin said. "Imagine if I were casting for a movie about Martin Luther King, Jr. I would reject every actor who is a teenage Asian girl, even though I'm rejecting them precisely because of age, ethnicity, and gender. Algorithms, however, don't understand context."

The EU's General Data Protection Regulation (GDPR) is a good example to emulate, even though it is aimed not at AI specifically but at underlying data practices. "GDPR was fiercely resisted at first, but it's now generally regarded as a very beneficial regulation for individual, business, and societal interests," Lin said. "There is also the coercive effect of other countries signing an international law, making a country think twice or three times before it acts against the treaty and elicits international condemnation. Even if the US is too laissez-faire in its general approach to embrace guidelines [like the EU's], they still will want to consider regulations in other major markets."

The rest is here:

AI legislation must address bias in algorithmic decision-making systems - VentureBeat


Die as a human or live forever as a cyborg: Will robots rule the world? – Sydney Morning Herald



In movies, they're the bad guys: killer cyborgs with bones of steel and lightning-fast reflexes, perhaps an Austrian accent too. But Peter Scott-Morgan has never been afraid of robots. A scientist and roboticist by trade, he spent decades researching how artificial intelligence (AI) might transform our lives.

Then, in 2017, Dr Scott-Morgan was diagnosed with motor neuron disease, the same paralysing condition that killed Stephen Hawking. Months after puzzling over his wonky foot falling asleep, he was told he had two years to live.

He had other ideas. To survive, he would turn to the technology he had spent his career researching. He would become the cyborg. Scott-Morgan has now had two major surgeries to help keep himself alive with robotics: machine upgrades that breathe for him, help him speak, and hopefully will even see him stand again as the advancing paralysis traps him inside his body. He plans eventually to merge his brain with AI too, so he can speak with his thoughts rather than the flicker of his eyes. "And I'm OK with giving up some control to the AI to stay me," he says. "Though that might change what it means to be human ... There's a long tradition of scientists experimenting on themselves. But die as a human or live as a cyborg? To me, it's a no-brainer."

But what about the rest of us? Is humanity destined to merge with machine? We keep hearing that the robots are coming to take our jobs, but how likely are they to stage a coup? And why are Facebook and Elon Musk already building machines to read our thoughts?

Illustration: Matt Davidson

A century ago, a Spanish scientist mapped the human brain and uncovered a hidden kingdom. As microscopes began to peer deeper into that mass of little grey cells, Santiago Cajal laid bare the wiring within, so dense he called it a jungle. It is from his detailed drawings that the world understood neurons for the first time, and how they exchange information in a tangled network, giving rise to the senses, the emotions and possibly even consciousness itself.

Decades later, a philosopher and a young, homeless mathematician wondered if that network could be broken down into the most fundamental binary of logic: true or false. Neurons could, after all, be considered on or off, firing a signal or not. This theory, by Warren McCulloch and Walter Pitts at the University of Chicago, proved to be an incomplete model of the human brain, too simple to capture all the strange magic really going on inside. But it did give rise to the binary code of computers: those ones and zeroes now form infinite variations of on or off to tell machines what to do. Scientists have been trying to bring computers closer to human brains, at least in function, ever since.

Because machines interpret the world through this binary code, and algorithms (rules made from that code), they are good at a lot of specific things we find difficult, such as solving complex equations fast (and playing chess better than a grandmaster). Yet they often struggle with the mundane things we, with our more complex, adaptable thinking centres, find easy: recognising facial expressions, making small talk and, most of all, improvising.

To overcome this, machine learning models seek to train computers to categorise and then react to things themselves rather than waiting on human programming. Over the past decade, one such model, known as deep learning, has charged beyond the rest, fuelling an AI boom. It's why your iPhone can recognise your face and Alexa understands you when you ask her to switch on the lights. And deep learning did it by going back to Cajal's neural jungle. The learning is said to be "deep" because a machine is trained to classify patterns by filtering incoming information through layers of interconnected, neuron-like nodes.
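As a toy illustration of that layered filtering (not real deep learning, which involves training millions of weights on large datasets), a minimal feed-forward pass through neuron-like nodes might look like the sketch below; the layer sizes and random, untrained weights are arbitrary assumptions:

```python
import math
import random

random.seed(0)

def layer(inputs, weights, biases):
    """One layer: each 'neuron' sums its weighted inputs, then applies a nonlinearity."""
    return [math.tanh(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward(x, layers):
    """Filter the input through each layer in turn."""
    for weights, biases in layers:
        x = layer(x, weights, biases)
    return x

def rand_layer(n_in, n_out):
    """Random, untrained weights (illustrative only)."""
    return ([[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)],
            [random.uniform(-1, 1) for _ in range(n_out)])

# A tiny 3-input, 4-hidden, 2-output network.
net = [rand_layer(3, 4), rand_layer(4, 2)]
print(forward([0.5, -0.2, 0.8], net))  # two output activations, each in (-1, 1)
```

Training would consist of nudging those weights until the outputs match known examples; the structure, layers of simple units passing signals forward, is the part the article's "neural jungle" analogy refers to.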

"I'm sorry, Dave, I'm afraid I can't do that." In the 1968 sci-fi classic 2001: A Space Odyssey, a computer called HAL (Heuristically programmed ALgorithmic computer) takes over a spaceship.

While these artificial networks take a staggering amount of data to train compared to a human brain, experts such as Scott-Morgan hope they will only get better and more efficient as computing power increases (it is roughly doubling every two years). Already, AI can translate speech, trade stock, and perform surgery (under supervision). Since his own surgical journey was documented in the British documentary Peter: The Human Cyborg, Scott-Morgan has been upgrading to a very Hollywood cyborg-like interface that uses AI to track the movement of his eyes across its screen with tiny cameras and then offers up phrases for his robot voice to say: predictive text based on the letters he has spelt out so far.

As UNSW professor of AI Toby Walsh points out, machines are not limited by biological processing speeds the way humans and animals are. But others suspect that the capability of even this kind of AI is about to hit a wall. At the University of Sheffield, computer scientist James Marshall says deep learning networks are still based on "a cartoon of how the [human] brain works". They are not really making decisions, because they do not understand for themselves what matters and what doesn't. That means they're fragile. To tell a picture of a cat from a dog, for example, an AI needs to sift through a huge trove of images. It might pick up tiny changes that would escape the notice of a human, such as a few pixels out of place, but these tiny changes usually don't matter much to us, because we understand the main features that set a cat apart from a dog. "But suddenly you change some pixels and the AI thinks it's a dog," Marshall says. "Or if it sees a drawing of a cat or a cat in real life [in 3D] it might have to start from scratch again."

The tendency of AI, however powerful, to break in unexpected ways is part of the reason those driverless cars we keep being promised are yet to arrive. Machines can even be fooled into seeing things that aren't really there: driverless cars tricked into accelerating past stop signs when the addition of a few stickers on the sign makes them instead perceive increased speed limits, or facial recognition programs duped into skipping past suspects wearing wigs and glasses.

Any AI network is vulnerable to this kind of manipulation, and if hackers know its weak points they can do more than break it; they can hijack it to perform a new task entirely. Of course, AI can be trained to identify and resist this kind of sabotage too, but, at some point, it will encounter a problem it hasn't prepared for.
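A minimal sketch of why a few small pixel changes can flip a classifier's answer, using a hypothetical linear "cat vs dog" scorer rather than any real vision model: nudging every pixel slightly in the direction that lowers the score moves the input across the decision boundary, even though no single pixel changed much.

```python
# Hypothetical linear scorer: score > 0 means "cat", otherwise "dog".
w = [0.4, -0.3, 0.25, 0.5, -0.2, 0.35]   # "learned" weights (made up)
x = [0.1, 0.2, 0.3, 0.15, 0.25, 0.1]     # an "image" as a flat list of pixels

def score(img):
    return sum(wi * xi for wi, xi in zip(w, img))

def label(img):
    return "cat" if score(img) > 0 else "dog"

def perturb(img, eps):
    """Shift each pixel by eps against the sign of its weight,
    the direction that pushes the score toward the other class."""
    return [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, img)]

print(label(x))                # "cat"
adv = perturb(x, 0.2)
print(label(adv))              # "dog", though each pixel moved by only 0.2
```

Real attacks on deep networks work the same way in spirit, following the gradient of the model's output instead of the signs of fixed weights, which is why tiny sticker patterns on a stop sign can have an outsized effect.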

Perhaps a little paradoxically, some experts say that one way to give deep learning more common sense is to fuse it with the older, more rigid form of AI that came before it, in which machines used hard-coded rules to understand how the world worked. Others say deep learning needs to become more flexible yet: writing its own algorithms and programs to perform new functions as it needs to, even testing its actions in the real world through robotics (or at least very good simulators) to help it understand causality. Amazon's new line of Alexa assistants looks through a camera to better understand the world (and its owners).

"But I don't think [deep learning] will ever work for driverless cars," Marshall says. "When you have to build a more and more complicated machine for a fairly simple task, maybe the machine is built wrong."

Arnold Schwarzenegger (and his iconic Austrian accent) starred as a killer cyborg in The Terminator franchise.

Marshall is flying a drone around his lab. It's not bumping into walls, the way drones normally do when trying to distinguish one beige slab of office wallpaper from another. This drone has a tiny chip in its brain holding an algorithm borrowed from a honeybee, which tells it how to navigate the world the way the insect does.

At Marshall's lab in Sheffield, now a company offshoot of his university called Opteran, the team is trying something new: modelling machine thinking on animals. Marshall calls it "natural intelligence", not artificial intelligence. Autonomy, the kind driverless cars and robot vacuums need to navigate their surrounds, is a solved problem, he says. It happens all the time in the natural world. "We require very little brain power ourselves to drive; most of the time we're on autopilot."

Bees have a far less formidable number of neurons than humans, about a million next to our tens of billions, and yet they can still perform impressive behaviours: navigating, communicating and problem-solving. Marshall has been mapping their brains: training them to perform tasks such as flying through tunnels and then measuring their neural activity, making silicon models of different regions of their brain according to their function, and then converting that into algorithms his machines can follow.

"It's like a jigsaw puzzle," Marshall says. "We haven't mapped it all yet; even those million neurons still interact in really complex ways."

So far, he has converted into code how bees sense the world and navigate it, and is busy finalising algorithms from the decision-making centre of their brains. Unlike Cajal, he's not looking to record all the exquisite detail that keeps the brain alive. "We just need how it does the function we want. We don't just reproduce the neurons, we reproduce the computation."

When he first put his bee navigation algorithm in the drone, he was stunned at how much it improved: changing course as people moved around it, as walls came closer. "That's when we saw it could work," he says. "But because everyone is focused on deep learning, we decided to make our own company to scale it up."

Marshall is also mapping the brains of ants to improve ground-based robots, imagining a world in which autonomous devices are as common as computers, cleaning and improving the world around us. And as machines get smaller, smaller even than the head of a pin or the width of a human hair, scientists hope they may help fight disease in the body too, cleaning blood or killing cancer and infection. Perhaps one day these nanobots could even repair the nerves fraying apart in people with motor neuron disease such as Scott-Morgan, or keep humans alive longer.

Marshall hopes to eventually look into the brains of larger animals too, including primates. There scientists might find more complex functions again, beyond just autonomy and into advanced problem-solving, even moral reasoning. Still, just as Marshall is sure his robot bee is not a real bee, he doubts we'd be able to reproduce an entire human brain in silico and fire it up to see if some kind of consciousness springs to life. "A lot of this research comes out of that very question: could we just replicate the brain somehow, suppose we had a 3D printer," Marshall says. "But the brain isn't just its neurons, it's how it all interacts. And we still don't understand it yet."

In his latest book Livewired, US neuroscientist David Eagleman describes in new detail the plasticity of the human brain, where neurons fight for territory like drug cartels. There may even be a kind of evolution, a survival of the fittest being waged within our minds day to day, as new neural connections are forged. Quantum scientists, meanwhile, wonder if reactions are happening inside the brain, at its smallest scale, which we cannot even measure. How then could we ever hope to replicate it accurately? Or upload someones consciousness to a machine (another popular sci-fi plot)?

Will Smith battles another pesky AI that thinks it knows best (and a few thousand robots) in the 2004 film I, Robot.

Of all the renderings of AI in science fiction, few occupy the minds of real-world researchers like the singularity: a hypothetical (and some say inevitable) tipping point where machine intelligence growth becomes exponential, out of control. In the 1960s, British mathematician I.J. Good spoke of an "intelligence explosion", and everyone from Stephen Hawking to Elon Musk has since weighed in.

The theory is that as soon as we have a system as smart as a human, and we allow it to design a system superior to itself, we'll kick off a domino effect of ever-increasing intelligence that could shift the balance of power on Earth overnight. "Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded," Hawking told the BBC in 2014.

And, if AI were ever smart enough to be put in charge and make decisions for us, as is imagined in films such as I, Robot and The Matrix, what if its radical take on efficiency involved enslaving or powering down humans (i.e. mass murder)? Remember the glowing red eye of HAL, the AI in 2001: A Space Odyssey, who decided the best thing to do, when faced with a crisis far out in space, was to stage a mutiny against his human crew. Musk himself says that, for a powerful AI, wiping out the human race wouldn't be personal: if we stood in its way, it would be a matter of course, like squishing an ant hill to build a road.

When we refer to intelligence in machines, we usually mean we've taught a computer to do something that in humans requires intelligence, Walsh says. As of 2021, those smarts are still very narrow: beating a human in a game of chess, for example. AI enthusiasts point to machines helping write music or mimicking the styles of great painters as signs of burgeoning creativity, but such demonstrations still rely on considerable human input, and the results are often random or spectacularly bad. The limits of deep learning again mean true spontaneity, originality, is lacking. At IBM, Arvind Krishna imagines you could train an AI on images of what is and isn't beautiful, good art and bad art, for example, but that would still be training the AI on the creator's own tastes, not moulding a new artist for the world. Mostly, experts see machines becoming another tool to deepen human creativity and decision-making, revealing patterns and combinations that might otherwise have been missed.


Still, Walsh says there's no scientific or technical reason why the gap between human and machine intelligence couldn't close one day. "Every time we thought that we were special, that the sun went around the Earth, that we were different than the apes, we were wrong," he says. "So to think there's anything special about our intelligence, anything that we could not create and probably surpass in silicon, I think would be terribly conceited of the human race."

Indeed, machines have a lot of apparent advantages over us mere flesh bags, as Hawking alluded to. They're faster thinkers, with bigger, potentially infinite memories; they can network and interface in a way that would be called telepathy if a human could do it. And they're not limited by their physical bodies.

In Scott-Morgan's case, transforming into a cyborg has already come with unexpected benefits. He can no longer speak on his own ("I'm answering these questions long after my body has stopped working sufficiently well to keep me alive," he writes instead), but through his new robot voice he can communicate in any language. In May, his digital avatar even broke into song during a live interview with broadcaster Stephen Fry. His wheelchair, meanwhile, will soon allow him to stand, so he will tower over his fellow mortals, and hopefully, with the aid of an inbuilt AI, it will drive itself wherever Scott-Morgan wishes to go. ("I envision being able to speed through an obstacle course or safely make my way through a showroom of porcelain vases.")

The hair of his avatar is never out of place and "my powers will double every two years. I'll be a thousand times more powerful by the time I'm 80." He's working on programming in a maniacal laugh for his avatar, too.

Of course, because these AI networks are being built by humans, they may inherit the worst of us along with the best. We've seen this already on platforms such as Facebook and YouTube, where AI used to curate user content has been shown to veer sharply into extremism and misinformation, or in police surveillance networks learning their human developers' cultural prejudices. And, because AIs operate using complex mathematics, they are often themselves a black box, hard to scrutinise. Experts, including the late Hawking, have stressed that regulation and ethical frameworks must catch up fast to the technology, so we can maximise its social good, not just profit margins.

But what we may learn too is that there's a ceiling to how intelligent something can be. "The universe is full of fundamental limits," Walsh says. "It might not be [as simple as] we wake up one day and the computers can program themselves. I suspect that we will get smarter computers, but it will be the old-fashioned way, through our sweat, ingenuity and perseverance."

While Marshall doubts we'll ever create a machine that is itself conscious (along the lines of, say, the eloquently self-aware cyborgs in Blade Runner), he is wary of the new push for robots or algorithms that can evolve independently, designed to breed the way computer viruses spread now and so rewrite and advance their own programming. "I don't think that's the path," Marshall says. "I think we need to always know what it does, and, if it can evolve on its own, well, life finds a way ..."

How can you tell? Cyborgs called replicants are much like humans in the 1982 sci-fi film Blade Runner.

Rather than turning to one all-knowing AI to run the show, many experts think it more likely we will draw on the power of machines to improve our own thinking. If we had a better way to connect with computers, closer than our screens, futurists wonder, could we surf the internet with our minds, back up our memories to the cloud, even download ready-formed skills such as a second language, or another sense entirely, like echolocation or infrared vision?

In 2020, Elon Musk was ruling out none of this when he introduced the world to a pig called Gertrude and the coin-sized computer chip in her brain that he hoped would allow people to plug in directly to machines one day. "It's kind of like a Fitbit in your skull with tiny wires," Musk said, conceding "this is sounding increasingly like a Black Mirror episode". In 2021, a monkey with the same chip, made by Musk's company Neuralink, was shown playing a game of ping-pong using only his mind to control a joystick.

Labs, including military labs, around the world have been developing neural implants for more than a decade, mostly to help people with paralysis operate robotic limbs and those with epilepsy head off seizures. In 2016, an implant connected to a robotic arm even gave back the sensation of touch as well as movement to a man paralysed from the neck down; he used it to fist-bump President Barack Obama.

But this is still new technology, so far involving about 100 electrodes inserted into the brain that read its neural signals and send them wirelessly back to a machine. Neuralink's prototype has more than 1,000 electrodes, each smaller than a human hair, and makes grand claims of fast insertion into the skull using robotic surgery (with no need for even a general anaesthetic).

Plunging anything into the brain is risky and can cause damage. But in 2016 two neurologists at the University of Melbourne, Tom Oxley and Nicholas Opie, developed a clever technique to insert an implant without the need for open surgery, using, Oxley says, "the veins and blood vessels as the natural highway into the brain". They've just received $52 million in funding from Silicon Valley to run more clinical trials of their own chip, called the Stentrode, in the US. It's about the size of a paperclip, and in Melbourne it's helped patients with motor neuron disease text, email and bank online by thought alone.


Neuralink's end goal is to develop a non-invasive headset instead of a chip, but for now such external devices pick up a much weaker signal from the brain. Facebook, meanwhile, is looking at wearable wrist devices that would read your mind, literally, where nerves carry messages down to your hands, eventually allowing users to do away with the traditional mouse and keyboard and type at a speed of 100 words per minute just by thinking. Like Neuralink, helping patients with paralysis is its first goal, but it also plans to scale up to everyday users. Already, researchers funded by Facebook have managed to translate brain waves into speech with an accuracy rate of between 61 and 76 per cent (which beats Google Translate in some cases), using existing electrodes implanted in the brains of patients with epilepsy.

Some of this work being done by Facebook and Musk is "right out on the edge" for enhancement, says the chief executive of Bionics Queensland, Robyn Stokes, but it will likely benefit health applications along the way. Just as brain chips could become digital assistants of the mind, she imagines they could also help manage mental health conditions such as serious depression. "Those sorts of brain computer interfaces are really advancing quickly," she says, pointing to the Stentrode. She expects an implant that can perform many functions inside the body, beyond reading brainwaves, will soon follow.

Even then, there are still concerns. While the brain's now-famed plasticity could help it rewire around implants, for example, some experts warn it could also mean it quickly forgets how to perform important functions if they are taken over by machines. What then if something fails?

Peter Scott-Morgan tries out AI technology that tracks his eye movements to spell out his speech. Credit: Cardiff Productions

Still, enthusiasts, or transhumanists, imagine the next stage of human evolution will inevitably be technological: future generations can expect reinforced bones and improved brain power thanks to cybernetic upgrades. In the British drama Years and Years, a new parental nightmare plays out as a daughter announces she wants to upload her mind and live as a machine. ("I don't want to be flesh. I want to escape this thing and become digital.")

In his first book on robotics in 1984, long before his disease had emerged, Scott-Morgan himself considered how AI might unlock human potential, and vice versa. "AI on its own is like a brilliant jazz pianist, but without anyone to jam with," he says now. "It's nowhere near its full potential." The duet of human and AI, meanwhile, would seem "close to magic ... a mutually dependent partnership, not a rivalry". And, to his mind, it could well be the only route that doesn't lead to a dead end. "I anticipate that otherwise there'll be a crippling backlash against what's typically perceived as the uncontrolled rise of raw AI."

Scott-Morgan plans for his eye-controlled communication interface to rely more and more on its underlying AI to generate his speech. That means sometimes what comes out will not be what "biological Peter" was planning to say. "And I'm very comfortable with that. I keep reassuring [everyone] I have absolutely no qualms about technology potentially making me appear cleverer, or funnier, or simply less forgetful, than I was before."

Others imagine a greater fusion of robotics, especially nanotech, with animals too. Already parts of nature are being re-engineered as technology in the lab: from viruses repurposed as vaccines and computer chips that mimic the function of human organs to a robot-fish hybrid sent down as a deep-sea probe to collect data beneath the waves. Both the US and Russian armies have kitted out trained dolphins as underwater spies over the years, so perhaps it's no surprise military researchers have been looking at going further, even putting mind-controlling brain chips into sharks next. And, if bees die out, some experts say cyborg insects may be needed to pollinate plants in their place. All this again raises the strange question of when something is alive, or conscious, and whether we are building better robots or creating new life entirely.

The Terminator robots have no plans to co-exist with humans. They want the whole planet.

Even if we don't get shark cyborgs, low-cost lethal machines are already changing the face of warfare. Imagine fighter drones talking to one another to find bombing targets, instead of a human pilot back at a base. Or swarms of explosive drones slamming themselves into people and buildings.

These are not visions of the future but news stories from 2020. According to a recent UN report, Turkish drones packing explosives and facial recognition cameras were sent out by Libya's army in 2020 to eliminate rebels via swarm attack in Tripoli, without requiring a remote connection between drone and base. They were, effectively, hunting their own targets. And the tech on board was not much more impressive than what you'd find on a smartphone. Meanwhile, the Poseidon is a new class of robotic underwater vehicle, which Russia is said to have already built, that can travel undetected and launch cobalt bombs to irradiate entire coastal cities, all unmanned.


Machines that decide to kill like this, based on their sensors and a pre-programmed target profile, are making humanitarian groups increasingly nervous. The International Committee of the Red Cross wants the worlds governments to ban fully autonomous weapons outright. ICRC president Peter Maurer says they will make it difficult for countries to comply with international law, in effect substituting human decisions about life and death with sensor, software and machine processes.

Walsh agrees autonomous killer robots raise a host of ethical, legal and technical problems. If things go wrong or they break international law, who is held accountable? Should it be the programmer, the commander or the robot on trial for war crimes? Theyre not sentient, theyre not conscious, they cant have empathy, they cant be punished, Walsh says. And that takes us to a very, very dark place. It would be terribly destabilising and would change the speed and scale of war.

Of course, he adds, autonomous systems built for defence, such as the robots used to clear landmines, show that AI can reduce casualties in war too. And computers will continue to come online that can process battlefield data and make recommendations faster than humans ever could. "But [we need] human oversight, human judgment, which is still significantly better than machines, at least today," Walsh says.


He thinks we should ban lethal autonomous weapons as we have chemical and biological weapons (as well as blinding lasers and cluster munitions), with enforcement powers for the UN to check no rogue state is stepping out of line.

The problem is that such bans rarely happen before things get ugly. For chemical weapons, it took the horrors of the First World War.

"I'm fearful that we won't have the initiative to do the same here until we've seen such weapons being used," Walsh says. "A swarm of robot drones, hunting down humans and killing them mercilessly. It will look like a Hollywood movie."

Read more:

Die as a human or live forever as a cyborg: Will robots rule the world? - Sydney Morning Herald


The Pentagon Scrubs a Cloud Deal and Looks to Add More AI – WIRED

Posted: at 7:52 am

Late in 2019, the Pentagon chose Microsoft for a $10 billion contract called JEDI that aimed to use the cloud to modernize US military computing infrastructure. Tuesday, the agency ripped up that deal. The Pentagon said it will start over with a new contract that will seek technology from both Amazon and Microsoft, and that offers better support to data-intensive projects, such as enhancing military decisionmaking with artificial intelligence.

The new contract will be called the Joint Warfighter Cloud Capability. It attempts to dodge a legal and political mess that had formed around JEDI. Microsoft competitors Amazon and Oracle both claimed in lawsuits that the award process had been skewed. In April, the Court of Federal Claims declined to dismiss Amazon's suit alleging that bias against the company from President Trump and other officials had nudged the Pentagon to favor Microsoft, creating the potential for years of litigation.

The Pentagon announcement posted Tuesday didn't mention JEDI's legal troubles but said the US military's technical needs had evolved since it first asked for bids on the original contract in 2018. JEDI included support for AI projects, but the Pentagon's acting chief information officer, John Sherman, said in a statement that the department's need for algorithm-heavy infrastructure had grown still further.

"Our landscape has advanced, and a new way ahead is warranted to achieve dominance in both traditional and nontraditional war-fighting domains," Sherman said. He cited two recent AI-centric programs, suggesting that they would receive better support from the new contract and its two vendors.

One is called Joint All Domain Command and Control, which aims to link together data feeds from military systems across land, sea, air, and space so that algorithms can help commanders identify targets and choose among possible responses. In an Air Force exercise linked to the program last year, an airman used a VR headset and software from defense startup Anduril to order real air defenses to shoot down a mock cruise missile over White Sands Missile Range in New Mexico.

Sherman also suggested that JWCC would help a project announced last month to accelerate AI adoption across the Pentagon, including by creating special teams of data and AI experts for each of the agency's 11 top military commands.

The Pentagon's claim that it will better support advanced technology like AI projects shows President Biden's Pentagon continuing an emphasis on the military potential of artificial intelligence that began during the Obama administration and continued under President Trump. Successive secretaries of defense have said tapping that potential will require better connections with tech industry firms, including cloud providers and startups. However, some AI experts fear more military AI could have unethical or deadly consequences, and some tech workers, including at Google, have protested Pentagon deals.

Andrew Hunter, director of the Defense-Industrial Initiatives Group at the Center for Strategic and International Studies, says the Pentagon appears to have decided that because of its legal tangles, a reboot was the most efficient way to get the cloud computing resources the department has needed for some time.

Computing-dependent projects like the one seeking to link various military services and hardware are central to the Pentagon's strategy to face up to China. "The potential of cloud computing is to be able to apply sophisticated analytical techniques such as AI on your data so you can act with greater knowledge than adversaries," Sherman says.

JEDI was not the Pentagon's only cloud computing contract, but the speed with which its successor can get up and running could still have a significant effect on the Pentagon's cloud and AI dreams. Had all gone to plan, the initial two-year phase of JEDI was to have been completed in April. Hunter expects the department to try to finalize the contract quickly, but also to take care to avoid a repeat of the controversy around JEDI.

View post:

The Pentagon Scrubs a Cloud Deal and Looks to Add More AI - WIRED


Boy with severe eczema begs to be ‘put into coma’ to escape pain of condition – Metro.co.uk

Posted: at 7:48 am

Barney Rae's painful eczema (Picture: Mercury Press & Media)

A 14-year-old boy with severe eczema has begged to be placed into an induced coma to escape the pain of the skin condition.

Barney Rae, from Bristol, Avon, was diagnosed last year, and the eczema has left him covered head to toe in itchy rashes and unable to sleep.

Despite trying many different remedies, some of which mum Miranda, 50, claims left him looking like he'd been in an acid attack, Barney is still in agony, telling his mum that he can't deal with the pain anymore.

The single mum is now desperate to help her beloved son get back to his normal self, by raising money for urgent and fast-tracked treatment to calm his skin once and for all.

Miranda, a radio broadcast manager, said: "Barney is at his wit's end. He just wants to go to sleep and wake up when the eczema is all gone.

"He's even said to me that he wants to be put in an induced coma because the pain is that bad.

"He's got to the point where he's too scared to sleep. He scratches himself unknowingly when he's asleep and will wake up bleeding head to toe."

Mum Miranda has said the past 10 months have been torture for Barney, because of his sleep deprivation and the fact he even blames himself for his skin issues.

"It's heartbreaking to see my child go through this, especially at an age where he's so aware of what he looks like," she said.

Because Barney is allergic to many ingredients in home remedies and over-the-counter creams, he's struggled to find relief from the itching and cracked skin.

Prescribed creams also left him red and in excruciating pain, and long NHS waiting lists mean the family are now looking at private options.

Miranda said: "It's got to the point now where I'm so worried about him. He never used to have eczema on his face; now it's everywhere and he's so self-conscious about it.

"We've been putting bandages on to help him stop scratching, but one night he came in to me shocked and it looked like he had seen a ghost.

"He was shaking and bleeding from his neck downwards, saying he couldn't believe what he'd done to himself.

"Because of the exhaustion, he doesn't realise he's itching and scratching himself red raw. It's a completely uncontrollable urge."

The side effects of his current medication, both short-term, like nausea and sickness, and long-term, like brittle bones, are difficult to deal with.

Oral steroids and the leukaemia drug methotrexate can be dangerous for Barney's immune system, meaning he has to isolate while he takes them. And after he finishes courses of meds, he often finds that his skin goes back to how it was before.

Miranda has now set up a fundraiser to raise money for a private specialist medical consultation, which costs £300 and means she will have to pay for any prescribed treatment.

She added: "I've got to do whatever I can to improve Barney's situation and I'm willing to do whatever it takes.

"I just want to see him back to his normal self and I would give anything to take it away.

"Seeing your child suffer like this on a daily basis is horrific."

Donate to Barney's private eczema treatment here.


See the original post:
Boy with severe eczema begs to be 'put into coma' to escape pain of condition - Metro.co.uk
