Hoyer Announces Winners of the Fifth Annual Fifth District Congressional App Challenge – Bay Net

Washington, D.C. - On December 20, 2019, Congressman Steny H. Hoyer (MD-05) announced the winners of the Fifth Annual Congressional App Challenge for the Fifth District.

This year, Patuxent High School student Matthew Hunter won first place with his app, The Art of Cryptography. Matthew's app aims to help individuals understand cryptography and allows users to decode ciphers.
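
The article does not describe how the app works internally, but the kind of cipher-decoding exercise it offers can be illustrated with the simplest classical cipher. Below is a minimal Python sketch, assuming a Caesar shift; the ciphertext and shift are invented for illustration.

```python
# Illustrative sketch only: brute-forcing a Caesar cipher, a classic beginner
# exercise in cryptography. The ciphertext below is invented; the article does
# not describe The Art of Cryptography's internals.

def caesar_shift(text: str, shift: int) -> str:
    """Shift each letter by `shift` positions, wrapping within the alphabet."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return ''.join(out)

ciphertext = "Wkh Duw ri Fubswrjudskb"  # "The Art of Cryptography" shifted by 3
for shift in range(1, 26):
    print(shift, caesar_shift(ciphertext, -shift))  # shift 3 yields readable English
```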

Karley Trinidad and Aubrey Zeltwanger, both from Patuxent High School, won second place for their app, Safer Together, which provides students, teachers, and administrators with specific instructions during various types of school emergencies.

Patuxent High School student Alyssa Mazzone and Gwynn Park High School student Kehniah Watts tied for third place. Alyssa's app, Striving and Driving, is a memory challenge game. Kehniah's app, NaxaNow, teaches students about the opioid epidemic and provides resources to help those struggling with addiction.

"I join in congratulating the winners and everyone who participated in the fifth annual Fifth District Congressional App Challenge," said Congressman Hoyer. "I was extremely impressed by the creativity displayed by students this year, as well as the hard work and dedication they put into each of their apps. I also applaud their efforts to address issues confronting our communities, such as public safety and opioid abuse. Congratulations to Matthew, Karley, Aubrey, Alyssa, and Kehniah for their winning apps, and I encourage all Fifth District students to consider participating in next year's competition."

The Congressional App Challenge was established by the U.S. House of Representatives in 2013 as a nationwide event that invites middle and high school students from all participating Congressional districts to compete as individuals or in groups of up to four. Students work to create and present an original software application, or app, for a mobile, tablet, or computer platform of their choosing.

The contest is modeled after the long-successful Congressional Art Competition and is designed to promote innovation and engagement in STEM education fields. Students who live in or are eligible to attend public schools located in Maryland's Fifth Congressional District were invited to join the Fifth District App Challenge. The winning app will be placed on display in the U.S. Capitol alongside winners from across the nation. Additionally, the first-place winner will receive $250 in Amazon Web Services credits and an invitation to the #HouseofCode Capitol Hill Reception in Washington, D.C.

For more information about our releases, please contact Annaliese Davis at 202-225-4131.

Meet 22-year-old Pratap from Mandya, who has built over 600 drones, and is known as the Drone Scientist – EdexLive

Recently, when floods ravaged major parts of North Karnataka and people were stranded in different places, Pratap NM used the drone he made to provide food and relief materials to several affected areas. From Hipparagi Barrage to Janwada, a nearby village, he used his drone to help many. Thousands of people gathered to watch whether the drone could really reach the right place. And when it did, both police personnel and the public cheered loudly for the 22-year-old. Originally from the Mandya district, Pratap is a BSc graduate from JSS College of Arts, Science and Commerce in Mysuru.

He is popularly known as the Drone Scientist or the Youngest Scientist in India. A fitting name, we think, considering he thought about building drones when he was just 14 years old. When he was 16, he already had a drone in his hand ready to fly. "Have you seen an eagle, whose eyes are sharp and flight precise? It was this bird that inspired me to build a drone. The late Dr APJ Abdul Kalam also served as an inspiration as he achieved a lot in his lifetime. The first drone that I built was a basic one which could simply fly and capture some images. As I learnt more about technology and how drones can be helpful, I built bigger drones. To date, I have built around 600 drones," he says. In 2017, Pratap was recognised on several national as well as international platforms for his work. "I exhibited one of my drones at Skills India and won second place. I exhibited a self-made project called Drones in Cryptography. The Germans used cryptography to send coded messages about bombings, especially during the time of Adolf Hitler, the dictator. Usually, radar signals can trace drones, but if you send messages or signals through cryptography, you can neither detect them nor decode the encrypted message," he explains. This young scientist has been invited to over 87 countries to showcase the different drones he has built.

When we ask him about the funding required to fuel his passion, he says, "I use very little money and a lot of e-waste to make my drones. Whenever I win competitions, I am awarded money which I save for the future. And as far as e-waste goes, a lot of it is generated and I get it from electrical shops in Mysuru, Visakhapatnam, Mumbai and a few other cities. For example, if there is a mixer-grinder that is defunct, I can remove the motor and use it in my drone. Similarly, I make use of chips and resistors from broken televisions to build my drones. It doesn't matter what the prototype looks like. Proving the technical points of the drone is all that matters."

Pratap has won young scientist awards from Japan and France and gold medals for his research on drones from Germany and the USA, among others. But he had to face several challenges before he could earn these recognitions. Being the son of a farmer, Pratap comes from a poor family and could hardly afford to buy good clothes for himself. "When I travelled to France for the first time, people were shocked and judged me for travelling in business class. However, this did not matter to me. One of the companies in France offered me an opportunity to work on their research project. I earned some money there and contributed to the improvement of my family's financial condition. Currently, the drones I am building now are funded by the money that I earned in France," he says happily.

Eagle 2.8, the saviour

Pratap feels happy that his creation saved the life of a little girl in Africa. Narrating the series of events, he says, "Africa is home to many indigenous people and species. There is a dangerously poisonous snake called the black mamba in this country. In one year, around 22,000 people in a particular tribal area had died due to this snake's bite. When I was in Sudan for a research project, an eight-year-old girl was bitten by this snake and needed urgent medical assistance. Usually, a person can survive for only 15 minutes after being bitten by this snake. I used a drone to send the antivenom to the place where she was, a place so remote that you won't even be able to find its location on Google Maps. The place was 10 hours by road from where I was, so I used my Eagle 2.8 drone, which can fly at 280 km per hour. The antivenom was delivered within eight and a half minutes. It was a very challenging task for me. Later, the child and her mother came all the way to Sudan to meet me and thanked me for saving her life. I was very happy that I could help."

Pratap has also delivered a few lectures at IIT Bombay and IISc on how drones can be used in time-sensitive situations like transferring organs during organ donation, blood transfers and other such purposes. Pratap says, "When my lecture was held in these institutes for the first time, only three or four people attended. But these few people told the others about me and my talks, so when the lectures were organised again, the hall was jam-packed." Currently, Pratap is working to establish his own start-up that can involve youngsters in building drones and other devices. According to him, there are several people out there who have the talent, but don't have the degree. "I will employ such talents and bring out many innovative devices that can help the nation during disasters and wars and in the fields of defence, aviation and beyond. The aim is very simple: it is to use technology in the interest of our nation."

10 Years Of Bitcoin Breakthroughs And Bombshells – Forbes

If the 1980s marked the rise of personal computing and the 1990s and the 2000s the ascendance of internet connectivity, the 2010s will be known as the decade in which bitcoin and other blockchain-based cryptocurrencies started to change the way the world moves value. Bitcoin is a unique monetary asset in that it is free from the control of any central bank and doesn't need to be audited by a third party to ensure its value. It is instead tracked and verified by thousands of computers on a shared network called a blockchain, which uses cryptography to reach consensus. Bitcoin's underlying distributed ledger technology has the potential to dramatically overhaul not only finance, but also fields ranging from property ownership to health care and voting.
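
As a rough illustration of the hash-linked design described above, here is a toy Python sketch of a chain of blocks in which each block commits to its predecessor by a cryptographic hash. It is a teaching aid under simplifying assumptions, not Bitcoin's actual protocol, which adds proof-of-work mining, Merkle trees and a peer-to-peer network on top.

```python
# Toy sketch of the core idea: each block commits to its predecessor via a
# cryptographic hash, so tampering with any block breaks every later link.
import hashlib
import json

def make_block(data: str, prev_hash: str) -> dict:
    """Build a block whose hash covers both its data and the previous hash."""
    block = {"data": data, "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

chain = [make_block("genesis", "0" * 64)]
for tx in ["alice->bob 5", "bob->carol 2"]:
    chain.append(make_block(tx, chain[-1]["hash"]))

# Verification: every block must point at the hash of the block before it.
for prev, block in zip(chain, chain[1:]):
    assert block["prev_hash"] == prev["hash"], "chain broken"
print("chain valid,", len(chain), "blocks")
```

Changing any block's data changes its hash and breaks every later prev_hash link, which is the property that lets many computers verify a shared ledger without trusting one another.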

As the decade began, only a handful knew anything about bitcoin, which was created on January 3, 2009 by a mysterious developer known to the world as Satoshi Nakamoto. Buried within the code of his Genesis block, Nakamoto embedded a reminder of what happens when too much trust is placed in banks. It was a headline from a British newspaper announcing the imminent bailout of the financial institutions following their collapse in 2008 and 2009: "The Times 03/Jan/2009 Chancellor On Brink of Second Bailout for Banks."

The first full decade of digital currency without banks kicked off a year later, on January 1, 2010, with the mining of bitcoin block number 32620, which rewarded its miner with 50 bitcoin, then worth less than a single penny but now worth about $375,000. For the first five years or so, bitcoin largely went unnoticed, appealing to a fringe group of programmers and libertarian idealists. During the latter half of the decade, as more people came to understand the benefits of blockchain technology, bitcoin and other cryptocurrency speculation took off.
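
A quick back-of-the-envelope check of those figures, using the article's own numbers (the inputs below are assumptions drawn from the text, not quotes from a price feed):

```python
# Implied per-bitcoin prices from the article's figures.
reward_btc = 50
value_then = 0.01       # "less than a single penny" for the entire 50-BTC reward
value_now = 375_000     # the article's approximate value at time of writing
print(f"price then: under ${value_then / reward_btc:.4f} per bitcoin")  # < $0.0002
print(f"price now: about ${value_now / reward_btc:,.0f} per bitcoin")   # ~ $7,500
```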

The ascendance of bitcoin has not been without its setbacks. The nascent technology has been plagued by infamous scandals like the FBI's shuttering of the underground bitcoin-fueled drug and contraband marketplace Silk Road in 2013 and the $450 million hacking of cryptocurrency exchange Mt. Gox in 2014. At the end of 2017, the frenzy in cryptocurrency speculation peaked with bitcoin at $20,000 and produced thousands of worthless cryptocurrency tokens and investor losses in the billions.

Bitcoin believers have not given up their idealistic visions for an economy outside the influence of central control. A budding ecosystem has arisen, powered by new computer languages and breakthroughs in cryptography, and it is only now starting to change the way banks, corporations and governments operate. In fact, in a bit of cruel irony, some of the most enthusiastic supporters and co-opters of blockchain technology today are those same big companies, ranging from IBM and Cargill to JPMorgan, that early bitcoin believers were rallying against.

13 Moments Defining The Future Of Money

May 22, 2010

Programmer Laszlo Hanyecz buys two Papa John's pizzas for $25 using 10,000 bitcoin, establishing its first market-based price: $0.0025.

June 2012

Ripple's XRP is born when co-founder Arthur Britto submits code limiting tokens to 100 billion. Ripple now competes with Swift.

March 2013

Failing Cypriot banks threaten to seize deposits, triggering global interest in digital currency. Bitcoin soars 40% to $80.

October 2013

The FBI shutters online black market and drug bazaar Silk Road. Its boss Ross Ulbricht gets a life sentence.

February 2014

Japanese Bitcoin exchange Mt. Gox is hacked and $460 million is stolen. Bitcoin drops 20% to $400.

December 2014

Following Overstock's lead, Microsoft starts accepting bitcoin for Xbox games. Bitcoin ends 2014 at $312.

May 2015

SecondMarket founder Barry Silbert launches Grayscale's bitcoin trust, a securitized bitcoin for accredited investors only.

July 2015

Waifish Canadian Vitalik Buterin launches the Ethereum blockchain, enabling decentralized applications. Ether debuts at $3.

December 2015

JP Morgan refugee Blythe Masters co-founds Hyperledger with institutions including IBM, to make blockchain software for enterprises.

October 2016

Privacy coins gain traction after Zcash launches with zero-knowledge proofs, now being explored by the likes of JP Morgan.

December 2017

Amid crypto trading frenzy, Cboe launches bitcoin futures allowing investors to short bitcoin. Bitcoin price: $15,300.

December 17, 2017

Bitcoin bubble peaks at $19,902, up 2,100% in 2017. Ripple CEO Chris Larsen's net worth briefly hits $37 billion.

June 18, 2019

Facebook announces Libra, a cryptocurrency partially backed by dollars. China announces its own crypto.

Girls Go CyberStart competition will return to Indiana for third year – pharostribune.com

INDIANAPOLIS - A national competition designed to encourage girls to pursue cyber-based learning and career opportunities is coming back to Indiana.

Gov. Eric Holcomb announced Monday that the Girls Go CyberStart competition will take place in Indiana for the third year on Jan. 13. The competition, hosted by the SANS Institute, centers on a fun and thought-provoking game to inspire young women to test their aptitude in cyber skills. Female students in grades 9-12 can participate for free, either as individuals or as part of a school-based team.

Indiana was one of 27 states to participate in last year's competition. More than 10,300 girls competed, including more than 800 Indiana high school students. Four teams from Indiana scored among the top 50 high schools nationally.

Cyber jobs in the United States have increased by 75% since 2010, and one million of those jobs are unfilled nationwide. Indiana has an estimated 2,300 unfilled jobs, according to the Cyberseek jobs tool.

"Training young Hoosiers in cybersecurity and tech-based skills is essential to improving Indiana's cyber-resiliency for decades to come," Holcomb said in a statement. "Indiana is a proven leader in cybersecurity, and our state is committed to providing the skills and opportunities Hoosiers need to pursue fulfilling careers in this high-demand field."

This year, 38 states will participate in the competition. Students will take on the roles of agents in the Cyber Protection Agency, where they will develop forensic and analytical skills and deploy them to sleuth through challenges and tackle various online cybercriminal gangs.

As they work their way through the game, players will be challenged to solve puzzles and be introduced to cybersecurity disciplines, including forensics, open-source intelligence, cryptography and web application security.

Registration for the competition is now open, and student practice programs are available.

History, technology and the shackles of the present – The Hindu

For the historian of science and technology, the Narendra Modi government's ambitious push for electric vehicles (EVs) should ring a bell. After many decades, India is witnessing once again the unseemly fraternisation of high technology and authoritarian governance. On the one hand, the government has championed EVs and Artificial Intelligence (AI), and packaged sundry technologies into neat acronyms. On the other, it has clipped Internet access to towns and villages when confronted with non-violent protests against the Citizenship (Amendment) Act, 2019.

In India's case, history is merely repeating itself. In 1976, as India sank deep into the recesses of the Emergency, a group of bureaucrats and scientists sat down to ponder the future of technology in the country. The irony of analysing technologies that would unshackle the Indian economy, when the basic rights of its citizenry were suppressed, was lost on the establishment. In fact, while the Indira Gandhi government built a surveillance state, Silicon Valley saw the birth of public key cryptography, used in modern-day encryption. India, it seemed, had regressed into the darkest chapter of its political history, just as the world began to use technology to preserve human rights.
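
The 1976 breakthrough alluded to here is Diffie and Hellman's key exchange, which lets two parties agree on a shared secret over a channel an eavesdropper can read. A toy sketch with deliberately tiny numbers; real deployments use 2048-bit primes or elliptic curves.

```python
# Toy Diffie-Hellman key exchange, the 1976 public-key breakthrough alluded
# to above. Tiny illustrative numbers only.
p, g = 23, 5                 # public prime modulus and generator
a_secret, b_secret = 6, 15   # private keys, never transmitted

A = pow(g, a_secret, p)      # Alice sends g^a mod p
B = pow(g, b_secret, p)      # Bob sends g^b mod p

shared_a = pow(B, a_secret, p)   # Alice computes (g^b)^a mod p
shared_b = pow(A, b_secret, p)   # Bob computes (g^a)^b mod p
assert shared_a == shared_b      # both arrive at g^(ab) mod p
print("shared secret:", shared_a)
```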

This dissonance did not seem to bother the high-profile group that had been brought together by the National Committee on Science and Technology (NCST). Its mandate: study the outlook for India in 2000 A.D. The group, set up in 1973, took seven years to submit its report, publishing an interim document during the Emergency. The Indian government's commissioning of a futures study was in step with the times. Futurology - the use of computer models for forecasting scenarios - became fashionable after the Club of Rome, a group of economists and planners, published its famous Limits to Growth report in 1972. The report painted a doomsday scenario of acute food and water scarcity in 2000. Unsurprisingly, this period also witnessed a new wave of science fiction, set in dystopic lands and featuring post-apocalyptic visions. Another kind of dystopia was unfolding in India's present: while the civil liberties of Indians were cast aside, the government was busy discussing EVs and self-driving cars.

It may seem straight out of the pages of a sci-fi novel, but the first official assessment of EVs in India was likely published during the Emergency. The Committee on Futurology, as it was known, analysed long-term projections for many sectors, including transportation. This sector's problems were two-fold. To begin with, there were just not enough vehicles for the larger public in India. Three decades after Independence, India had only 1,00,000 buses on its roads. (In other words, there was one bus for every 6,500 Indians.) However, the number of cars and jeeps totalled nearly 750,000. In a still-impoverished country, the wealthy and powerful elite enjoyed vastly better mobility than the majority of the population.

Rising fuel prices presented the second problem. The NCST deliberated in the shadow of the oil crisis of 1973, brought on by a crude embargo imposed by the Organisation of the Petroleum Exporting Countries (OPEC). Faced with the problem of scarcity and costs, the committee argued India was better served in the long run by developing renewable alternatives to petrol.

Almost concurrently, western laboratories had begun exploring the development of lithium-ion batteries, critical to EVs. The work of John B. Goodenough, Akira Yoshino and M. Stanley Whittingham - who were jointly awarded the 2019 Nobel Prize in Chemistry for the development of these batteries - was catalysed by the oil crisis of the 1970s. The NCST appears to have been mindful of such efforts: "it is imperative that some concentrated R&D is performed in the area of high energy-high power batteries," it declared. The Committee even predicted EVs and self-driving cars - "adaptive, automobile autopilots," as the report termed them - would be commercially available from the early 1980s.

Is it surprising the Indian government conjured up visions of technological advancement while suppressing democracy? Hardly. Several autocratic regimes have trodden the same path, using technology as a totem to rally disaffected populations. But while the NCST made grand claims about the future, the government was actually clamping down on technology in the present. Indira Gandhi's government, under pressure from labour unions, viewed computers with suspicion and discouraged PSUs from adopting them. The Futurology Committee's view too was jaundiced by the Emergency. Not all technologies were neutral and useful to society, the committee declared, citing the TV as an example. Meanwhile, Doordarshan had become an instrument of state propaganda. Faced with a financial crunch, the government also championed "appropriate technologies" that were small-scale - solar cookers and mechanised bullock carts - but did little to boost productivity. The left hand did not know what the right was doing: some sections of the government were trumpeting the arrival of self-driving cars, while others told the public to be wary of computers.

Despite this politicking over technology, Indians were, in fact, beginning to embrace machines. As C.R. Subramanian has noted, the import of computers tripled during the Emergency. The number of automobiles plying on Indian roads in the 1980s increased by a staggering 400% over the previous decade. The seeding of doubt against big technology by the government in the minds of citizens did little to improve prospects for scientific breakthroughs. If only Indians had had the political agency to form their own views of technology, India may well have had a shot at developing EVs. It is a lesson today's government too should learn: one cannot aspire to a Digital India if technologies are wantonly used for mass surveillance, or cut off altogether when faced with non-violent, democratic protests.

Arun Mohan Sukumar is a PhD candidate at The Fletcher School and the author of Midnight's Machines: A Political History of Technology in India.

No Real Value: Former Bitcoin Core Developer Peter Todd Asserts Ripple's XRP Doesn't Need To Exist – ZyCrypto

Ripple has been one company working hard to revolutionize global funds transfer. The company has also been making efforts to boost XRP adoption, and it has already won over a good number of clients and partners willing to utilize XRP in cross-border payments. However, there have been some objections from various people who think XRP isn't really what it's expected to be. One such person is Peter Todd, a former Bitcoin Core developer and cryptography consultant.

In a recent post on Twitter, Todd shared views from Larry Cermak, another personality who thinks XRP doesn't have much value to investors.

According to Larry, Ripple will be the only beneficiary if its project ever works and its technology gets adopted. As such, it's the people who will have accumulated XRP that will be left holding massive bags.

Larry went on to point out that Ripple's continued sale of XRP has so far earned it around $1.2 billion. Ripple uses the proceeds of these sales to fund its projects as well as support strategic startups and partners like MoneyGram.

While sharing Larry's remarks about Ripple and XRP, Peter Todd opined that, just like most ICOs, XRP doesn't really need to exist, claiming that the crypto itself doesn't offer any real rights of ownership to buyers.

However, the cryptography consultant agreed that Ripple's technology could be useful as a fault-tolerant database, though it doesn't need XRP to work. For one, just like with any other crypto, the early adopters gain when the token's value increases as demand rises.

That said, it's still not clear if XRP's value will be influenced by an impending bull run that's expected to boost Bitcoin's market as the top coin prepares for the next block reward halving slated for May 2020. If the effect extends to the rest of the market, XRP could gain in the process.

Artificial Intelligence Is Rushing Into Patient Care – And Could Raise Risks – Scientific American

Health products powered by artificial intelligence, or AI, are streaming into our lives, from virtual doctor apps to wearable sensors and drugstore chatbots.

IBM boasted that its AI could "outthink cancer." Others say computer systems that read X-rays will make radiologists obsolete.

"There's nothing that I've seen in my 30-plus years studying medicine that could be as impactful and transformative as AI," said Eric Topol, a cardiologist and executive vice president of Scripps Research in La Jolla, Calif. AI can help doctors interpret MRIs of the heart, CT scans of the head and photographs of the back of the eye, and could potentially take over many mundane medical chores, freeing doctors to spend more time talking to patients, Topol said.

Even the U.S. Food and Drug Administration - which has approved more than 40 AI products in the past five years - says "the potential of digital health is nothing short of revolutionary."

Yet many health industry experts fear AI-based products won't be able to match the hype. Many doctors and consumer advocates fear that the tech industry, which lives by the mantra "fail fast and fix it later," is putting patients at risk and that regulators aren't doing enough to keep consumers safe.

Early experiments in AI provide reason for caution, said Mildred Cho, a professor of pediatrics at Stanford's Center for Biomedical Ethics.

Systems developed in one hospital often flop when deployed in a different facility, Cho said. Software used in the care of millions of Americans has been shown to discriminate against minorities. And AI systems sometimes learn to make predictions based on factors that have less to do with disease than the brand of MRI machine used, the time a blood test is taken or whether a patient was visited by a chaplain. In one case, AI software incorrectly concluded that people with pneumonia were less likely to die if they had asthma - an error that could have led doctors to deprive asthma patients of the extra care they need.

"It's only a matter of time before something like this leads to a serious health problem," said Steven Nissen, chairman of cardiology at the Cleveland Clinic.

Medical AI, which pulled in $1.6 billion in venture capital funding in the third quarter alone, is "nearly at the peak of inflated expectations," concluded a July report from the research company Gartner. "As the reality gets tested, there will likely be a rough slide into the trough of disillusionment."

That reality check could come in the form of disappointing results when AI products are ushered into the real world. Even Topol, the author of Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again, acknowledges that many AI products are little more than hot air. "It's a mixed bag," he said.

Experts such as Bob Kocher, a partner at the venture capital firm Venrock, are more blunt. Most AI products have little evidence to support them, Kocher said. Some risks won't become apparent until an AI system has been used by large numbers of patients. "We're going to keep discovering a whole bunch of risks and unintended consequences of using AI on medical data," Kocher said.

None of the AI products sold in the U.S. have been tested in randomized clinical trials, the strongest source of medical evidence, Topol said. The first and only randomized trial of an AI system - which found that colonoscopy with computer-aided diagnosis found more small polyps than standard colonoscopy - was published online in October.

Few tech startups publish their research in peer-reviewed journals, which allow other scientists to scrutinize their work, according to a January article in the European Journal of Clinical Investigation. Such "stealth research" - described only in press releases or promotional events - often overstates a company's accomplishments.

And although software developers may boast about the accuracy of their AI devices, experts note that AI models are mostly tested on computers, not in hospitals or other medical facilities. Using unproven software may make patients into unwitting guinea pigs, said Ron Li, medical informatics director for AI clinical integration at Stanford Health Care.

AI systems that learn to recognize patterns in data are often described as "black boxes" because even their developers don't know how they have reached their conclusions. Given that AI is so new - and many of its risks unknown - the field needs careful oversight, said Pilar Ossorio, a professor of law and bioethics at the University of Wisconsin-Madison.

Yet the majority of AI devices don't require FDA approval.

"None of the companies that I have invested in are covered by the FDA regulations," Kocher said.

Legislation passed by Congress in 2016 and championed by the tech industry exempts many types of medical software from federal review, including certain fitness apps, electronic health records and tools that help doctors make medical decisions.

There's been little research on whether the 320,000 medical apps now in use actually improve health, according to a report on AI published Dec. 17 by the National Academy of Medicine.

"Almost none of the [AI] stuff marketed to patients really works," said Ezekiel Emanuel, professor of medical ethics and health policy in the Perelman School of Medicine at the University of Pennsylvania.

The FDA has long focused its attention on devices that pose the greatest threat to patients. And consumer advocates acknowledge that some devices - such as ones that help people count their daily steps - need less scrutiny than ones that diagnose or treat disease.

Some software developers don't bother to apply for FDA clearance or authorization, even when legally required, according to a 2018 study in Annals of Internal Medicine.

Industry analysts say that AI developers have little interest in conducting expensive and time-consuming trials. "It's not the main concern of these firms to submit themselves to rigorous evaluation that would be published in a peer-reviewed journal," said Joachim Roski, a principal at Booz Allen Hamilton, a technology consulting firm, and co-author of the National Academy's report. "That's not how the U.S. economy works."

But Oren Etzioni, chief executive officer at the Allen Institute for AI in Seattle, said AI developers have a financial incentive to make sure their medical products are safe.

"If failing fast means a whole bunch of people will die, I don't think we want to fail fast," Etzioni said. "Nobody is going to be happy, including investors, if people die or are severely hurt."

Relaxed AI Standards At The FDA

The FDA has come under fire in recent years for allowing the sale of dangerous medical devices, which have been linked by the International Consortium of Investigative Journalists to 80,000 deaths and 1.7 million injuries over the past decade.

Many of these devices were cleared for use through a controversial process called the 510(k) pathway, which allows companies to market moderate-risk products with no clinical testing as long as they're deemed similar to existing devices. In 2011, a committee of the National Academy of Medicine concluded the 510(k) process is so fundamentally flawed that the FDA should throw it out and start over.

Instead, the FDA is using the process to greenlight AI devices.

Of the 14 AI products authorized by the FDA in 2017 and 2018, 11 were cleared through the 510(k) process, according to a November article in JAMA. None of these appear to have had new clinical testing, the study said. The FDA cleared an AI device designed to help diagnose liver and lung cancer in 2018 based on its similarity to imaging software approved 20 years earlier. That software had itself been cleared because it was deemed "substantially equivalent" to products marketed before 1976.

AI products cleared by the FDA today are largely "locked," so that their calculations and results will not change after they enter the market, said Bakul Patel, director for digital health at the FDA's Center for Devices and Radiological Health. The FDA has not yet authorized "unlocked" AI devices, whose results could vary from month to month in ways that developers cannot predict.

To deal with the flood of AI products, the FDA is testing a radically different approach to digital device regulation, focusing on evaluating companies, not products.

The FDA's pilot pre-certification program, launched in 2017, is designed to reduce the time and cost of market entry for software developers, imposing the "least burdensome" system possible. FDA officials say they want to keep pace with AI software developers, who update their products much more frequently than makers of traditional devices, such as X-ray machines.

Scott Gottlieb said in 2017, while he was FDA commissioner, that government regulators need to make sure their approach to innovative products is efficient and that it fosters, not impedes, innovation.

Under the plan, the FDA would pre-certify companies that demonstrate "a culture of quality and organizational excellence," which would allow them to provide less upfront data about devices.

Pre-certified companies could then release devices with a streamlined review or no FDA review at all. Once products are on the market, companies will be responsible for monitoring their own products' safety and reporting back to the FDA. Nine companies have been selected for the pilot: Apple, FitBit, Samsung, Johnson & Johnson, Pear Therapeutics, Phosphorus, Roche, Tidepool and Verily Life Sciences.

High-risk products, such as software used in pacemakers, will still get a comprehensive FDA evaluation. "We definitely don't want patients to be hurt," said Patel, who noted that devices cleared through pre-certification can be recalled if needed. "There are a lot of guardrails still in place."

But research shows that even low- and moderate-risk devices have been recalled due to serious risks to patients, said Diana Zuckerman, president of the National Center for Health Research. "People could be harmed because something wasn't required to be proven accurate or safe before it is widely used."

Johnson & Johnson, for example, has recalled hip implants and surgical mesh.

In a series of letters to the FDA, the American Medical Association and others have questioned the wisdom of allowing companies to monitor their own performance and product safety.

"The honor system is not a regulatory regime," said Jesse Ehrenfeld, who chairs the physician group's board of trustees. In an October letter to the FDA, Sens. Elizabeth Warren (D-Mass.), Tina Smith (D-Minn.) and Patty Murray (D-Wash.) questioned the agency's ability to ensure company safety reports are "accurate, timely and based on all available information."

When Good Algorithms Go Bad

Some AI devices are more carefully tested than others.

An AI-powered screening tool for diabetic eye disease was studied in 900 patients at 10 primary care offices before being approved in 2018. The manufacturer, IDx Technologies, worked with the FDA for eight years to get the product right, said Michael Abramoff, the company's founder and executive chairman.

The test, sold as IDx-DR, screens patients for diabetic retinopathy, a leading cause of blindness, and refers high-risk patients to eye specialists, who make a definitive diagnosis.

IDx-DR is the first "autonomous" AI product - one that can make a screening decision without a doctor. The company is now installing it in primary care clinics and grocery stores, where it can be operated by employees with a high school diploma. Abramoff's company has taken the unusual step of buying liability insurance to cover any patient injuries.

Yet some AI-based innovations intended to improve care have had the opposite effect.

A Canadian company, for example, developed AI software to predict a person's risk of Alzheimer's based on their speech. Predictions were more accurate for some patients than others. Difficulty finding the right word may be due to unfamiliarity with English, rather than to cognitive impairment, said co-author Frank Rudzicz, an associate professor of computer science at the University of Toronto.

Doctors at New York's Mount Sinai Hospital hoped AI could help them use chest X-rays to predict which patients were at high risk of pneumonia. Although the system made accurate predictions from X-rays shot at Mount Sinai, the technology flopped when tested on images taken at other hospitals. Eventually, researchers realized the computer had merely learned to tell the difference between that hospital's portable chest X-rays - taken at a patient's bedside - and those taken in the radiology department. Doctors tend to use portable chest X-rays for patients too sick to leave their room, so it's not surprising that these patients had a greater risk of lung infection.

DeepMind, a company owned by Google, has created an AI-based mobile app that can predict which hospitalized patients will develop acute kidney failure up to 48 hours in advance. A blog post on the DeepMind website described the system, used at a London hospital, as a "game changer." But the AI system also produced two false alarms for every correct result, according to a July study in Nature. That may explain why patients' kidney function didn't improve, said Saurabh Jha, associate professor of radiology at the Hospital of the University of Pennsylvania. Any benefit from early detection of serious kidney problems may have been diluted by a high rate of overdiagnosis, in which the AI system flagged borderline kidney issues that didn't need treatment, Jha said. Google had no comment in response to Jha's conclusions.
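
Read plainly, a ratio of two false alarms for every correct alert implies that only about one alert in three was a true positive. The quick calculation below is our back-of-the-envelope reading of that reported ratio, not a figure quoted from the Nature study:

```python
# Rough implication of "two false alarms for every correct result":
# of every three alerts, one is a true positive.
false_alarms_per_hit = 2
ppv = 1 / (1 + false_alarms_per_hit)   # positive predictive value
print(f"PPV = {ppv:.0%}")              # 33%
```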

False positives can harm patients by prompting doctors to order unnecessary tests or withhold recommended treatments, Jha said. For example, a doctor worried about a patient's kidneys might stop prescribing ibuprofen - a generally safe pain reliever that poses a small risk to kidney function - in favor of an opioid, which carries a serious risk of addiction.

As these studies show, software with impressive results in a computer lab can founder when tested in real time, Stanford's Cho said. That's because diseases are more complex and the health care system far more dysfunctional than many computer scientists anticipate.

Many AI developers cull electronic health records because they hold huge amounts of detailed data, Cho said. But those developers often aren't aware that they're building atop a deeply broken system. Electronic health records were developed for billing, not patient care, and are filled with mistakes or missing data.

A KHN investigation published in March found sometimes life-threatening errors in patients' medication lists, lab tests and allergies.

In view of the risks involved, doctors need to step in to protect their patients' interests, said Vikas Saini, a cardiologist and president of the nonprofit Lown Institute, which advocates for wider access to health care.

While it is the job of entrepreneurs to think big and take risks, Saini said, it is the job of doctors to protect their patients.

Kaiser Health News (KHN) is a nonprofit news service covering health issues. It is an editorially independent program of the Kaiser Family Foundation that is not affiliated with Kaiser Permanente.

One key to artificial intelligence on the battlefield: trust – C4ISRNet

To understand how humans might better marshal autonomous forces during battle in the near future, it helps to first consider the nature of mission command in the past.

Derived from a Prussian school of battle, mission command is a form of decentralized command and control. Think about a commander who is given an objective and then trusted to meet that goal to the best of their ability and to do so without conferring with higher-ups before taking further action. It is a style of operating with its own advantages and hurdles, obstacles that map closely onto the autonomous battlefield.

"At one level, mission command really is a management of trust," said Ben Jensen, a professor of strategic studies at the Marine Corps University. Jensen spoke as part of a panel on multidomain operations at the Association of the United States Army AI and Autonomy symposium in November. "We're continually moving choice and agency from the individual because of optimized algorithms helping [decision-making]. Is this fundamentally irreconcilable with the concept of mission command?"

The problem for military leaders, then, is two-fold: can humans trust the information and advice they receive from artificial intelligence? And, related, can those humans also trust that any autonomous machines they are directing are pursuing objectives the same way people would?

To the first point, Robert Brown, director of the Pentagon's multidomain task force, emphasized that using AI tools means trusting commanders to act on that information in a timely manner.

"Mission command is saying: you're going to provide your subordinates the depth, the best data you can get them, and you're going to need AI to get that quality data. But then that's balanced with their own ground and then the art of what's happening," Brown said. "We have to be careful. You certainly can lose that speed and velocity of decision."

Before the tools ever get to the battlefield, before the algorithms are ever bent toward war, military leaders must ensure the tools as designed actually do what service members need.

"How do we create the right type of decision aids that still empower people to make the call, but give them the information content to move faster?" said Tony Frazier, an executive at Maxar Technologies.

An intelligence product, using AI to provide analysis and information to combatants, will have to fall in the sweet spot of offering actionable intelligence, without bogging the recipient down in details or leaving them uninformed.

"One thing that's remained consistent is folks will do one of three things with overwhelming information," Brown said. "They will wait for perfect information. They'll just wait, wait, wait; they'll never have perfect information and adversaries [will have] done 10 other things, by the way. Or they'll be overwhelmed and disregard the information."

The third path users will take, Brown said, is the very task commanders want them to follow: find golden needles in eight stacks of information to help them make a decision in a timely manner.

Getting there, however, where information is empowering instead of paralyzing or disheartening, is the work of training. Adapting for the future means practicing in the future environment, and that means getting new practitioners familiar with the kinds of information they can expect on the battlefield.

"Our adversaries are going to bring a lot of dilemmas our way, and so our ability to comprehend those challenges and then hopefully not just react but proactively do something to prevent those actions is absolutely critical," said Brig. Gen. David Kumashiro, the director of Joint Force Integration for the Air Force.

When a battle has thousands of kill chains, and analysis that stretches over hundreds of hours, humans have a difficult time comprehending what is happening. In the future, it will be the job of artificial intelligence to filter these threats. Meanwhile, it will be the role of the human in the loop to take that filtered information and respond as best they can to the threats arrayed against them.

"What does it mean to articulate mission command in that environment, the understanding, the intent, and the trust?" said Kumashiro, referring to the fast pace of AI filtering. "When the highly contested environment disrupts those connections, when we are disconnected from the hive, those authorities need to be understood so that our war fighters at the farthest reaches of the tactical edge can still perform what they need to do."

Planning not just for how these AI tools work in ideal conditions, but for how they will hold up under the degradation of a modern battlefield, is essential for making technology an aid, and not a hindrance, to the forces of the future.

"If the data goes away, and you still got the mission, you've got to attend to it," said Brown. "That's a huge factor as well for practice. If you're relying only on the data, you'll fail miserably in degraded mode."

China should step up regulation of artificial intelligence in finance, think tank says – Reuters

QINGDAO, China/BEIJING (Reuters) - China should introduce a regulatory framework for artificial intelligence in the finance industry, and enhance technology used by regulators to strengthen industry-wide supervision, policy advisers at a leading think tank said on Sunday.

FILE PHOTO: China Securities Regulatory Commission Chairman Xiao Gang addresses the Asian Financial Forum in Hong Kong January 19, 2015. REUTERS/Bobby Yip/File Photo

"We should not deify artificial intelligence as it could go wrong just like any other technology," said the former chief of China's securities regulator, Xiao Gang, who is now a senior researcher at the China Finance 40 Forum.

"The point is how we make sure it is safe for use and include it with proper supervision," Xiao told a forum in Qingdao on China's east coast.

Technology to regulate intelligent finance - referring to banking, securities and other financial products that employ technologies such as facial recognition and big-data analysis to improve sales and investment returns - has largely lagged the sector's development, according to a report from the China Finance 40 Forum.

Evaluation of emerging technologies and industry-wide contingency plans should be fully considered, while authorities should draft laws and regulations on privacy protection and data security, the report said.

Lessons should be learned from the boom and bust of the online peer-to-peer (P2P) lending sector, where regulations were not introduced quickly enough, said economics professor Huang Yiping at the National School of Development of Peking University.

China's P2P industry was once widely seen as an important source of credit, but has lately been undermined by pyramid-scheme scandals and absent bosses, sparking public anger as well as a broader government crackdown.

"Changes have to be made among policy makers," said Zhang Chenghui, chief of the finance research bureau at the Development Research Institute of the State Council.

"We suggest regulation on intelligent finance to be written into the 14th five-year plan of the country's development, and each financial regulator - including the central bank, banking and insurance regulators and the securities watchdog - should appoint its own chief technology officer to enhance supervision of the sector."

Zhang also suggested the government bring together the data platforms of each financial regulatory body to better monitor potential risk and act quickly as problems arise.

Reporting by Cheng Leng in Qingdao, China, and Ryan Woo in Beijing; Editing by Christopher Cushing

In the 2020s, human-level A.I. will arrive, and finally ace the Turing test – Inverse

The past decade has seen the rise of remarkably human personal assistants, increasing automation in transportation and industrial environments, and even the alleged passing of Alan Turing's famous robot consciousness test. Such innovations have taken artificial intelligence out of labs and into our hands.

A.I. programs have become painters, drivers, doctors' assistants, and even friends. But with these new benefits have also come increasing dangers. This ending decade saw the first, and likely not the last, death caused by a self-driving car.

This is #20 on Inverse's 20 predictions for the 2020s.

And as we head toward another decade of machine learning and robotics research, questions surrounding the moral programming of A.I. and the limits of their autonomy will no longer be just thought-experiments but time-sensitive problems.

One such area to keep an eye on going forward into a new decade will be partially defined by this question: what kind of legal status will A.I. be granted as their capabilities and intelligence continue to scale closer to those of humans? This is a conversation the archipelago nation of Malta started in 2018, when its leaders proposed that it should prepare to grant or deny citizenship to A.I.s just as it would humans.

The logic behind this is that A.I.s of the future could have just as much agency and potential to cause disruption as any non-robotic being. Francois Piccione, policy advisor for the Maltese government, told Inverse in 2019 that not taking such measures would be irresponsible.

"Artificial Intelligence is being seen in many quarters as the most transformative technology since the invention of electricity," said Piccione. "To realize that such a revolution is taking place and not do one's best to prepare for it would be irresponsible."

While the 2020s might not see fully fledged citizenship for A.I.s, Inverse predicts that there will be increasing legal scrutiny in coming years over who is legally responsible for the actions of A.I., whether it be their owners or the companies designing them. Instead of citizenship or visas for A.I., this could lead to further restrictions on the humans who travel with them and the ways in which A.I. can be used in different settings.

Another critical point of increasing scrutiny in the coming years will be how to ensure A.I. programmers continue to think critically about the algorithms they design.

This past decade saw racism and death result from poorly designed algorithms and even poorer introspection. Inverse predicts that as A.I. continues to scale, labs will increasingly call upon outside experts, such as ethicists and moral psychologists, to make sure these human-like machines are not doomed to repeat our same dehumanizing mistakes.

As 2019 draws to a close, Inverse is looking to the future. These are our 20 predictions for science and technology for the 2020s. Some are terrifying, some are fascinating, and others we can barely wait for. This has been #20.
