Op Ed: Submission: The Stalinist Show Trial of Roger Stone – The Published Reporter

Roger Stone reacts outside his Fort Lauderdale, Fla., home after President Trump commuted his federal prison sentence. Photo credit: REUTERS/Joe Skipper.

QUEENS, NY – Roger Stone's trial was a Stone-cold case of abuse of justice and due process at a time when Americans are crying out for justice. Now, after his federal prison sentence was commuted by the president, the feds are coming after him again. They never stop abusing justice to attack President Trump and his loyal Republican allies, in their failed attempts to undo what they couldn't do at the ballot box in 2016 and now in 2020, as voter enthusiasm mounts for our president.

Roger Stone, senior campaign advisor to Presidents Nixon, Reagan and Trump, gave us a taste of some Stone Cold Truth as keynote speaker at one of the past Lincoln Dinners of the Queens Village Republican Club. It was his personal account of the political establishment's attempt to remove President Trump after he was elected in 2016 – the largest case of political espionage, one that makes Watergate look like small potatoes.

Stone, the loyal defender of our president, was prosecuted as the victim of a political witch hunt. He was arrested and a trial ensued for his crimes of standing up for President Trump and not caving in to the threats of his inquisitors from the Mueller investigation, aimed at sending him to die in the Gulag. This was a Stalinist show trial.

How do they justify the pre-dawn raid with more than 20 FBI agents armed with automatic weapons, with CNN, the American Pravda, filming the spectacle on nationwide TV, to arrest Roger Stone as the #1 enemy of the State? They don't even send 20 FBI agents to arrest a murderer. Without evidence of a crime, he was presumed guilty before the trial – the trial fixed, the prosecution fixed, the court, the judges, the jury, the media all fixed – in an effort to get Stone to turn on the president, to lie, in order to collect evidence for Mueller's Russia collusion witch hunt. They will do anything to overturn the legitimate election of Trump.

The special court of Salem, Massachusetts convened the infamous Salem witch trials, where you could be accused, prosecuted, and hanged for practicing witchcraft whether there was evidence or not. An accusation of being a witch was enough criminal evidence. Accusations of being a communist in the McCarthy era could ruin your reputation and send you to prison. People have no rights before high government commissions and are judged guilty before the trial begins.

Stone's alleged crimes were prosecuted in a fake government court replete with liberal activist Judge Amy Berman Jackson, Obama and Hillary Clinton operatives acting as prosecutors, and a partisan jury. It was a bogus government investigation of a political prisoner, in a kangaroo court that found Stone guilty before the trial without evidence of a crime, where he had to prove his innocence by squealing on Trump.

The show trial was based on the false premises that WikiLeaks founder Julian Assange is a Russian asset, that WikiLeaks is a Russian front organization, and that Stone collaborated with WikiLeaks. This was the core of the failed narrative that the Trump campaign was in collusion with Russia to meddle in the 2016 presidential election, all of which is 100% rubbish. WikiLeaks is a big thorn in the side of our government because it publishes deep secrets they don't want American voters to know. The trial is a key part of our government's ongoing political war against our president, who poses a great threat to the Washington deep state, the globalists, the lobbyists, the special interests, and the Democrat party itself. They don't care about America, they don't care about you, they don't care about our Constitution, they don't care about justice. All they want is to preserve business as usual to maintain their own power and control.

So they went after Stone to confess to his alleged crimes and tell his inquisitors what they wanted to hear: that Trump put him up to this. Smear Trump and you're off the hook. But Roger Stone is an American patriot, and he would not lie to save his own skin. So they punished him severely. They sentenced a 67-year-old with underlying respiratory problems to rot for 40 months in a federal prison with coronavirus outbreaks, which would have been a certain death sentence.

Cop killer Steven Chirse was recently granted early release from prison due to the coronavirus epidemic. Criminals are being released in droves in crime-ridden, Democrat-run cities. Murderers are being released. But Stone, with no prior record, was treated worse than a murderer. They wanted to send him to prison to die, like a political prisoner in Stalin's Russia. As in the Soviet court system, criminals are let out, and if you don't fit their narrative, they send you to the Gulag. They're all Stalinists – the Mueller Special Counsel investigation, the judge, the courts, the FBI, the CIA, the media – and they got rid of the rule of law, and of people who don't fit their narrative, to give the state absolute power.

They wanted to destroy a human life to get to Trump. You wouldn't want your worst enemy to go to jail with Covid. But where is the uproar from the silent majority? Why aren't more good people demanding justice for Roger Stone, as the DOJ inspector general is now re-investigating his sentencing? Why weren't more people screaming that the punishment doesn't fit the crime? Where are all the social justice lawyers, hypocrites that they are? Paging Clarence Darrow, who famously said: "You can only protect your liberties in this world by protecting the other man's freedom."

Where are the good people of this country standing up for a person whom they may not like but who still deserves equal justice – because it could be you next? Stone was framed and they sent him out to die. Stone received no justice, but the greatest threat is coming for all of us. You will not receive justice under the new Stalinist Democrat regime. We must all stand up for Roger Stone and true justice, and vote in the most important election of our lifetime.


The Increasing Role of Artificial Intelligence in Health Care: Will Ro | IJGM – Dove Medical Press

Abdullah Shuaib,1 Husain Arian,1 Ali Shuaib2

1Department of General Surgery, Jahra Hospital, Jahra, Kuwait; 2Biomedical Engineering Unit, Department of Physiology, Faculty of Medicine, Kuwait University, Kuwait City, Kuwait

Dr Abdullah Shuaib passed away on July 21, 2020

Correspondence: Ali Shuaib, Biomedical Engineering Unit, Department of Physiology, Faculty of Medicine, Kuwait University, Kuwait City, Kuwait. Tel: +965 24636786. Email: ali.shuaib@ku.edu.kw

Abstract: Artificial intelligence (AI) pertains to the ability of computers or computer-controlled machines to perform activities that demand the cognitive function and performance level of the human brain. The use of AI in medicine and health care is growing rapidly, significantly impacting areas such as medical diagnostics, drug development, treatment personalization, supportive health services, genomics, and public health management. AI offers several advantages; however, its rampant rise in health care also raises concerns regarding legal liability, ethics, and data privacy. Technological singularity (TS) is a hypothetical future point in time when AI will surpass human intelligence. If it occurs, TS in health care would imply the replacement of human medical practitioners with AI-guided robots and peripheral systems. Considering the pace at which technological advances are taking place in the arena of AI, and the pace at which AI is being integrated with health care systems, it is not unreasonable to believe that TS in health care might occur in the near future and that AI-enabled services will profoundly augment the capabilities of doctors, if not completely replace them. There is a need to understand the associated challenges so that we may better prepare the health care system and society to embrace such a change if it happens.

Keywords: artificial intelligence, technological singularity, health care system



Artificial intelligence gets real in the OR – Modern Healthcare

Dr. Ahmed Ghazi, a urologist and director of the simulation innovation lab at the University of Rochester (N.Y.) Medical Center, once thought autonomous robotic surgery wasn't possible. He changed his mind after seeing a research group successfully complete a running suture on one of his lab's tissue models with an autonomous robot.

"It was surprisingly precise and impressive," Ghazi said. But what's missing from the autonomous robot is the judgment, he said. "Every single patient, when you look inside to do the same surgery, is very different." Ghazi suggested thinking about autonomous surgical procedures like an airplane on autopilot: the pilot's still there. "The future of autonomous surgery is there, but it has to be guided by the surgeon," he said.

It's also a matter of ensuring AI surgical systems are trained on high-quality and representative data, experts say. Before implementing any AI product, providers need to understand what data the program was trained on and what data it considers to make its decisions, said Dr. Andrew Furman, executive director of clinical excellence at ECRI. "What data were input for the software or product to make a particular decision must also be weighed, and are those inputs comparable to other populations?" he said.

To create a model capable of making surgical decisions, developers need to train it on thousands of previous surgical cases. That could be a long-term outcome of using AI to analyze video recordings of surgical procedures, said Dr. Tamir Wolf, co-founder and CEO of Theator, a company that does just that.

While the company's current product is designed to help surgeons prepare for a procedure and review their performance, its vision is to use insights from that data to underpin real-time decision support and, eventually, autonomous surgical systems.

UC San Diego Health is using a video-analysis tool developed by Digital Surgery, an AI and analytics company Medtronic acquired earlier this year. The acquisition is part of Medtronic's strategy to bolster its AI capabilities, said Megan Rosengarten, vice president and general manager of surgical robotics at Medtronic.

"There's a lot of places where we're going to build upon that," Rosengarten said. She described a likely evolution from AI providing recommendations for nonclinical workflows, to offering intra-operative clinical decision support, to automating aspects of nonclinical tasks, and possibly to automating aspects of clinical tasks.

Autonomous surgical robots aren't a specific end goal Medtronic is aiming for, she said, though the company's current work could serve as building blocks for automation.

Intuitive Surgical, creator of the da Vinci system, isn't actively looking to develop autonomous robotic systems, according to Brian Miller, the company's senior vice president and general manager for systems, imaging and digital. Its AI products so far use the technology to create 3D visualizations from images and extract insights from how surgeons interact with the company's equipment.

To develop an automated robotic product, it would have to solve a real problem identified by customers, Miller said, which he hasn't seen. "We're looking to augment what the surgeon or what the users can do," he said.


The Next Generation Of Artificial Intelligence – Forbes

AI legend Yann LeCun, one of the godfathers of deep learning, sees self-supervised learning as the key to AI's future.

The field of artificial intelligence moves fast. It has only been 8 years since the modern era of deep learning began at the 2012 ImageNet competition. Progress in the field since then has been breathtaking and relentless.

If anything, this breakneck pace is only accelerating. Five years from now, the field of AI will look very different than it does today. Methods that are currently considered cutting-edge will have become outdated; methods that today are nascent or on the fringes will be mainstream.

What will the next generation of artificial intelligence look like? Which novel AI approaches will unlock currently unimaginable possibilities in technology and business? This article highlights three emerging areas within AI that are poised to redefine the field – and society – in the years ahead. Study up now.

The dominant paradigm in the world of AI today is supervised learning. In supervised learning, AI models learn from datasets that humans have curated and labeled according to predefined categories. (The term supervised learning comes from the fact that human supervisors prepare the data in advance.)
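To make the paradigm concrete, here is a minimal, dependency-free sketch of supervised learning; the nearest-centroid classifier and the toy data below are invented for illustration (not any particular library's API), but they show the defining constraint: the model can only ever learn the categories that human-supplied labels defined in advance.

```python
# Toy supervised learning: humans supply (input, label) pairs up front,
# and the model learns only the categories those labels define.

def train_centroids(examples):
    """Learn one centroid (mean input) per human-assigned label."""
    sums, counts = {}, {}
    for x, label in examples:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(centroids, x):
    """Assign x to the label whose centroid is nearest."""
    return min(centroids, key=lambda label: abs(centroids[label] - x))

# The labeling step below is the manual bottleneck the article describes:
# every training point had to be tagged "cat" or "dog" by hand.
labeled = [(1.0, "cat"), (1.2, "cat"), (8.9, "dog"), (9.3, "dog")]
model = train_centroids(labeled)
print(predict(model, 1.1))  # -> cat
print(predict(model, 9.0))  # -> dog
```

Anything outside the predefined label set – a third animal, say – is invisible to this model, which is exactly the narrowness the next paragraphs criticize.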

While supervised learning has driven remarkable progress in AI over the past decade, from autonomous vehicles to voice assistants, it has serious limitations.

The process of manually labeling thousands or millions of data points can be enormously expensive and cumbersome. The fact that humans must label data by hand before machine learning models can ingest it has become a major bottleneck in AI.

At a deeper level, supervised learning represents a narrow and circumscribed form of learning. Rather than being able to explore and absorb all the latent information, relationships and implications in a given dataset, supervised algorithms orient only to the concepts and categories that researchers have identified ahead of time.

In contrast, unsupervised learning is an approach to AI in which algorithms learn from data without human-provided labels or guidance.

Many AI leaders see unsupervised learning as the next great frontier in artificial intelligence. In the words of AI legend Yann LeCun: "The next AI revolution will not be supervised." UC Berkeley professor Jitendra Malik put it even more colorfully: "Labels are the opium of the machine learning researcher."

How does unsupervised learning work? In a nutshell, the system learns about some parts of the world based on other parts of the world. By observing the behavior of, patterns among, and relationships between entities – for example, words in a text or people in a video – the system bootstraps an overall understanding of its environment. Some researchers sum this up with the phrase "predicting everything from everything else."
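A toy version of "predicting one part of the data from another part" can be sketched in a few lines: the supervisory signal is simply the next word in raw text, so no human annotation is needed. (This bigram counter is a deliberately simple stand-in for the far richer predictive models the article has in mind.)

```python
# Self-supervision sketch: the "label" for each word is just the word
# that follows it in unlabeled text, so the data labels itself.
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for every word, which words follow it in raw text."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            follows[prev][nxt] += 1
    return follows

def fill_mask(follows, prev_word):
    """Predict the most likely word to follow prev_word."""
    return follows[prev_word].most_common(1)[0][0]

# No human labeled anything here; structure is mined from the text itself.
corpus = [
    "the cat sat on the mat",
    "the cat chased the mouse",
    "the dog sat on the rug",
]
model = train_bigrams(corpus)
print(fill_mask(model, "the"))  # -> cat (seen twice, more than any rival)
```

Scaled up from bigram counts to deep networks and billions of documents, this fill-in-the-blank objective is essentially how the language models discussed later in the article are trained.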

Unsupervised learning more closely mirrors the way that humans learn about the world: through open-ended exploration and inference, without a need for the training wheels of supervised learning. One of its fundamental advantages is that there will always be far more unlabeled data than labeled data in the world (and the former is much easier to come by).

In the words of LeCun, who prefers the closely related term self-supervised learning: "In self-supervised learning, a portion of the input is used as a supervisory signal to predict the remaining portion of the input.... More knowledge about the structure of the world can be learned through self-supervised learning than from [other AI paradigms], because the data is unlimited and the amount of feedback provided by each example is huge."

Unsupervised learning is already having a transformative impact in natural language processing. NLP has seen incredible progress recently thanks to a new unsupervised learning architecture known as the Transformer, which originated at Google about three years ago. (See #3 below for more on Transformers.)

Efforts to apply unsupervised learning to other areas of AI remain at earlier stages, but rapid progress is being made. To take one example, a startup named Helm.ai is seeking to use unsupervised learning to leapfrog the leaders in the autonomous vehicle industry.

Many researchers see unsupervised learning as the key to developing human-level AI. According to LeCun, mastering unsupervised learning is the greatest challenge in ML and AI of the next few years.

One of the overarching challenges of the digital era is data privacy. Because data is the lifeblood of modern artificial intelligence, data privacy issues play a significant (and often limiting) role in AI's trajectory.

Privacy-preserving artificial intelligence – methods that enable AI models to learn from datasets without compromising their privacy – is thus becoming an increasingly important pursuit. Perhaps the most promising approach to privacy-preserving AI is federated learning.

The concept of federated learning was first formulated by researchers at Google in early 2017. Over the past year, interest in federated learning has exploded: more than 1,000 research papers on federated learning were published in the first six months of 2020, compared to just 180 in all of 2018.

The standard approach to building machine learning models today is to gather all the training data in one place, often in the cloud, and then to train the model on the data. But this approach is not practicable for much of the world's data, which for privacy and security reasons cannot be moved to a central data repository. This makes it off-limits to traditional AI techniques.

Federated learning solves this problem by flipping the conventional approach to AI on its head.

Rather than requiring one unified dataset to train a model, federated learning leaves the data where it is, distributed across numerous devices and servers on the edge. Instead, many versions of the model are sent out – one to each device with training data – and trained locally on each subset of data. The resulting model parameters, but not the training data itself, are then sent back to the cloud. When all these mini-models are aggregated, the result is one overall model that functions as if it had been trained on the entire dataset at once.
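The round-trip just described can be sketched in pure Python. This toy federated-averaging loop (the 1-D linear model, learning rate, and round count are all invented for illustration) trains a copy of the model locally on each client's private data and ships only the resulting weight back for averaging:

```python
# Federated-averaging sketch: raw data never leaves a client; only
# locally trained model parameters travel back to the server.

def local_update(w, data, lr=0.01, epochs=50):
    """Gradient descent on one client's private (x, y) pairs for y = w*x."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_round(w_global, clients):
    """Send the model out, train locally everywhere, average the weights."""
    local_weights = [local_update(w_global, data) for data in clients]
    return sum(local_weights) / len(local_weights)

# Two clients each hold private samples of the same rule y = 3*x;
# their datasets are never pooled in one place.
clients = [
    [(1.0, 3.0), (2.0, 6.0)],
    [(3.0, 9.0), (4.0, 12.0)],
]
w = 0.0
for _ in range(10):
    w = federated_round(w, clients)
print(round(w, 2))  # converges toward 3.0, as if trained on all the data
```

The aggregated model recovers the underlying rule even though the server never saw a single training example, which is the core privacy property the article describes.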

The original federated learning use case was to train AI models on personal data distributed across billions of mobile devices. As those researchers summarized: "Modern mobile devices have access to a wealth of data suitable for machine learning models.... However, this rich data is often privacy sensitive, large in quantity, or both, which may preclude logging to the data center.... We advocate an alternative that leaves the training data distributed on the mobile devices, and learns a shared model by aggregating locally-computed updates."

More recently, healthcare has emerged as a particularly promising field for the application of federated learning.

It is easy to see why. On one hand, there are an enormous number of valuable AI use cases in healthcare. On the other hand, healthcare data, especially patients' personally identifiable information, is extremely sensitive; a thicket of regulations like HIPAA restricts its use and movement. Federated learning could enable researchers to develop life-saving healthcare AI tools without ever moving sensitive health records from their source or exposing them to privacy breaches.

A host of startups has emerged to pursue federated learning in healthcare. The most established is Paris-based Owkin; earlier-stage players include Lynx.MD, Ferrum Health and Secure AI Labs.

Beyond healthcare, federated learning may one day play a central role in the development of any AI application that involves sensitive data: from financial services to autonomous vehicles, from government use cases to consumer products of all kinds. Paired with other privacy-preserving techniques like differential privacy and homomorphic encryption, federated learning may provide the key to unlocking AI's vast potential while mitigating the thorny challenge of data privacy.

The wave of data privacy legislation being enacted worldwide today (starting with GDPR and CCPA, with many similar laws coming soon) will only accelerate the need for these privacy-preserving techniques. Expect federated learning to become an important part of the AI technology stack in the years ahead.

We have entered a golden era for natural language processing.

OpenAI's release of GPT-3, the most powerful language model ever built, captivated the technology world this summer. It has set a new standard in NLP: it can write impressive poetry, generate functioning code, compose thoughtful business memos, write articles about itself, and so much more.

GPT-3 is just the latest (and largest) in a string of similarly architected NLP models – Google's BERT, OpenAI's GPT-2, Facebook's RoBERTa and others – that are redefining what is possible in NLP.

The key technology breakthrough underlying this revolution in language AI is the Transformer.

Transformers were introduced in a landmark 2017 research paper. Previously, state-of-the-art NLP methods had all been based on recurrent neural networks (e.g., LSTMs). By definition, recurrent neural networks process data sequentially – that is, one word at a time, in the order that the words appear.

Transformers' great innovation is to make language processing parallelized: all the tokens in a given body of text are analyzed at the same time rather than in sequence. In order to support this parallelization, Transformers rely heavily on an AI mechanism known as attention. Attention enables a model to consider the relationships between words regardless of how far apart they are and to determine which words and phrases in a passage are most important to pay attention to.
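The attention mechanism described above can be written down compactly. Below is a pure-Python sketch of scaled dot-product self-attention computed over a whole (tiny) sequence at once; the 2-D toy embeddings are invented for illustration, and real Transformers add learned projection matrices, multiple heads, and far larger dimensions:

```python
# Scaled dot-product attention: every position attends to every other
# position simultaneously -- the parallel mechanism that replaced RNNs.
import math

def softmax(scores):
    """Turn raw similarity scores into weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """For each query, mix all values, weighted by query-key similarity."""
    d = len(keys[0])
    outputs = []
    for q in queries:  # no sequential dependence between positions
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        outputs.append([sum(w * v[i] for w, v in zip(weights, values))
                        for i in range(len(values[0]))])
    return outputs

# Self-attention: the same toy token embeddings serve as queries,
# keys, and values.
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
out = attention(tokens, tokens, tokens)
print(len(out), len(out[0]))  # one 2-D output vector per input token
```

Because each position's output depends only on pairwise dot products, the loop over queries has no order dependence and can be computed for all tokens in parallel, which is what makes training on massive datasets feasible.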

Why is parallelization so valuable? Because it makes Transformers vastly more computationally efficient than RNNs, meaning they can be trained on much larger datasets. GPT-3 was trained on roughly 500 billion words and consists of 175 billion parameters, dwarfing any RNN in existence.

Transformers have been associated almost exclusively with NLP to date, thanks to the success of models like GPT-3. But just this month, a groundbreaking new paper was released that successfully applies Transformers to computer vision. Many AI researchers believe this work could presage a new era in computer vision. (As well-known ML researcher Oriol Vinyals put it simply: "My take is: farewell convolutions.")

While leading AI companies like Google and Facebook have begun to put Transformer-based models into production, most organizations remain in the early stages of productizing and commercializing this technology. OpenAI has announced plans to make GPT-3 commercially accessible via API, which could seed an entire ecosystem of startups building applications on top of it.

Expect Transformers to serve as the foundation for a whole new generation of AI capabilities in the years ahead, starting with natural language. As exciting as the past decade has been in the field of artificial intelligence, it may prove to be just a prelude to the decade ahead.


Artificial Intelligence Cold War on the horizon – POLITICO

While the U.S. has lacked central organization of its AI efforts, it has an advantage in its flexible tech industry, said Nand Mulchandani, the acting director of the U.S. Department of Defense Joint Artificial Intelligence Center. Mulchandani is skeptical of China's efforts at civil-military fusion, saying that governments are rarely able to direct early-stage technology development.

Tensions over how to accelerate AI are driven by the prospect of a tech cold war between the U.S. and China, amid improving Chinese innovation and access to both capital and top foreign researchers. "They've learned by studying our playbook," said Elsa B. Kania of the Center for a New American Security.

"Many commentators in Washington and Beijing have accepted the fact that we are in a new type of Cold War," said Ulrik Vestergaard Knudsen, deputy secretary general of the Organization for Economic Cooperation and Development (OECD), which is leading efforts to develop global AI cooperation. But he argued that we should not abandon hope of joining forces globally. Leading democracies want to keep the door open: Ami Appelbaum, chairman of Israel's innovation authority, said "we have to work globally and we have to work jointly. I wish also the Chinese and the Russians would join us." Eric Schmidt said coalitions and cooperation would be needed, but to beat China rather than to include them. "China is simply too big," he said. "There are too many smart people for us to do this on our own."

The invasive nature and the scale of many AI technologies mean that companies could be hindered in growing civilian markets, and the public could be skeptical of national security efforts, in the absence of clear frameworks for protecting privacy and other rights at home and abroad.

A Global Partnership on AI (GPAI), started by leaders of the Group of Seven (G7) countries and now managed by the OECD, has grown to include 13 countries including India. The U.S. is coordinating an AI Partnership for Defense, also among 13 democracies, while the OECD published a set of AI Principles in 2019 supported by 43 governments.

Knudsen said that it is important for AI global cooperation to move cautiously. Multilateralism and international cooperation are under strain, he said, making a global agreement on AI ethics difficult. "But if you start with soft law, if you start with principles and let civil society and academics join the discussion, it is actually possible to reach consensus," he said.

Data and cultural dividing lines

Major divisions exist over how to handle data generated by AI processes. "In Europe, we say that it's the individual that owns the data. In China, it's the state or the party. And then there's a divide in the rest of the world," said Knudsen. There is a right to privacy that accrues to everyone, according to Courtney Bowman, director of privacy and civil liberties engineering at data-mining and surveillance company Palantir Technologies. "But we have to recognize that privacy does have a cultural dimension. There are different flavors," he said.

Most experts agree there is scope to regulate how data is used in AI. Palantir's Bowman says that AI success isn't about unhindered access to the biggest datasets. "To build competent, capable AI it's not just a matter of pure data accumulation, of volume. It comes down to responsible practices that actually align very closely with good data science," he said.

"The countries that get the best data sets will develop the best AI: no doubt about it," said Nand Mulchandani. But he said that partnerships are the way to get that data. Global partnerships are so incredibly important, he argued, because they give access to global data, which in aggregate is better than even a huge dataset from within a single country such as China.

How can government boost AI?

Rep. Cathy McMorris Rodgers (R-Wash.), a leading Republican voice on technology issues, wants the U.S. government to create a foundation for trust in domestic AI via measures such as a national privacy standard. "We need to be putting some protections in place that are pro-consumer so that there will be trust in pro-American technology," she said.

U.S. Rep. Pramila Jayapal (D-Wash.) wants both government regulation and private sector standards while AI technologies – particularly facial recognition – are still young. "The thing about technology is, once it's out of the bottle, it's out of the bottle," she said. "You can't really bring back the rights of [Michigan resident Robert Williams, who was arrested based on a faulty ID by facial recognition software], or the rights of Uighurs in China, who are bearing the brunt of this discriminatory use of facial recognition technology." Some experts argue that while regulation is needed, it must be sector-specific, because AI is not a single concept but a family of technologies, each requiring a different regulatory approach.

Government has a role in making data widely available for the development of AI, so that smaller companies have a fair opportunity to research and innovate, said Charles Romine, Director of the Information Technology Laboratory (ITL) within the National Institute of Standards and Technology (NIST).

On the question of government AI funding, Elsa Kania said that it's not possible to make direct comparisons between U.S. and Chinese government investments. The U.S. has more venture capital, for example, while eye-popping investment figures from China's central government don't mean an awful lot if they aren't matched by investments in talent and education, she said. "We shouldn't be trying to match China dollar-for-dollar if we can be investing smarter."


U.S. government agencies to use artificial intelligence to identify and eliminate outdated regulations – STLtoday.com

The General Services Administration will assist agencies in identifying technology partners and facilitate contracts.

The Trump administration had made deregulation a key priority, while critics say the administration has failed to ensure adequate regulatory safeguards.

WASHINGTON – The White House Office of Management and Budget said Friday that federal agencies will use artificial intelligence to eliminate "outdated, obsolete, and inconsistent" requirements across tens of thousands of pages of government regulations. A 2019 pilot project used machine learning algorithms and natural language processing at the Department of Health and Human Services. The test run found hundreds of technical errors and outdated requirements in agency rulebooks, including requests to submit materials by fax.

OMB said all federal agencies are being encouraged to update regulations using AI, and several agencies have already agreed to do so.

Over the last four years, the number of pages in the Code of Federal Regulations has remained at about 185,000.

White House OMB director Russell Vought said the AI effort would help agencies update a regulatory code marked by decades of neglect and lack of reform.

Under the initiative, agencies will use AI technology and other software to comb through thousands and thousands of regulatory code pages to look for places where code can be updated, reconciled, and generally scrubbed of technical mistakes, the White House said.

Participating agencies include the Transportation Department, the Agriculture Department, the Labor Department and the Interior Department.



Artificial intelligence, humanity, and the future – Sault Ste. Marie Evening News

By Shayne Looper

In September, the British news website "The Guardian" published a story written entirely by an AI – an artificial intelligence that "learned" how to write from scanning the Internet. The piece received a lot of press because in it the AI stated it had no plans to destroy humanity. It did, however, admit that it could be programmed in a way that might prove destructive.

The AI is not beyond making mistakes. I noted its erroneous claim that the word "robot" derives from Greek. An AI that is mistaken about where a word comes from might also be mistaken about where humanity is headed. Or it might be lying. Not a pleasant thought.

Artificial Intelligence is based on the idea that computer programs can "learn" and "grow." No less an authority than Stephen Hawking has warned that AI, unbounded by the slow pace of biological development, might quickly supersede its human developers.

Other scientists are more optimistic, believing that AI may provide solutions to many of humanity's age-old problems, including disease and famine. Of course, the destruction of biological life would be one solution to disease and famine.

Hawking worried that a "growing" and "learning" computer program might eventually destroy the world. I doubt it ever occurred to Hawking that his fears regarding AI could once have been expressed toward BI – biological intelligence; that is, humans – at their creation.

Did non-human life forms, like those the Bible refers to as "angels," foresee the dangerous possibilities presented by the human capacity to "grow" and "learn"? Might not the angel Gabriel, like the scientist Hawking, have warned of impending doom?

AI designers are not blazing a trail but following one blazed by God himself. For example, their creations are made, as was God's, in their own image. And, like God's creation, theirs is designed to transcend its original specs. There is, however, this difference: AI designers do not know how to introduce a will into their creations.

The capacity for growth, designed into humankind from the first, is seldom given the consideration it deserves. For one thing, it implies the Creator's enormous self-confidence. God, unlike humans, is not threatened by the growth of his creation. In fact, he delights in it. He does not need to worry about protecting himself.

That the Creator wants his creatures to grow is good news, for it means God is a parent. That is what parents are like. They long for their children to become great and good. No wonder Jesus taught his followers to call God "Father."

Given that God created such beings knowing what could – and, if theologians are correct, what would – go wrong, he must have considered the outcome of creation to be so magnificent and good as to merit present pain and suffering. When people fault God for current evil, they do so without comprehending future good.

The present only makes sense in the light of the future, and the future only offers hope if we will become more and better than we currently are. Outside of the context of a magnificent future, present injustices, sorrows, and suffering appear overwhelming.

The hope presented in the Bible is audacious. It is unparalleled and unrivaled. The Marxist hopes for a better world. The Christian hopes for a perfect one: a new heaven and new earth, where everything is right and everyone exists in glory. The hope of the most enthusiastic Marxist fades before this shining hope the way a candle fades before the noonday sun.

This hope is not just that human pains will be forgotten, swallowed up in bliss. It is not just that shame will be buried when we die and left in the grave when we rise. Christian hope is not just that evil and injustice will be destroyed. It is that when God is all and is in all, we will be more than we have ever been.

The long story of weapons and wars, of marriages broken, and innocence stolen turns out to be different than we thought and better than we dreamed. It is the introduction to a story of astounding goodness, displayed in our creation, redemption, and glorious future.

Shayne Looper is the pastor of Lockwood Community Church in Branch County. Read more at shaynelooper.com.

View original post here:
Artificial intelligence, humanity, and the future - Sault Ste. Marie Evening News

IoT trends continue to push processing to the edge for artificial intelligence (AI) – Urgent Communications

As connected devices proliferate, new ways of processing have come to the fore to accommodate device and data explosion.

For years, organizations have moved toward centralized, off-site processing architecture in the cloud and away from on-premises data centers. Cloud computing enabled startups to innovate and expand their businesses without requiring huge capital outlays on data center infrastructure or ongoing costs for IT management. It enabled large organizations to scale quickly and stay agile by using on-demand resources.

But as enterprises move toward more remote models, video-intensive communications and other processes, they need an edge computing architecture to accommodate data-hogging tasks.

These data-intensive processes need to happen within fractions of a second: Think self-driving cars, video streaming or tracking shipping trucks in real time on their route. Sending data on a round trip to the cloud and back to the device takes too much time. It can also add cost and compromise data in transit.
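The latency argument above reduces to simple arithmetic: if a task's deadline is shorter than the cloud round trip plus processing time, it has to run at the edge. The numbers and names in this sketch are hypothetical illustrations, not measurements.

```python
def placement(budget_ms, cloud_rtt_ms, compute_ms):
    """Pick 'edge' when a cloud round trip would blow the latency budget."""
    return "cloud" if cloud_rtt_ms + compute_ms <= budget_ms else "edge"

# A self-driving perception step with a tight deadline must stay local,
# while a tolerant batch job can afford the round trip to the cloud.
print(placement(budget_ms=50, cloud_rtt_ms=80, compute_ms=10))    # edge
print(placement(budget_ms=5000, cloud_rtt_ms=80, compute_ms=10))  # cloud
```

Real deployments also weigh bandwidth cost and data-in-transit exposure, which the article notes push in the same direction as latency.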

"Customers realize they don't want to pass a lot of processing up to the cloud, so they're thinking the edge is the real target," according to Markus Levy, head of AI technologies at NXP Semiconductors, in a piece on the rise of embedded AI.

In recent years, edge computing architecture has moved to the fore, to accommodate the proliferation of data and devices as well as the velocity at which this data is moving.

To read the complete article, visit IoT World Today.

Continue reading here:
IoT trends continue to push processing to the edge for artificial intelligence (AI) - Urgent Communications


Boll Turns to Artificial Intelligence to Develop the Most Advanced High Contrast Lens Ever Introduced – SNEWS

LYON, FRANCE (October 15, 2020) – Bollé is launching the most technologically advanced high contrast lens in the marketplace with the introduction of Volt +, the industry's first lens ever developed using Artificial Intelligence.

The goal in developing the Volt + was to create a lens that provided high contrast and enhanced all colors to improve depth perception without compromising white balance. In the past, high contrast lenses enhanced one color while diminishing other colors. The Volt + enhances all colors, offering the most complete high contrast lens that improves depth perception.

Using AI, Bollé was able to evaluate 20 million different lens formula combinations to settle on an incomparable color experience that sets a new standard against which all other lenses will be measured.

The Volt + lens is the latest innovation to come out of EPIC, Bollé's new state-of-the-art design and technology innovation lab based in Lyon, France. The lens technology will be included throughout Bollé's line of sport-specific sunglasses for the Spring 2021 season.

"We've set high standards to be the innovation and technology leader in the development and creation of sports performance eyewear and helmets," said Tove Fritzell, Bollé Director of Product & Innovation. "Our EPIC design center located at the foot of the Alps continues to deliver amazing results to harness the most advanced technology with the needs of athletes, who have an opportunity to sample and provide feedback at the foot of the world's biggest playground."

In developing Volt +, Bollé used AI to find out which wavelengths to enhance or dampen, to design and develop the chemical compounds (pigments) that absorb the right wavelengths, and then to put together the perfect blend of pigments for the transmission curve.
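The press release does not disclose Bollé's actual method, so the following is only a loose, hypothetical sketch of that kind of search: randomly sample candidate pigment blends and score each one for overall contrast boost while penalizing color cast, to keep white balance. The wavelength bands, scoring function, and blend model are all invented for illustration.

```python
import random

WAVELENGTH_BANDS = ["blue", "green", "red"]

def score(blend):
    # Reward overall contrast (mean boost across bands) and penalize
    # color cast (spread between bands), which would shift white balance.
    vals = [blend[b] for b in WAVELENGTH_BANDS]
    contrast = sum(vals) / len(vals)
    cast = max(vals) - min(vals)
    return contrast - 2.0 * cast

def random_blend(rng):
    # Each band gets a made-up enhancement factor between 0.5x and 1.5x.
    return {b: rng.uniform(0.5, 1.5) for b in WAVELENGTH_BANDS}

rng = random.Random(0)
best = max((random_blend(rng) for _ in range(10_000)), key=score)
print({b: round(v, 2) for b, v in best.items()})
```

Evaluating 20 million combinations, as the release describes, is the same loop at much larger scale, presumably with a far more sophisticated optical model behind the scoring.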

Bollé is a leader in sport and lifestyle sunglasses, cycling helmets, ski goggles, and ski helmets. For more information, visit http://www.Boll.com. Bollé is part of Bollé Brands, which encompasses the brands Bollé, Bollé Safety, Cébé, Serengeti, Spy and H2Optics. Thanks to the complementary know-how and innovative technologies developed by the six brands in their respective fields of activity, Bollé Brands' expertise covers a large spectrum of products that meet the highest requirements in terms of protection, performance, innovation and style.

See the article here:
Boll Turns to Artificial Intelligence to Develop the Most Advanced High Contrast Lens Ever Introduced - SNEWS