An Open Source Effort to Encrypt the Internet of Things – WIRED

End-to-end encryption is a staple of secure messaging apps like WhatsApp and Signal. It ensures that no one, not even the app developer, can access your data as it traverses the web. But what if you could bring some version of that protection to increasingly ubiquitous, and notoriously insecure, internet-of-things devices?

The Swiss cryptography firm Teserakt is trying just that. Earlier this month, at the Real World Crypto conference in New York, it introduced E4, a sort of cryptographic implant that IoT manufacturers can integrate into their servers. Today most IoT data is encrypted at some point as it moves across the web, but it's challenging to keep that protection consistent for the whole ride. E4 would do most of that work behind the scenes, so that whether companies make home routers, industrial control sensors, or webcams, all the data transmitted between the devices and their manufacturers can be encrypted.

Tech companies already rely on web encryption to keep IoT data secure, so it's not like your big-name fitness tracker is transmitting your health data with no protection. But E4 aims to provide a more comprehensive, open-source approach that's tailored to the realities of IoT. Carmakers managing dozens of models and hundreds of thousands of vehicles, or an energy company that takes readings from a massive fleet of smart meters, could have more assurance that full encryption protections really extend to every digital layer that data will cross.

"What we have now is a whole lot of different devices in different industries sending and receiving data," says Jean-Philippe Aumasson, Teserakt's CEO. "That data might be software updates, telemetry data, user data, personal data. So it should be protected between the device that produces it and the device that receives it, but technically it's very hard when you don't have the tools. So we wanted to build something that was easy for manufacturers to integrate at the software level."

Being open source is also what gives the Signal Protocol, which underpins Signal and WhatsApp, so much credibility. It means experts can check under the hood for vulnerabilities and flaws. And it enables any developer to adopt the protocol in their product, rather than attempting the fraught and risky task of developing encryption protections from scratch.

"At the end of the day we know that's the right thing to do."

Jean-Philippe Aumasson, Teserakt

Aumasson says that the Signal Protocol itself doesn't literally translate to IoT, which makes sense. Messaging apps involve remote but still direct, human-to-human interaction, whereas populations of embedded devices send data back to a manufacturer or vice versa. IoT needs a scheme that accounts for these "many-to-one" and "one-to-many" data flows. And end-to-end encryption has different privacy goals when it is applied to IoT versus secure messaging. Encrypted chat apps essentially aim to lock out the developer, internet service providers, nation-state spies, and any other snoops. But in the IoT context, manufacturers still have access to their customers' data; the goal instead is to protect that data from everyone else, including Teserakt itself.
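Since Teserakt hasn't released E4's full server code, the sketch below is only a generic illustration of this "many-to-one" pattern: every device publishing under a topic shares a per-topic symmetric key with the manufacturer's backend, and payloads are encrypted and authenticated before they reach any message broker in between. The toy cipher here (a SHA-256 counter-mode keystream plus an HMAC tag) is emphatically not E4's protocol and not safe for production; a real deployment would use a vetted authenticated cipher.

```python
import hashlib
import hmac
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream from SHA-256 in counter mode -- illustration only,
    # NOT a vetted cipher.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def protect(topic_key: bytes, payload: bytes) -> bytes:
    # Encrypt, then authenticate nonce + ciphertext with an HMAC tag,
    # so the receiver can detect tampering in transit.
    nonce = secrets.token_bytes(12)
    ct = bytes(a ^ b for a, b in zip(payload, keystream(topic_key, nonce, len(payload))))
    tag = hmac.new(topic_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def unprotect(topic_key: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:12], blob[12:-32], blob[-32:]
    expected = hmac.new(topic_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")
    return bytes(a ^ b for a, b in zip(ct, keystream(topic_key, nonce, len(ct))))

# One key per topic: many devices publish under the same topic key,
# and the manufacturer's backend holds the key for each topic.
# The topic name and payload here are hypothetical.
topic_keys = {"sensors/temperature": secrets.token_bytes(32)}
blob = protect(topic_keys["sensors/temperature"], b'{"temp_c": 21.4}')
assert unprotect(topic_keys["sensors/temperature"], blob) == b'{"temp_c": 21.4}'
```

Because the intermediate broker only ever sees opaque blobs, neither it nor Teserakt can read the data; only holders of the topic key (the devices and the manufacturer's server) can.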

It also only hardens IoT defenses against a specific type of problem. E4 looks to improve defenses for information in transit and offer protection against data interception and manipulation. But just like encrypted chat services can't protect your messages if bad actors have access to your smartphone itself, E4 doesn't protect against a company's servers being compromised or improve security on IoT devices themselves.

"I think it's a good idea, but developers would need to keep in mind that it covers only one part of data protection," says Jatin Kataria, principal scientist at the IoT security firm Red Balloon. "What's the security architecture of the embedded device itself and the servers that are receiving this data? If those two endpoints are not that secure then end-to-end encryption will only get you so far."

Teserakt has been consulting with big tech companies in aerospace, health care, agriculture, and the automotive and energy sectors to develop E4 and plans to monetize the tool by charging companies to customize implementations for their specific infrastructure. The company has not yet open-sourced full server code for E4 alongside the protocol details and cryptography documentation it released, but says that final step will come as soon as the documentation is complete. Given the glacial pace of investment in IoT security overall, you probably shouldn't expect E4 to be protecting the whole industry anytime soon, anyway.


Global IoT Security Solution for Encryption Market 2020 Industry Growth, Competitive Analysis, Future Prospects and Forecast 2025 - Dagoretti News -…

The research report on the Global IoT Security Solution for Encryption Market offers regional as well as global market information and is estimated to collect a lucrative valuation over the forecast period. The report also covers the registered growth of the market over the anticipated timeline, along with a significant analysis of this space. Additionally, it focuses on several crucial aspects of the revenue recently recorded by the industry. Moreover, it analyzes the market segmentation as well as the large number of lucrative opportunities offered across the industry.

Request a sample of this report @ https://www.orbismarketreports.com/sample-request/61552

According to the report, multi-featured product offerings may have a strongly positive influence on the Global IoT Security Solution for Encryption Market and contribute substantially to its growth during the prediction period. The report also covers many other significant market trends and crucial market drivers that will affect market growth over the forecast period.

This study covers the following key players: Cisco Systems, Intel Corporation, IBM Corporation, Symantec Corporation, Trend Micro, Digicert, Infineon Technologies, ARM Holdings, Gemalto NV, Kaspersky Lab, CheckPoint Software Technologies, Sophos Plc, Advantech, Verizon Enterprise Solutions, Trustwave, INSIDE Secure SA, PTC Inc., AT&T Inc.

The Global IoT Security Solution for Encryption Market report includes substantial information on the market driving forces that strongly influence the vendor portfolio of the industry and their impact on market share in terms of revenue. Likewise, the report analyzes current market trends by classifying them as challenges or opportunities that the market will present in the coming years.

In addition, a shift in customer focus toward alternative products may restrict demand for these solutions among consumers, a factor that could hinder the market's growth. Furthermore, the Global IoT Security Solution for Encryption Market is highly concentrated, as only a few leading players are present in the market. However, major players are continually concentrating on innovative or multi-featured solutions that will offer huge benefits for their business.

The Global IoT Security Solution for Encryption Market research report focuses on manufacturers' data such as price, gross profit, shipment, business distribution, revenue, interview records, etc.; this information helps users better understand the major competitors. In addition, the report also covers the countries and regions of the globe, presenting the regional status of the market including volume and value, market size, and price structure.

Market segment by Type, the product can be split into: Software Platforms, Service

Access Complete Report @ https://www.orbismarketreports.com/global-iot-security-solution-for-encryption-market-size-status-and-forecast-2019-2025

Additionally, the Global IoT Security Solution for Encryption Market report will assist the client to recognize fresh and lucrative growth opportunities and build unique growth strategies through a complete analysis of the Global IoT Security Solution for Encryption Market and its competitive landscape and product offering information provided by the various companies. The Global IoT Security Solution for Encryption Market research report is prepared to offer the global as well as local market landscape and the number of guidelines related to the contemporary market size, market trends, share, registered growth, driving factors, and the number of dominant competitors of the Global IoT Security Solution for Encryption Market.

The Global IoT Security Solution for Encryption Market report covers all the significant information about market manufacturers, traders, distributors, and dealers. However, this information helps clients to know the product scope, market driving force, market overview, market risk, technological advancements, market opportunities, challenges, research findings, and key competitors. In addition, the Global IoT Security Solution for Encryption Market report will offer an in-depth analysis of the upstream raw material as well as downstream demand of the Global IoT Security Solution for Encryption Market.

Market segment by Application, split into: Healthcare, Information Technology (IT), Telecom, Banking, Financial Services, and Insurance (BFSI), Automotive, Others

For Enquiry before buying report @ https://www.orbismarketreports.com/enquiry-before-buying/61552

Some TOC Points:

1 Report Overview
2 Global Growth Trends
3 Market Share by Key Players
4 Breakdown Data by Type and Application

Continued

About Us:

With unfailing market gauging skills, Orbis Market Reports has been excelling in curating tailored business intelligence data across industry verticals. Constantly striving to expand our skill development, our strength lies in dedicated intellectuals with dynamic problem-solving intent, ever willing to mold boundaries to scale heights in market interpretation.

Contact Us:
Hector Costello, Senior Manager, Client Engagements
4144N Central Expressway, Suite 600, Dallas, Texas 75204, U.S.A.
Phone No.: USA: +1 (972)-362-8199 | IND: +91 895 659 5155


Global Encryption Software Market is slated to grow rapidly in the coming years: Dell, Eset, Gemalto, IBM, Mcafee – Galus Australis

Encryption Software Market Industry Forecast To 2026

Encryption software is segmented on the basis of components (solution and services), applications, deployment types, organization sizes, verticals, and regions. The services segment is expected to grow at the highest CAGR during the forecast period, and the solution segment is estimated to have the largest market size in 2017. Professional services have been widely adopted by organizations, as these services involve expert consulting, support and maintenance, and optimization and training for cybersecurity. However, the managed services segment is expected to grow at the highest CAGR during the forecast period, as managed security vendors provide extensive reporting capabilities for validating regulatory compliance with internal security policies for users.

The disk encryption application is estimated to hold the largest market share in 2017. The importance of encrypting a disk is that, if the encrypted disk is lost or stolen, the encrypted state of the drive remains unchanged, and only an authorized user will be able to access its contents. The cloud encryption application is expected to grow at the fastest rate during the forecast period.

This Research report comes up with the size of the global Encryption Software Market for the base year 2020 and the forecast between 2020 and 2026.

Major Manufacturer Detail: Dell, Eset, Gemalto, IBM, Mcafee, Microsoft, Pkware, Sophos, Symantec, Thales E-Security, Trend Micro, Cryptomathic, Stormshield

Get a PDF Sample Copy (including TOC, Tables, and Figures) @ https://garnerinsights.com/Global-Encryption-Software-Market-Size-Status-and-Forecast-2019-2025#request-sample

Types of Encryption Software covered are: On-premises, Cloud

Applications of Encryption Software covered are: Disk encryption, File/folder encryption, Database encryption, Communication encryption, Cloud encryption

The Global Encryption Software Market is studied on the basis of pricing, dynamics of demand and supply, total volume produced, and the revenue generated by the products. Manufacturing is studied with regard to various contributors such as manufacturing plant distribution, industry production, capacity, and research and development. The report also provides market evaluations including SWOT analysis, investments, return analysis, and growth-trend analysis.

To get this report at a discounted rate, click here: https://garnerinsights.com/Global-Encryption-Software-Market-Size-Status-and-Forecast-2019-2025#discount

Regional Analysis For Encryption Software Market

North America (the United States, Canada, and Mexico), Europe (Germany, France, UK, Russia, and Italy), Asia-Pacific (China, Japan, Korea, India, and Southeast Asia), South America (Brazil, Argentina, Colombia, etc.), The Middle East and Africa (Saudi Arabia, UAE, Egypt, Nigeria, and South Africa)

Get Full Report Description, TOC, Table of Figures, Chart, etc. @ https://garnerinsights.com/Global-Encryption-Software-Market-Size-Status-and-Forecast-2019-2025

What does this report deliver?

Reasons to buy:

Get Full Report @ https://garnerinsights.com/Global-Encryption-Software-Market-Size-Status-and-Forecast-2019-2025

In conclusion, the Encryption Software Market report is a reliable source for accessing market data that can accelerate your business. The report provides the principal regions and economic scenarios, along with item value, profit, supply, capacity, production, demand, market growth rate, forecasts, and so on. Besides, the report presents a SWOT analysis of new projects, an investment-feasibility analysis, and an investment-return analysis.

Contact Us:
Mr. Kevin Thomas
+1 513 549 5911 (US)
+44 203 318 2846 (UK)
Email: sales@garnerinsights.com


D-Link Expands Family Of Cameras With 128-bit Encryption – ChannelNews

D-Link has expanded its family of mydlink cameras, announcing new devices featuring built-in Bluetooth for quick setup, remote monitoring, and 128-bit encryption, the latest industry standard.

Announced at CES 2020, the new cameras include the DCS-8630LH and DCS-8627LH Full HD Outdoor Wi-Fi Spotlight Cameras, the DCS-8526LH Full HD Pan/Tilt Pro Wi-Fi Camera, the DCS-8302LH Full HD Indoor/Outdoor Wi-Fi Camera and the DCS-8000LHV2 Mini Full HD Wi-Fi Camera.

Availability and pricing for Australia will be announced later this year.

Each of the new cameras features 128-bit wireless encryption, allowing users to create a 26-character hex key for superior digital protection.
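A quick arithmetic check on that spec: 26 hexadecimal characters encode 104 bits of key material, not 128. The 24-bit per-packet initialization vector used below to reach 128 is an assumption based on the older Wi-Fi key-labeling convention, not something D-Link states.

```python
hex_chars = 26
key_bits = hex_chars * 4   # each hex character encodes 4 bits
assert key_bits == 104

# Assumption: the classic "128-bit" Wi-Fi figure counts a 24-bit
# per-packet initialization vector on top of the 104-bit key.
iv_bits = 24
assert key_bits + iv_bits == 128
```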

In addition, edge-based person detection has also been added, meaning these cameras won't miss a thing.

The IP65 waterproof DCS-8630LH and DCS-8627LH Full HD Outdoor Wi-Fi Spotlight Cameras sport a 400 lumen LED spotlight, with colour and infra-red night vision.

A Zigbee smart home hub has also been included inside the DCS-8630LH, making the device compatible with a range of smart assistants like Alexa and Google.

The DCS-8526LH Full HD Pan/Tilt Pro Wi-Fi Camera is an auto motion-tracking camera with a 360° horizontal view, 340° of pan, and 105° of tilt.

The indoor/outdoor DCS-8302LH also features the same edge-based detection feature, with a splash-proof design and a built-in microphone and speaker with siren for real-time alerts.

Finally, the DCS-8000LHV2 Mini Full HD Wi-Fi Camera features a compact design (3.7 x 4.5 x 9.5 cm), capable of recording Full HD video at 30fps with a 138° FOV.


How Artificial Intelligence Will Make Decisions In Tomorrow's Wars – Forbes

A US-built drone piloted by artificial intelligence. (Photo by Cristina Young/U.S. Navy via Getty Images)

Artificial intelligence isn't only a consumer and business-centric technology. Yes, companies use AI to automate various tasks, while consumers use AI to make their daily routines easier. But governments, and in particular militaries, also have a massive interest in the speed and scale offered by AI. Nation states are already using artificial intelligence to monitor their own citizens, and as the UK's Ministry of Defence (MoD) revealed last week, they'll also be using AI to make decisions related to national security and warfare.

The MoD's Defence and Security Accelerator (DASA) has announced the initial injection of £4 million in funding for new projects and startups exploring how to use AI in the context of the British Navy. In particular, the DASA is looking to support AI- and machine learning-based technology that will "revolutionise the way warships make decisions and process thousands of strands of intelligence and data."

In this first wave of funding, the MoD will share £1 million between nine projects as part of DASA's Intelligent Ship - The Next Generation competition. However, while the first developmental forays will be made in the context of the navy, the UK government intends any breakthroughs to form the basis of technology that will be used across the entire spectrum of British defensive and offensive capabilities.

"The astonishing pace at which global threats are evolving requires new approaches and fresh-thinking to the way we develop our ideas and technology," said UK Defence Minister James Heappey. "The funding will research pioneering projects into how A.I and automation can support our armed forces in their essential day-to-day work."

More specifically, the project will be looking at how four concepts (automation, autonomy, machine learning, and AI) can be integrated into UK military systems and how they can be exploited to increase British responsiveness to potential and actual threats.

"This DASA competition has the potential to lead the transformation of our defence platforms, leading to a sea change in the relationships between AI and human teams," explains Julia Tagg, the technical lead at the MoD's Defence Science and Technology Laboratory (Dstl). "This will ensure UK defence remains an effective, capable force for good in a rapidly changing technological landscape."

On the one hand, such an adaptation is a necessary response to the ever-changing nature of inter-state conflict. Instead of open armed warfare between states and their manned armies, geopolitical rivalry is increasingly being fought out in terms of such phenomena as cyber-warfare, micro-aggressive standoffs, and trade wars. As Julia Tagg explains, this explosion of multiple smaller events requires defence forces to be much more aware of what's happening in the world around them.

"Crews are already facing information overload with thousands of sources of data, intelligence, and information," she says. "By harnessing automation, autonomy, machine learning and artificial intelligence with the real-life skill and experience of our men and women, we can revolutionise the way future fleets are put together and operate to keep the UK safe."

That said, the most interesting, and worrying, element of the Intelligent Ship project is the focus on introducing AI-enabled "autonomy" to the UK's defence capabilities. As a number of reports from the likes of the Economist, MIT Technology Review, and Foreign Affairs have argued, AI-powered systems potentially come with a number of serious weaknesses. Like any code-based system, they're likely to contain bugs that can be attacked by enemies, while the existence of biases in data (as seen in the context of law and employment) indicates that algorithms may simply perpetuate the prejudices and mistakes of past human decision-making.

It's for such reasons that the increasing fondness of militaries for AI is concerning. Not only is the British government stepping up its investment in military AI, but the United States government earmarked $927 million for "Artificial Intelligence / Machine Learning investments to expand military advantage" in last year's budget. As for China, its government has reportedly invested "tens of billions of dollars" in AI capabilities, while Russia has recently outlined an ambitious general AI strategy for 2030. It's even developing 'robot soldiers,' according to some reports.

So besides being the future of everything else, AI is likely to be the future of warfare. It will increasingly process defence-related information, filter such data for the greatest threats, make defence decisions based on its programmed algorithms, and perhaps even direct combat robots. This will most likely make national militaries 'stronger' and more 'capable,' but it could come at the cost of innocent lives, and perhaps even the cost of escalation into open warfare. Because as the example of Stanislav Petrov in 1983 proves, automated defence systems can't always be trusted.


The World Economic Forum Jumps On the Artificial Intelligence Bandwagon – Forbes

Sergey Tarasov - stock.adobe.com

Last Friday, the World Economic Forum (WEF) sent out a press announcement about an artificial intelligence (AI) toolkit for corporate boards. The release pointed to a section of their web site titled Empowering AI Leadership. For some reason, at this writing, there is no obvious link to the toolkit, but the press team was quick to provide the link. It is well laid out in linked web pages, and some well-produced PDFs are available for download. For purposes of this article, I have only looked at the overview and the ethics section, so here are my initial impressions.

As would be expected from an organization focused on a select few in the world, the AI toolkit is high level. Boards of directors have broad but shallow oversight over companies, so there is no need to focus on details. Still, one wishes a bit more accuracy had been involved.

The description of AI is very nice. There are many definitions and, as I've repeatedly pointed out, the meanings of AI and of machine learning (ML) continue both to change and to differ from person to person. The problem in the setup is one that many people miss about ML. In the introductory module, the WEF claims "The breakthrough came in recent years, when computer scientists adopted a practical way to build systems that can learn." They support that with a link to an article that gets it wrong. The breakthrough mentioned in the article, the level of accuracy in an ML system, is far more driven by a non-AI breakthrough than a specific ML model.

When we studied AI in the 1980s, deep learning was known and models existed. What we couldn't do is run them. Hardware and operating systems didn't support the needed algorithms and the data volumes that were required to train them. Cloud computing is the real AI breakthrough. The ability to link multiple processors and computers into an efficient and larger virtual machine is what has powered the last decade's growth of AI.

I was also amused by the list of core AI techniques, where deep learning and neural networks are listed at the same level as the learning methods used to train them. I'm only amused, not upset, because boards don't need to know the difference to start, but it's important to introduce them to the terms. I did glance at the glossary, and it's a very nice set of high-level definitions of some of these, so interested board members can get some clarification.

On a quick tangent, their definition of bias is well done, as only a few short sentences reference both the real-world issue of bias and the danger of bias within an AI system.

Ethics are an important component (in theory) of the management of companies. The WEF points out at the beginning of that module that technology companies, professional associations, government agencies, NGOs, and academic groups have already developed many AI codes of ethics and professional conduct. The statement reminds me of the saying that standards are so important that everyone wants one of their own. The module then goes on to discuss a few of the issues with the different standards.

Where I differ from the WEF should be no surprise. This section strongly minimized governmental regulation. It's all up to the brave and ethical company. As Zuckerberg's decision that Facebook will allow lies in political advertisements, as long as it makes the firm and himself wealthier, makes clear, governments must be more active in setting guidelines for technology companies, both at large and within the AI arena. Two years ago, I discussed how the FDA is looking at how to handle machine learning. Governments move slowly, but they move. It's clear that companies need to be more aware of the changing regulatory environment. Ethical companies should be involved in helping governments set reasonable regulations, ones that protect consumers as well as companies, and should be preparing systems, in advance, to match where they think a proper regulatory environment will evolve.

The WEF's Davos meetings are, regardless of my own personal cynicism about them, where government and business leaders meet to discuss critical economic issues. It's great to see the WEF taking a strong look at AI and then presenting what looks like a very good, introductory, toolkit for boards of directors, but the need for strong ethical positions means that more is needed. It will be interesting to see how their positioning advances over the next couple of years.


Artificial Intelligence Could Help Scientists Predict Where And When Toxic Algae Will Bloom – WBUR

Climate-driven change in the Gulf of Maine is raising new threats that "red tides" will become more frequent and prolonged. But at the same time, powerful new data collection techniques and artificial intelligence are providing more precise ways to predict where and when toxic algae will bloom. One of those new machine learning prediction models has been developed by a former intern at Bigelow Labs in East Boothbay.

In a busy shed on a Portland wharf, workers for Bangs Island Mussels sort and clean shellfish hauled from Casco Bay that morning. Wholesaler George Parr has come to pay a visit.

"I wholesale to restaurants around town, and if there's a lot of mackerel or scallops, I'll ship into Massachusetts," he says.

But business grinds to a halt, he says, when blooms of toxic algae suddenly emerge in the bay, causing the dreaded red tide.

Toxins can build in filter feeders to levels that would cause "Paralytic Shellfish Poisoning" in human consumers. State regulators shut down shellfish harvests long before danger grows acute. But when a red tide swept into Casco Bay last summer, Bangs Island's harvest was shut down for a full 11 weeks.

"So when the restaurants can't get Bangs Island they're like 'Why can't we get Bangs Island?' It was really bad this summer. And nobody was happy."

As Parr notes, businesses of any kind hate unpredictability. And being able to forecast the onset or departure of a red tide has been a challenge, although that's changing with the help of a type of artificial intelligence called machine learning.

"We're coming up with forecasts on a weekly basis for each site. For me that's really exciting. That's what machine learning is bringing to the table," says Izzi Grasso, a recent Southern Maine Community College student who is now seeking a mathematics degree at Clarkson University.

Last summer Grasso interned at the Bigelow Laboratory for Ocean Sciences in East Boothbay. That's where she helped to lead a successful project to use cutting-edge "neural network" technology that is modeled on the human brain to better predict toxic algal blooms in the Gulf of Maine.

"Really high accuracy. Right around 95 percent or higher, depending on the way you split it up," she says.

Here's how the project worked: the researchers accessed a massive amount of data on toxic algal blooms from the state Department of Marine Resources. The data sets detailed the emergence and retreat of varied toxins in shellfish samples from up and down the coast over a three-year period.

The researchers trained the neural network to learn from those thousands of data points. Then it created its own algorithms to describe the complex phenomena that can lead up to a red tide.

"Then we tested how it would actually predict on unknown data," says Grasso.

Grasso says they fed in data from early 2017 which the network had never seen and asked it to forecast when and where the toxins would emerge.
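The evaluation Grasso describes, training on earlier records and then forecasting a later period the model has never seen, can be sketched with toy data. Everything below (the feature names, the synthetic labels, and the simple threshold rule standing in for the neural network) is hypothetical; it illustrates only the temporal train/test split, not Bigelow's actual model or data.

```python
import random

random.seed(0)

# Hypothetical stand-in for the weekly shellfish-toxin records: water
# temperature, prior-week cell counts, and a label for whether toxin
# levels exceeded the closure threshold, with 10% label noise.
def make_week(week):
    temp = 10 + 8 * random.random()
    cells = random.random()
    bloom = 1 if (temp > 14 and cells > 0.5) else 0
    if random.random() < 0.1:
        bloom = 1 - bloom
    return {"week": week, "temp": temp, "cells": cells, "bloom": bloom}

records = [make_week(w) for w in range(300)]

# Temporal split: fit on earlier weeks, evaluate on a held-out later
# period the model has never seen.
train = [r for r in records if r["week"] < 250]
test = [r for r in records if r["week"] >= 250]

def rule(temp_cut):
    return lambda r: 1 if (r["temp"] > temp_cut and r["cells"] > 0.5) else 0

# "Training": pick the temperature threshold that best fits the training
# weeks -- a trivial stand-in for fitting a neural network.
best_cut = max(range(10, 18),
               key=lambda c: sum(rule(c)(r) == r["bloom"] for r in train))
model = rule(best_cut)

accuracy = sum(model(r) == r["bloom"] for r in test) / len(test)
print(f"held-out accuracy: {accuracy:.2f}")
```

Splitting by time rather than at random matters here: a random split would leak nearby weeks between training and test sets and overstate how well the model forecasts genuinely new conditions.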

"I wasn't surprised that it worked, but I was surprised how well it worked, the level of accuracy and the resolution on specific sites and specific weeks," says Nick Record, Bigelow's big data specialist.

Record says that the network's accuracy, particularly in the week before a bloom emerges, could be a game-changer for the shellfish industry and its regulators.

Once it's ready, that is.

"Basically it works so well that I need to break it as many ways as I can before I really trust it."

Still, the work has already been published in a peer-reviewed journal, and it is getting attention from the scientific community. Don Anderson is a senior scientist at the Woods Hole Oceanographic Institution who is working to expand the scope of data-gathering efforts in the Gulf.

"The world is changing with respect to the threat of algal blooms in the Gulf of Maine," he says. "We used to worry about only one toxic species and human poisoning syndrome. Now we have at least three."

Anderson notes, though, that machine-learning networks are only as good as the data that is fed into them. The Bigelow network, for instance, might not be able to account for singular oceanographic events that are short and sudden or that haven't been captured in previous data-sets such as a surge of toxic cells that his instruments detected off Cutler last summer.

"With an instrument moored in the water there, and we in fact got that information, called up the state of Maine and said you've got to be careful, there's a lot of cells moving down there, and they actually had a meeting, they implemented a provisional closure just on the basis of that information, which was ultimately confirmed with toxicity once they measured it," says Anderson.

Anderson says that novel modeling techniques such as Bigelow's, coupled with an expanded number of high-tech monitoring stations, like Woods Hole is pioneering in the Gulf, could make forecasting toxic blooms as simple as checking the weather report.

"That situational awareness is what everyone's striving to produce in the field of monitoring and management of these toxic algal blooms, and it's going to take a variety of tools, and this type of artificial intelligence is a valuable part of that arsenal."

Back at the Portland wharf, shellfish dealer George Parr says the research sounds pretty promising.

"Forewarned is fore-armed," Parr says. "If they can figure out how to neutralize the red tide, that'd be even better."

Bigelow scientists and former intern Izzi Grasso are now working to look "under the hood" of the neural network, to figure out how, exactly, it arrives at its conclusions. They say that could provide clues not only about how to predict toxic algal blooms, but about how to prevent them.

This story is a production of New England News Collaborative. A version of this story was originally published by Maine Public Radio.

Read the rest here:
Artificial Intelligence Could Help Scientists Predict Where And When Toxic Algae Will Bloom - WBUR

The Ethical Upside to Artificial Intelligence – War on the Rocks

According to some, artificial intelligence (AI) is the new electricity. Like electricity, AI will transform every major industry and open new opportunities that were never possible. However, unlike electricity, the ethics surrounding the development and use of AI remain controversial, which is a significant element constraining AI's full potential.

The Defense Innovation Board (DIB) released a paper in October 2019 that recommends the ethical use of AI within the Defense Department. It described five principles of ethically used AI: responsible, equitable, traceable, reliable, and governable. The paper also identifies measures the Joint Artificial Intelligence Center, the Defense Advanced Research Projects Agency (DARPA), and the U.S. military branches are taking to study the ethical, moral, and legal implications of employing AI. While the paper primarily focused on the ethics surrounding the implementation and use of AI, it also argued that AI must have the ability to detect and avoid unintended harm. This article seeks to expand on that idea by exploring AI's ability to operate within the Defense Department using an ethical framework.

Designing an ethical framework (a set of principles that guides ethical choice) for AI, while difficult, offers a significant upside for the U.S. military. It can strengthen the military's shared moral system, enhance ethical considerations, and increase the speed of decision-making in a manner that provides decision superiority over adversaries.

AI Is Limited without an Ethical Framework

Technology is increasing the complexity and speed of war. AI, the use of computers to perform tasks normally requiring human intelligence, can be a means of speeding decision-making. Yet, due to a fear of machines' inability to consider ethics in decisions, organizations are limiting AI's scope to focus on data-supported decision-making: using AI to summarize data while keeping human judgment as the central processor. For example, leaders within the automotive industry received backlash for programming self-driving cars to make ethical judgments. Some professional driving organizations have demanded that these cars be banned from the roads for at least 50 years.

This backlash, while understandable, misses the substantial upside that AI can offer to ethical decision-making. AI reflects human input and operates on human-designed algorithms that set parameters for the collection and correlation of data to facilitate machine learning. As a result, it is possible to build an ethical framework that reflects a decision-maker's values. Of course, when the data that humans supply is biased, AI can mimic its trainers by discriminating on gender and race. Biased algorithms, to be sure, are a drawback. However, bias can be mitigated by techniques such as counterfactual fairness, Google AI's recommended practices, and algorithms such as those provided by IBM's AI Fairness 360 toolkit. Moreover, AI's processing power makes it essential for successfully navigating ethical dilemmas in a military setting, where complexity and time pressure often obscure underlying ethical tensions.
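
One of the simplest bias checks behind toolkits like the ones named above is demographic parity: comparing a model's positive-outcome rate across groups. The sketch below is a minimal, self-contained illustration with invented data, not the API of any particular fairness toolkit.

```python
# Minimal demographic-parity check: compare positive-prediction rates
# across groups. (Toy data; a real audit would use a dedicated
# toolkit such as IBM's AI Fairness 360.)

def parity_gap(predictions, groups):
    """Return the largest difference in positive-prediction rate
    between any two groups (0.0 means perfect parity)."""
    rates = {}
    for pred, group in zip(predictions, groups):
        hits, total = rates.get(group, (0, 0))
        rates[group] = (hits + pred, total + 1)
    ratios = [hits / total for hits, total in rates.values()]
    return max(ratios) - min(ratios)

# Example: the model favors group "a" (75% positive) over "b" (25%).
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(parity_gap(preds, groups))  # 0.5
```

A large gap is a signal to retrain or reweight, which is the kind of mitigation the techniques cited above formalize.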

A significant obstacle to building an ethical framework for AI is a fundamental element of war: the trade-off between human lives and other military objectives. While international humanitarian law provides a codification of actions, many of which have ethical implications, it does not answer all questions related to combat. It primarily focuses on defining combatants, the treatment of combatants and non-combatants, and acceptable weapons. International humanitarian law does not deal with questions concerning how many civilian deaths are acceptable for killing a high-value target, or how many friendly lives are worth sacrificing to take control of a piece of territory. While, under international law, these are examples of military judgments, they remain ethical decisions for the military leader responsible.

Building ethical frameworks into AI will help the military comply with international humanitarian law and leverage new opportunities while predicting and preventing costly mistakes in four ways.

Four Military Benefits of an Ethical AI Framework

First, designing an ethical framework for AI will benefit the military by forcing its leaders to reexamine existing ethical frameworks. In order to supply the benchmark data on which AI can learn, leaders will need to define, label, and score choice options in ethical dilemmas. In doing so they will have three primary theoretical frameworks to leverage for guidance: consequentialist, deontological, and virtue. While consequentialist ethical theories focus on the consequences of the decision (e.g., expected lives saved), deontological ethical theories are concerned with compliance with a system of rules (refusing to lie based on personal beliefs and values despite the possible outcomes). Virtue ethical theories are concerned with instilling the right amount of a virtuous quality into a person (too little courage is cowardice; too much is rashness; the right amount is courage). A common issue cited as an obstacle to machine ethics is the lack of agreement on which theory or combination of theories to follow; leaders will have to overcome this obstacle. This introspection will help them better understand their ethical framework, clarify and strengthen the military's shared moral system, and enhance human agency.
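
The define-label-score exercise described above can be sketched as a toy blend of the three theories. Every option, score, and weight below is invented purely for illustration; a real framework would be built from the benchmark data leaders supply.

```python
# Toy blended ethical scoring: each choice option gets a score under
# each theory, and leaders' weights express which theories dominate.
# All values are hypothetical.

WEIGHTS = {"consequentialist": 0.4, "deontological": 0.4, "virtue": 0.2}

options = {
    "strike_now":   {"consequentialist": 0.9, "deontological": 0.3, "virtue": 0.5},
    "delay_strike": {"consequentialist": 0.6, "deontological": 0.8, "virtue": 0.7},
}

def blended_score(scores):
    """Weighted sum of per-theory scores for one option."""
    return sum(WEIGHTS[theory] * value for theory, value in scores.items())

best = max(options, key=lambda name: blended_score(options[name]))
print(best)  # delay_strike
```

The point of the sketch is the introspection it forces: choosing the weights *is* the disagreement over theories that the article identifies as the obstacle.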

Second, AI can recommend decisions that consistently reflect a leader's preferred ethical decision-making process. Even in high-stakes situations, human decision-making is prone to influence from factors that have little or nothing to do with the underlying choice. Things like poor nutrition, fatigue, and stress, all common in warfare, can lead to biased and inconsistent decision-making. Other influences, such as acting in one's self-interest or extreme emotional responses, can also contribute to military members making unethical decisions. AI, of course, does not become fatigued or emotional. The consistency of AI allows it to act as a moral adviser, providing decision-makers with morally relevant data they can rely on as their judgment becomes impaired. Overall, this can increase the confidence of young decision-makers, a concern the commander of U.S. Army Training and Doctrine Command brought up early last year.

Third, AI can help ensure that U.S. military leaders make the right ethical choice, however they define that, in high-pressure situations. Overwhelming the adversary is central to modern warfare. Simultaneous attacks and deception operations aim to confuse decision-makers to the point where they can no longer use good judgment. AI can process and correlate massive amounts of data to provide not only response options, but also probabilities that a given option will result in an ethically acceptable outcome. Collecting battlefield data, processing the information, and making an ethical decision is very difficult for humans in a wartime environment. Although the task would still be extremely difficult, AI can gather and process information more efficiently than humans. This would be valuable for the military. For example, AI that is receiving and correlating information from sensors across the entire operating area could estimate non-combatant casualties, the proportionality of an attack, or social reactions from observing populations.

Finally, AI can also extend the time allowed to make ethical decisions in warfare. For example, a central concern in modern military fire support is the ability to outrange the opponent, to be able to shoot without being shot. The race to extend the range of weapons to outpace adversaries continues to increase the time between launch and impact. Future warfare will see weapons that are launched and enter an area that is so heavily degraded and contested that the weapon will lose external communication with the decision-maker who chose to fire it. Nevertheless, as the weapon moves closer to the target, it could gain situational awareness on the target area and identify changes pertinent to the ethics of striking a target. If equipped with onboard AI operating with an ethical framework, the weapon could continuously collect, correlate, and assess the situation throughout its flight to meet the parameters of its programmed framework. If the weapon identified a change in civilian presence or other information altering the legitimacy of a target, the weapon could divert to a secondary target, locate a safe area to self-detonate, or deactivate its fuse. This concept could apply to any semi- or fully autonomous air, ground, maritime, or space assets. In future conflicts, the U.S. military could not afford a weapon system that deactivates or returns to base each time it loses communication with a human. If an AI-enabled weapon loses the ability to receive human input, for whatever reason, an ethical framework will allow the mission to continue in a manner that aligns the weapon's actions with the intent of the operator.
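
The divert, self-detonate, or deactivate fallback described above amounts to a decision gate that is re-evaluated on each in-flight update. The sketch below is a purely hypothetical illustration of that control flow; the field names, thresholds, and fallback order are invented, not any actual weapon logic.

```python
# Hypothetical in-flight ethical gate: on each onboard assessment,
# either continue to the target or fall back per the programmed
# framework. All fields and thresholds are invented for illustration.

def reassess(assessment, max_civilians=0):
    """Pick an action from the latest onboard assessment."""
    if assessment["civilians_estimated"] <= max_civilians:
        return "continue"
    if assessment.get("secondary_target_clear"):
        return "divert_to_secondary"
    if assessment.get("safe_area_available"):
        return "self_detonate_safely"
    return "deactivate_fuse"

# Simulated updates during flight, from launch toward impact.
updates = [
    {"civilians_estimated": 0},
    {"civilians_estimated": 3, "secondary_target_clear": True},
    {"civilians_estimated": 2, "secondary_target_clear": False,
     "safe_area_available": False},
]
print([reassess(u) for u in updates])
# ['continue', 'divert_to_secondary', 'deactivate_fuse']
```

The key design point matches the article's argument: the fallback chain encodes the operator's intent in advance, so losing the communications link does not mean losing ethical constraint.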

Conclusion

Building an ethical framework for AI will help clarify and strengthen the military's shared moral system. It will allow AI to act as a moral adviser and provide feedback as the judgment of decision-makers becomes impaired. Similarly, an ethical framework for AI will maximize the utility of its processing power to help ensure ethical decisions when human cognition is overwhelmed. Lastly, providing AI with an ethical framework can extend the time available to make ethical decisions. Of course, AI is only as good as the data it is provided.

AI should not replace U.S. military leaders as ethical decision-makers. Instead, if correctly designed, AI should clarify and amplify the ethical frameworks that U.S. military leaders already bring to war. It should help leaders grapple with their own moral frameworks, and help bring those frameworks to bear by processing more data than any decision-maker could, in places where no decision-maker could go.

AI may create new programming challenges for the military, but not new ethical challenges. Grappling with the ethical implications of AI will help leaders better understand moral tradeoffs inherent in combat. This will unleash the full potential of AI, and allow it to increase the speed of U.S. decision-making to a rate that outpaces its adversaries.

Ray Reeves is a captain in the U.S. Air Force and a tactical air control party officer and joint terminal attack controller (JTAC) instructor and evaluator at the 13th Air Support Operations Squadron at Fort Carson, Colorado. He has multiple combat deployments and is a doctoral student at Indiana Wesleyan University, where he studies organizational leadership. The views expressed here are his alone and do not necessarily reflect those of the U.S. government or any part thereof. LinkedIn.

Image: U.S. Marine Corps (Photo by Lance Cpl. Nathaniel Q. Hamilton)

See more here:
The Ethical Upside to Artificial Intelligence - War on the Rocks

Seizing Artificial Intelligence’s Opportunities in the 2020s – AiThority

Artificial Intelligence (AI) has made major progress in recent years. But even milestones like AlphaGo or the narrow AI used by big tech only scratch the surface of the seismic changes yet to come.

Modern AI holds the potential to upend entire professions while unleashing brand new industries in the process. Old assumptions will no longer hold, and new realities will separate those who are swallowed by the tides of change from those able to anticipate and ride the AI wave headlong into a prosperous future.

Here's how businesses and employees can both leverage AI in the 2020s.

Like many emerging technologies, AI comes with a substantial learning curve. As a recent McKinsey report highlights, AI is a slow-burn technology that requires heavy upfront investment, with returns only ramping up well down the road.

Because of this slow burn, an AI front-runner and an AI laggard may initially appear to be on equal footing. The front-runner may even be a bit behind during early growing pains. But as the effects of AI adoption kick in, the gap between the two widens dramatically and exponentially. McKinsey's models estimate that within around 10 years, the difference in cumulative net change in cash flow between front-runners and laggards could be as high as 145 percent.
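
The slow-burn dynamic can be illustrated with back-of-the-envelope compounding: the front-runner pays an upfront cost and trails at first, then pulls away. The growth rates and investment figure below are invented for illustration; they are not McKinsey's actual model.

```python
# Toy cumulative cash flow: an upfront investment, then a cash flow
# that compounds annually. All numbers are invented for illustration.

def cumulative_cash_flow(years, growth, upfront=0.0, base=100.0):
    """Cumulative net cash flow after `years`, starting from `base`
    per year and growing by `growth` annually, minus `upfront`."""
    total, flow = -upfront, base
    for _ in range(years):
        total += flow
        flow *= 1 + growth
    return total

front = cumulative_cash_flow(10, 0.15, upfront=150)  # invests early, 15%/yr
laggard = cumulative_cash_flow(10, 0.03)             # no investment, 3%/yr
print(front > laggard)  # True: the gap emerges only after several years
```

Running the same comparison at year one shows the front-runner behind (the "early growing pains"); by year ten the compounding has reversed the ranking, which is the shape of the McKinsey finding.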

The first lesson for any business hoping to seize new AI opportunities is to start making moves to do so right now.


Contrary to popular opinion, the coming AI wave will be mostly a net positive for employees. The World Economic Forum found that by 2022, AI and machine learning will have created over 130 million new jobs. Though impressive, these gains will not be distributed evenly.

Jobs characterized by unskilled and repetitive tasks face an uncertain future, while jobs in need of greater social and creative problem-solving will spike. According to McKinsey, the coming decade could see a 10 percent fall in the share of low digital skill jobs, with a corresponding rise in the share of jobs requiring high digital skill.

So how can employees successfully navigate the coming future of work? One place to start is to investigate the past. Over half a century ago, in 1967, the first ATM was installed outside Barclays Bank in London. At the time, the thought of bank tellers surviving the introduction of automated teller machines seemed impossible. ATMs caught on like wildfire, cut into tellers' hours, offered unbeatable flexibility and convenience, and should have all but wiped tellers out.

But, in fact, exactly the opposite happened. No longer having to handle simple deposits freed tellers up to engage with more complex and social facets of the business. They started advising customers on mortgages and loans, forging relationships and winning loyalty. Most remarkable of all, in the years following the ATM's introduction, the total number of tellers employed worldwide didn't fall off a cliff. In fact, it rose higher than ever.

Though AI could potentially threaten some types of jobs, many jobs will see rising demand. Increased reliance on automated systems for core business functions frees up valuable employee time and enables employees to focus on different areas to add even more value to the company.

As employees grow increasingly aware of the changing nature of work, they are also clamoring for avenues for development, aware that they need to hold a variety of skills to remain relevant in a dynamic job market. Companies will, therefore, need to provide employees with a wide range of experiences and the opportunity to continuously enhance their skillsets, or suffer high turnover. This is already a vital issue for businesses, with the cost of losing an employee equating to 90 to 200 percent of their annual salary, costing each large enterprise an estimated $400 million a year. If employees feel their role is too restrictive or that their organization is lagging, their likelihood of leaving will climb.

The only way to capture the full value of AI for business is to retain the highly skilled employees capable of wielding it. Departmental silos and rigid job descriptions will have no place in the AI future.


For employees to maximize their chances of success in the face of rapid AI advancement, they must remain flexible and continuously acquire new skills. Both businesses and employees will need to realign their priorities in accordance with new realities. Workers will have to be open to novel ideas and perspectives, while employers will need to embrace the latest technological advancements.

Fortunately, the resources and avenues for ambitious employers to pursue continued growth for their employees are blossoming. Indeed, the very AI advancements prompting the need for accelerated career development paths are also powering technologies to maximize and optimize professional enrichment.

AI is truly unlocking an exciting new future of work. Smart algorithms now enable hyper-flexible workplaces to seamlessly shuffle and schedule employee travel, remote work, and mentorship opportunities. At the cutting edge, these technologies can even let employees divide their time between multiple departments across their organization. AI can also tailor training and reskilling programs to each employee's unique goals and pace.

The rise of AI holds the promise of great change, but if properly managed, it can be a change for the better.


See the rest here:
Seizing Artificial Intelligence's Opportunities in the 2020s - AiThority

How Automation and Artificial Intelligence Can Boost Cybersecurity – Robotics and Automation News

Cybercriminals are always evolving their efforts and coming up with more advanced ways to target their victims. And while there are many tools available to stop them, there is a lot of room for improvement, especially if you take automation into account.

Machine learning and artificial intelligence are playing a significant role in cybersecurity. Automation tools can prevent, detect, and deal with vast numbers of cyber threats far more efficiently and quickly than humans, and their role will only continue to expand down the road. To that end, here's a quick look at the significant differences AI/ML technologies can make to corporate cybersecurity approaches.

Mitigating the risks posed by omnipresent technology

Technology has permeated every facet of our personal and work lives. Above anything else, it has increased the attack surface, and that has become a massive problem for companies in recent years: they have to account for many applications and devices.

The problem is, there aren't enough skilled professionals to contend with all those security risks, which often results in gaping vulnerabilities.

To add to that problem, many companies cannot afford the cybersecurity teams needed to secure their applications and systems. Startups, in particular, are at risk: they lack established security operations and the funds to build them.

Companies need to automate at least some of the processes necessary to protect their systems and devices from outside attacks. Otherwise, they stay vulnerable.

Criminals are using every tool at their disposal to make sure they have as many points of entry as possible. For example, not even firewalls can protect a system like they used to, as criminals keep inventing new ways to get around them.

There's no way to contend with this manually, because attackers use automated methods to test the defenses of every connected device.

Better threat detection and management

The scale of attacks and the vast amounts of data available to analyze make keeping up with the latest threats a challenging task. Automated machine learning applications are much better suited to constant vigilance and systematic threat identification.

These systems are learning all the time. They can evolve alongside growing threat vectors to spot unusual behaviors, which allows them to identify and process sophisticated attack methods.
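
A minimal sketch of this kind of "unusual behaviour" flagging is a standard-deviation rule over a behavioral baseline. The traffic counts below are invented, and production systems use far richer models, but the principle (score new observations against learned history) is the same.

```python
# Minimal anomaly flag: score each new observation against the
# historical mean and standard deviation. Toy data; real detection
# engines learn far richer behavioral baselines.

from statistics import mean, stdev

def is_anomalous(history, value, threshold=3.0):
    """Flag `value` if it sits more than `threshold` standard
    deviations from the historical mean."""
    mu, sigma = mean(history), stdev(history)
    return abs(value - mu) > threshold * sigma

logins_per_hour = [12, 15, 11, 14, 13, 12, 16, 14]  # normal baseline
print(is_anomalous(logins_per_hour, 13))  # False: normal traffic
print(is_anomalous(logins_per_hour, 90))  # True: possible attack
```

The baseline itself is updated as new data arrives, which is how such a detector "evolves alongside growing threat vectors" rather than matching fixed signatures.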

But most companies are not making use of these game-changing technologies. They continue to rely on outdated methods, yet conventional tools and applications cannot keep up with ill-intentioned actors, who keep leveraging more sophisticated capabilities in their attacks.

Cases such as the Outlaw cryptojacking attacks prove that hackers know how to use new technology to avoid detection. And they are quite successful in their endeavors. The only way to cope with such an onslaught of threats is through machine learning/artificial intelligence engines, which watch over systems and raise alerts about any suspicious or unusual behaviour.

Automating mundane cybersecurity processes

Many tools exist to cover the security needs of businesses. For example, most companies ask their employees to use virtual private networks (VPNs). What is a VPN? It's a service that encrypts users' connections to the internet (https://nordvpn.com/what-is-a-vpn/), making sure outsiders can't intercept any data a user transfers over the network.

And while that covers data in transit, there's still a risk employees will fall for phishing emails or install ransomware by accident.

Security researchers cannot keep up with the overload of threat alert notifications, many of which are false positives. But you can't ignore them: criminals know how to hide in all that noise. It makes threat identification a monumental task for security operations teams.
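
The triage that automated tooling performs here boils down to suppressing duplicates and surfacing only high-priority alerts for human review. The sketch below illustrates that idea; the alert fields, scores, and threshold are all hypothetical.

```python
# Toy alert triage: collapse duplicate alerts and surface only those
# above a priority score, so analysts see the few that need human
# judgment. Alert fields and scores are hypothetical.

def triage(alerts, min_score=0.7):
    seen, queue = set(), []
    for alert in alerts:
        key = (alert["rule"], alert["host"])
        if key in seen:
            continue          # duplicate noise: suppress
        seen.add(key)
        if alert["score"] >= min_score:
            queue.append(alert)
    return sorted(queue, key=lambda a: a["score"], reverse=True)

alerts = [
    {"rule": "port_scan", "host": "web1", "score": 0.4},
    {"rule": "port_scan", "host": "web1", "score": 0.4},  # duplicate
    {"rule": "priv_esc",  "host": "db1",  "score": 0.9},
]
print([a["rule"] for a in triage(alerts)])  # ['priv_esc']
```

Everything below the threshold is still logged, but only the prioritized queue consumes analyst time, which is the point of the automation.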

Thus, providing information security specialists with automated tools is essential. It lets them focus their skills in areas where they're most needed, since mundane everyday tasks take up so much of technicians' time.

But automation tools are capable of handling those tasks, freeing up time for more valuable work that needs a human touch, such as threat hunting and attribution.

Considerable increase in risk

The world has grown to incorporate technology into almost every facet of daily life, and with that comes a considerable increase in risk. Therefore, machine learning and artificial intelligence have become an indispensable part of cybersecurity.

They fulfil a vital role that human labor simply can't. Automation is the answer: it can help cybersecurity specialists tackle the sheer number of cyberthreats in corporate and personal applications.


Continue reading here:
How Automation and Artificial Intelligence Can Boost Cybersecurity - Robotics and Automation News