Want to benefit from automation? Start early, and think big

Posted: March 13, 2022 at 8:06 am

A holistic approach is essential to making sure that the security of machines involved in a large-scale autonomous project can be trusted.

The coming world of autonomous machines will transform how we live, travel, work and engage with the world. Industries from agriculture and infrastructure to logistics look set to enjoy incredible commercial opportunities. Businesses will benefit from augmentation of human effort that could improve safety, productivity, efficiency, comfort and more. But there's a big question here: how can we trust the intelligence behind it?

Trust is a fascinating, multi-dimensional topic, with both objective (grounded in high-rigour disciplines) and subjective (rooted in human behaviour) perspectives. Assurances can be given on each dimension, giving us confidence that a system will behave, in any event, in a way that is not only safe but in keeping with what humans reasonably expect of that machine at any given time.

Nevertheless, challenges remain, and the largest and most dangerous tend to be systemic. They are the result of many factors: complexity; connectivity, which blurs boundaries; uncertain cascading impacts; and unclear responsibility, attribution and accountability.

To illustrate this, I'm going to dip into the world of agritech, more specifically an autonomous harvesting robot. Here, the issue of trust isn't just with the machine itself. This is because the trend, in agriculture and many other areas, is migration from a unit-sale model to a service-provision model. Thus, the robot itself will be only part of a service encompassing data provision, harvesting, fitting out a farm with intelligent components like sensors, and managing the entire process every season.

We now have a cyber-physical system, able to make its own decisions and connected to a wider world. Therefore, addressing trust must extend to all these areas. The service model also means that the owner and operator of the harvesting process may no longer solely be the farmer. This blurs the boundaries of who (or what) makes the decisions on the safety of the operator and those in proximity, operational effectiveness and the cyber security of both robot and associated data services. Furthermore, how do we know we can trust those decisions?

Plenty of concerns, then. But here we can pull back to explore the high-level strategic view of how we can maximise the effectiveness of any solution given technical, financial and regulatory constraints. It seems an obvious point that resources in any business, in any context, are finite. The real question is whether we know where those thresholds are, be they technical, financial or regulatory. Part of this is informed by the business strategy, and part of it comes from approaching the situation holistically.

This holistic view is crucial, as it allows us to be adaptable rather than deterministic, something that is especially important when considering the pace of technology, the increased demand for agility, and the ever-evolving threat landscape. This means using probabilistic approaches (whether this be in the realms of opportunity or risk) and taking multiple perspectives. These range from the technology bundles that are most suitable (what sensors or algorithms to use or which verification and validation strategy to deploy), to socio-technical aspects such as finding the right skillsets, to integrating philosophies from different disciplines such as safety and security.
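As a concrete, if simplified, picture of what a probabilistic approach means in practice, here is a minimal sketch in Python. It treats risk exposure as a distribution rather than a single deterministic number; the likelihood and impact figures are assumptions invented for illustration, not drawn from any real assessment.

```python
import random

def simulate_risk_exposure(trials=10_000):
    """Monte Carlo sketch: risk as a distribution, not a single number.

    The likelihood and impact ranges below are purely illustrative.
    """
    exposures = []
    for _ in range(trials):
        likelihood = random.uniform(0.05, 0.30)  # assumed chance of compromise per season
        impact = random.triangular(10_000, 250_000, 60_000)  # assumed cost of a lost harvest cycle
        exposures.append(likelihood * impact)
    exposures.sort()
    return {
        "median": exposures[trials // 2],
        "p95": exposures[int(trials * 0.95)],
    }

if __name__ == "__main__":
    print(simulate_risk_exposure())
```

A single 'expected loss' figure would hide the tail that the 95th-percentile value exposes, which is exactly the kind of insight a deterministic treatment tends to miss.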

This high-level viewpoint is where the best opportunities lie. If we spot these systemic issues early, we can address them well before implementation becomes irreversible. To ensure the right strategy, several aspects need to be considered. First, we need to clarify commercial constraints such as cost or risk appetite, considering regulations that could influence development. We then draw out high-level technical constraints and requirements such as latency, power, memory and so on. The security strategy for the system would be created in tandem at this stage.
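As a purely illustrative example of how those constraints might be drawn out and kept visible, the hypothetical sketch below records commercial and technical thresholds in one place so that later security decisions can be checked against them. The field names and figures are invented for illustration, not taken from any real project.

```python
from dataclasses import dataclass

@dataclass
class SystemConstraints:
    """Commercial and technical thresholds captured at the vision-setting stage.

    All values are hypothetical placeholders.
    """
    budget_gbp: float        # commercial cost ceiling
    risk_appetite: str       # e.g. "low", "medium", "high"
    max_latency_ms: float    # end-to-end control-loop latency budget
    power_budget_w: float    # power available for security functions on the robot
    memory_budget_mb: float  # memory available for security functions

    def allows(self, latency_ms: float, power_w: float, memory_mb: float) -> bool:
        """Check whether a candidate security measure fits within the technical budget."""
        return (latency_ms <= self.max_latency_ms
                and power_w <= self.power_budget_w
                and memory_mb <= self.memory_budget_mb)

# A hypothetical harvesting robot
harvester = SystemConstraints(budget_gbp=50_000, risk_appetite="low",
                              max_latency_ms=20, power_budget_w=5,
                              memory_budget_mb=64)
print(harvester.allows(latency_ms=8, power_w=1.5, memory_mb=16))  # True
```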

After this vision-setting exercise, a first pass at a threat analysis or cyber-security risk assessment (both probabilistic exercises) would be essential in determining where to focus initial efforts. A layer down from this would be to balance the risks against the constraints and use those insights to choose optimal technical solutions. These might be the right cryptographic mechanism (as computational overhead could affect latency), the right access control policies (which could introduce complexity and negatively affect user experience), the right identity management system (which could be as light or heavy as desired), and so on.
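One way to picture that balancing act is a simple weighted scoring of candidate mechanisms against the risk reduction they offer and the overhead they incur. The sketch below is hypothetical: the options, scores and weights are invented for illustration and do not represent a real assessment method.

```python
# Hypothetical trade-off scoring: risk reduction gained versus overhead incurred.
# Candidate options, scores and weights are illustrative placeholders.
candidates = {
    "Hardware-accelerated symmetric crypto": {"risk_reduction": 0.7, "latency_cost": 0.1, "ux_cost": 0.0},
    "Software-only symmetric crypto":        {"risk_reduction": 0.8, "latency_cost": 0.5, "ux_cost": 0.0},
    "Role-based access control":             {"risk_reduction": 0.6, "latency_cost": 0.1, "ux_cost": 0.3},
    "Attribute-based access control":        {"risk_reduction": 0.8, "latency_cost": 0.2, "ux_cost": 0.6},
}

WEIGHTS = {"risk_reduction": 1.0, "latency_cost": -0.6, "ux_cost": -0.4}

def score(attrs: dict) -> float:
    """Weighted sum: reward risk reduction, penalise latency and user-experience overhead."""
    return sum(WEIGHTS[k] * v for k, v in attrs.items())

for name, attrs in sorted(candidates.items(), key=lambda kv: score(kv[1]), reverse=True):
    print(f"{score(attrs):+.2f}  {name}")
```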

Another key practice is to monitor the system from the start. Fortunately, our holistic viewpoint, considering varied stakeholders and assurance techniques, can tell us the right degree and type of monitoring before any data is gathered. Early planning for mitigation can also be performed, both for monitoring purposes and based on earlier opportunity or risk analyses.
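As a hypothetical illustration of deciding the degree and type of monitoring before any data is gathered, the sketch below maps invented assets to the signals worth collecting and the stakeholder who acts on them; none of the entries come from a real deployment.

```python
# Hypothetical monitoring plan, defined before deployment rather than bolted on later.
# Assets, signals, frequencies and responsible parties are illustrative only.
monitoring_plan = [
    {"asset": "harvesting robot",   "signal": "firmware integrity attestation", "frequency": "on boot",    "owner": "service provider"},
    {"asset": "field sensor mesh",  "signal": "anomalous telemetry volume",     "frequency": "continuous", "owner": "service provider"},
    {"asset": "yield data service", "signal": "failed authentication attempts", "frequency": "hourly",     "owner": "data platform team"},
    {"asset": "operator tablet",    "signal": "out-of-hours remote access",     "frequency": "continuous", "owner": "farmer"},
]

def coverage_by_owner(plan):
    """Summarise who is responsible for watching what, as a quick sanity check on gaps."""
    summary = {}
    for entry in plan:
        summary.setdefault(entry["owner"], []).append(entry["asset"])
    return summary

print(coverage_by_owner(monitoring_plan))
```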

There will always be a place for the design and implementation of specific solutions to address low-level issues. However, this is akin to treating symptoms rather than the root cause. Large systemic issues will always require systemic multi-layered solutions and the best way to achieve that is to start early and think big.

Dr Madeline Cheah is principal security technologist at Cambridge Consultants.
