Dynamic Assurance Cases for Adaptive Autonomous Systems

Ensuring Autonomous Systems Don’t Get Us Killed

Alright, folks, let’s dive into the world of autonomous systems—those little marvels of technology that like to think for themselves. Now, before you start picturing robots taking over and replacing your barista, let’s talk about something a bit more serious: safety and security. Because let’s face it, the last thing we want is a self-driving car that thinks speed limits are just a suggestion. A bit like those chaps you see on the motorway who think the rules of the road apply to everyone but them!

What’s the Deal with Assurance?

So, what keeps these silicon overlords in check? Well, assurance cases! Think of them as the argument that explains why we can trust these systems not to drive us off a cliff. An assurance case is like a school report card for machines—if it passes, great! If not, well, we might want to rethink that whole ‘let it learn from its mistakes’ approach—like leaving a teenager home alone for the first time.
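
To make the idea concrete: the sketch below models an assurance case as a small claim/evidence tree, loosely in the spirit of structured argument notations such as GSN. The class names, the example claims and the evidence items are illustrative assumptions for this post, not the notation or content of the actual thesis.

```python
# A minimal sketch of an assurance case as a claim/evidence tree.
# Everything here (names, claims, evidence) is illustrative, not the
# thesis's own notation.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Evidence:
    """A piece of evidence supporting a claim (test report, proof, analysis)."""
    description: str
    valid: bool  # does this evidence currently hold?


@dataclass
class Claim:
    """A claim supported by evidence and/or sub-claims."""
    statement: str
    evidence: List[Evidence] = field(default_factory=list)
    subclaims: List["Claim"] = field(default_factory=list)

    def is_supported(self) -> bool:
        # A claim holds only if every piece of evidence is valid
        # and every sub-claim is itself supported.
        return (all(e.valid for e in self.evidence)
                and all(c.is_supported() for c in self.subclaims))


# Top-level safety claim for a hypothetical self-driving function.
top_claim = Claim(
    "The vehicle never exceeds the posted speed limit",
    subclaims=[
        Claim("Speed-limit perception is accurate",
              evidence=[Evidence("Perception test campaign report", True)]),
        Claim("The speed controller respects the perceived limit",
              evidence=[Evidence("Controller verification results", True)]),
    ],
)
print(top_claim.is_supported())  # True, as long as the offline evidence still holds
```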

Traditionally, this whole assurance business has been done offline, right before the deployment of the system. It’s all very analytical—like a math exam that requires you to show your working. But just like my attempts to do long division, the assumptions we make about how these systems will behave can go hilariously wrong, especially when we’re dealing with autonomous systems that can learn and adapt. Imagine a robot trying to ‘learn’ what a red light means—next thing you know, it’s taking you on a joyride through town, while you’re screaming, “That’s not how any of this works!”

The Evolution of Assurance Methodologies

This brings us to the juicy bit—evolving assurance techniques! The brilliant minds behind this research are proposing a class of security-informed safety assurance methods. Basically, this means that rather than giving the machine a thumbs up at birth and sending it off to ‘educate’ itself in the wild, we’ll be continuously evaluating and checking on it. Imagine your robot companion getting constant feedback, like a kid with a helicopter parent—“Is that really a good idea? Think twice!” Now that’s a reality I can get behind!

Continuous Assurance: A Game-Changer

With these new techniques, assurance comes not just during development but throughout the system’s entire operational life. It’s like a home security system that doesn’t just sound the alarm when someone breaks in, but keeps reassessing the likelihood of a break-in the whole time you’re away. It’s all about using operational data, folks, and making sure our machines remain the obedient servants we intended (and not rebellious teenagers calling for a vote of independence!).
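
To show what “through-life” assurance might look like in practice, here is a minimal, self-contained sketch of runtime re-evaluation: each batch of operational data is checked against the assumptions the safety argument relies on, so a claim can be invalidated after deployment rather than only assessed before it. The metric names and the 1% threshold are illustrative assumptions, not part of the actual research.

```python
# A minimal sketch of continuous assurance driven by operational data.
# The metrics and thresholds are assumed for illustration only.
from typing import Dict


def assurance_holds(operational_data: Dict[str, float]) -> bool:
    """Re-check the assumptions behind a 'respects speed limits' safety claim."""
    # Assumed rule 1: the perception stack still reads speed limits reliably.
    perception_ok = operational_data["misread_speed_limit_rate"] < 0.01
    # Assumed rule 2: the (possibly adapted) controller still obeys the limit.
    control_ok = operational_data["speed_limit_violations"] == 0
    return perception_ok and control_ok


# Simulated batches of field data arriving while the system operates and adapts.
for batch in [
    {"misread_speed_limit_rate": 0.002, "speed_limit_violations": 0},
    {"misread_speed_limit_rate": 0.040, "speed_limit_violations": 2},
]:
    verdict = "assurance holds" if assurance_holds(batch) else "assurance invalidated"
    print(batch, "->", verdict)
```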

Conclusion: A Safer Future Awaits

In conclusion, while autonomous systems offer us a tantalizing glimpse of the future, we must tread carefully. This ongoing PhD research, directed by Chokri MRAIDHA at CEA, focuses on ensuring those high-tech marvels operate safely and effectively, which is something we all want—unless you enjoy roller coasters of fear and uncertainty in your daily commute!

So here’s to assurance cases that don’t just sit in the dusty annals of our IT departments, but rather evolve as our systems do—keeping everyone safely in the driver’s seat, or better yet, in the passenger’s seat, sipping a latte and enjoying the ride. For more goodies, check out www.list.cea.fr and see the fascinating world of technological research unfold!

Providing assurances that autonomous systems will operate in a safe and secure manner is a prerequisite for their deployment in mission-critical and safety-critical application domains. Typically, assurances are provided in the form of assurance cases, which are auditable and reasoned arguments that a high-level claim (usually concerning safety or other critical properties) is satisfied given a set of evidence concerning the context, design, and implementation of a system. Assurance case development is traditionally an analytic activity, which is carried out off-line prior to system deployment and its validity relies on assumptions/predictions about system behavior (including its interactions with its environment). However, it has been argued that this is not a viable approach for autonomous systems that learn and adapt in operation. The proposed PhD will address the limitations of existing assurance approaches by proposing a new class of security-informed safety assurance techniques that are continually assessing and evolving the safety reasoning, concurrently with the system, to provide through-life safety assurance. That is, safety assurance will be provided not only during initial development and deployment, but also at runtime based on operational data.

Pole: Technological Research Department
Department: Software and Systems Engineering Department (LIST)
Service: LSEA (DILS)
Laboratory: Design of embedded and autonomous systems
Desired start date: 01-10-2023
Doctoral school: Information and Communication Sciences and Technologies (STIC)
Thesis director: MRAIDHA Chokri
Organisms: CEA
Laboratory: DRT/DILS//LSEA
URL: www.list.cea.fr
