By riverrock
FLYER Club Member
#1854060
The problem with AI and safety is that, as it's not a fully deterministic system, you can't "prove" it's safe. You have to create a model, then test it to oblivion. If testing brings up any weird behaviour, an update to the model requires full testing again. So you end up with an almost deterministic model (as that's what you test), losing the flexibility that AI can provide, and you're back to square one - only able to handle pre-prepared emergencies.
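A minimal sketch of that retest burden, purely illustrative (the fingerprinting scheme and names like certified and test_suite are made up, not any real certification process): the test evidence is tied to one exact frozen model, so any retrain invalidates it and the whole campaign has to run again.

```python
# Illustrative only: evidence of "tested to oblivion" is only valid for
# one exact frozen model. Any change forces a full re-run of the suite.
import hashlib
import pickle

def fingerprint(model) -> str:
    """Hash the frozen model; retraining changes the hash."""
    return hashlib.sha256(pickle.dumps(model)).hexdigest()

def certified(model, test_suite, evidence: dict) -> bool:
    fp = fingerprint(model)
    if evidence.get("model_fp") != fp:
        # The model has changed since the last campaign:
        # every single test must be re-run from scratch.
        evidence["model_fp"] = fp
        evidence["passed"] = all(test(model) for test in test_suite)
    return evidence["passed"]

# Trivial stand-in "model" and test, just to show the mechanism:
weights = [0.1, 0.2, 0.3]
tests = [lambda m: sum(m) > 0]
evidence = {}
print(certified(weights, tests, evidence))  # runs the full suite
print(certified(weights, tests, evidence))  # cached: model unchanged
```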
By Morten
FLYER Club Member
#1854066
Sounds very similar to how it is "proven" that meatware is safe. Except that, with AI, you can be relatively safe in the knowledge that it will learn from its mistakes, react the same to the same inputs as it did last time and not be distracted by the cabin crew.
By Flyin'Dutch'
FLYER Club Member
#1854069
Morten wrote:Sounds very similar to how it is "proven" that meatware is safe. Except that, with AI, you can be relatively safe in the knowledge that it will learn from its mistakes, react the same to the same inputs as it did last time and not be distracted by the cabin crew.


For that to work the AI needs to survive.

If it doesn't:
[image]
By riverrock
FLYER Club Member
#1854074
The AIs generally available need you to input huge amounts of data; you process that data with lots of computing power to generate a model, then you use that model. The model is essentially deterministic - the same inputs will give you the same outputs, until the model is regenerated. There isn't any on-the-fly learning. The more complex the model, the longer it takes to regenerate. There isn't any particular magic in it.

Neural networks are most often used to model complex situations as a shortcut to complex manual coding, providing hints or tips to users rather than doing things automatically.
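As a toy illustration of that determinism, assuming nothing beyond plain NumPy (the two-layer network and its weights are invented): once trained, the model is just a fixed function, so identical inputs give identical outputs until it is regenerated.

```python
import numpy as np

# "Training" happens once, offline: here we simply fix random weights.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 1))

def frozen_model(x):
    """Deterministic inference: no learning happens at this stage."""
    return np.tanh(x @ W1) @ W2

x = np.array([[0.1, 0.2, 0.3, 0.4]])
# Same inputs, same outputs -- until the model is regenerated.
assert np.array_equal(frozen_model(x), frozen_model(x))
```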
HedgeSparrow liked this
By Morten
FLYER Club Member
#1854077
I wasn't trying to suggest that AI would learn 'mid-crisis', just that an AI-defined system would, at any point, be at its 'peak' and have complete and instantaneous access to everything it had 'learnt' (or had been programmed into it, if you want). Which is not always how humans work.
There are certainly examples of excellent human ingenuity saving the day, but sadly far more examples of the opposite.

"Proving" that something complex is "safe", whether it is meatware, hardware or software, is, at this level of complexity, as difficult as the system you are analysing, if not more so...
By kanga
#1854080
Charles Hunt wrote:.. Was it BA near Singapore where an engine exploded and almost every alarm possible went off? The crew were able to keep the a/c flying and bring it down to a successful landing.

Could AI have done that?


Qantas:

https://en.wikipedia.org/wiki/Qantas_Flight_32

Serendipitously, there were 5 experienced pilots available on the flight deck. This enabled both sensible on-board discussion of options and useful task division while the handling pilot continued flying.. :thumright:

Since both the normal flight indicators and the plethora of warning messages were very difficult to interpret (at all, let alone while also manually handling the aircraft), more human eyes and cerebella on the spot (ie, not datalinked, with inevitable latency and possible datalink overload) were clearly helpful. It would (in my likely-to-be-obsolete technical opinion :oops: .. happy to be corrected, as ever) be challenging for on-board or 'remote' AI to be designed to anticipate and manage similar scenarios, including the consequences of what started as only .."fatigue cracking" in a stub pipe within the engine.. :?
Charles Hunt liked this
By johnm
FLYER Club Member
#1854092
There is a growing number of self-learning AI systems which operate essentially by trial and error.

These are the systems now being deployed in autonomous vehicles and also in gaming, banking, healthcare and some aspects of robotics. The model they use is modified and extended in real time as a result of "experience".

It's fairly early days but they can already outperform humans in some areas.......
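For what that looks like in the simplest case, here is a hedged sketch of trial-and-error learning: tabular Q-learning, where the value table is updated online from each interaction. The toy five-state corridor environment and all the constants are invented for illustration and bear no relation to any deployed system.

```python
import random

N_STATES, GOAL = 5, 4    # states 0..4; reward only on reaching state 4
ACTIONS = (-1, +1)       # move left or right along the corridor
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

def step(state, action):
    """One interaction with the environment: next state and reward."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == GOAL else 0.0)

for episode in range(200):
    s = 0
    while s != GOAL:
        # Explore occasionally; otherwise exploit what has been learnt so far.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r = step(s, a)
        # The "model" is modified in real time as a result of experience:
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# After enough episodes, the greedy policy walks straight to the goal.
```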
TopCat liked this
By Josh
#1854103
The debate as to whether flying is ultimately automatable by AI is a very different one from whether single-pilot operations in a certified two-crew aircraft are possible or wise.

The point I was trying to make is that there are a large number of non-technical scenarios that airline flying throws up which require two heads on the flight deck to manage. The ultimate decision/resolution may not require a diversion, but if successful mission completion is based on long periods of uninterrupted in-flight “rest”, I can see it struggling. I would say between a quarter and a third of all the long-haul flights I was involved in provoked such situations.
As I CFIT liked this
By johnm
FLYER Club Member
#1854120
@Josh I can see the day when such discussions are held with one person on the flight deck and one on the ground, very far away.

I am less enamoured of remote piloting of airliners, but only because of the access security implications.
Pete L liked this