Unfortunately, this dramatically illustrates the problem with AI. AI can determine the most probable best course of action, given certain parameters; it is precisely those parameters that are at issue. Whence those parameters? From fallible humans, of course. AI should never be given carte blanche, because that would amount to granting dictatorial power to the fallible humans who set it in motion.
The problem with AI is not AI itself, but the inane mentality that imagines computers can be infallible. Anyone advocating that AI be given ultimate decision-making power is either grossly ignorant of the limitations of computing* or intent on its nefarious use.
Yes, but the problem is not with AI itself; it is with the scope of the functions that fallible people give it. The recent 737 Max crashes occurred in part because the control system was allowed to override pilot input, even though that system depended on a single, failure-prone sensor input.
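To make the single-point-of-failure issue concrete, here is a minimal sketch (hypothetical values, not the actual flight-control logic): a system that trusts one sensor acts on any bad reading, while a system that takes the median of three redundant sensors simply outvotes a single faulty one.

```python
# Illustrative sketch only; the sensor values are hypothetical.

def single_sensor(readings):
    # Trusts only the first sensor: one bad reading corrupts the output.
    return readings[0]

def median_vote(readings):
    # Median of three redundant sensors: a single outlier is outvoted.
    return sorted(readings)[len(readings) // 2]

# Two healthy angle-of-attack readings (~5 degrees) and one failed sensor.
readings = [74.5, 5.1, 4.9]  # the first sensor has failed high

print(single_sensor(readings))  # acts on the bad value: 74.5
print(median_vote(readings))    # outvotes it: 5.1
```

The point is not that voting is a cure-all; it is that people, not the algorithm, chose to let a single input override the pilots.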
Like the people who reject vaccines, those who reject AI's promise of improving life are throwing the baby out with the bath water.