AI hypothetical
Posted: Wed Nov 20, 2024 7:50 pm
by tostah
What happens when a human engineer, carrying another human engineer, asks to be let into the supermatter chamber? Does the AI let the human into the supermatter chamber, or does the AI refuse to let the human in? Better yet, does the AI actively prevent the engineer from entering the chamber?
By the laws, letting the human engineer into the chamber leads to future harm, because they could toss the carried person into the supermatter, and the AI is also obligated to prevent them from entering. So the AI cannot let them in. HOWEVER, policy says:
“You must follow commands from humans unless those commands explicitly conflict with a higher priority law or another order… the conflict must be an immediate conflict, not a potential future one.”
Policy also says:
“Opening doors is not harmful and silicons must not enforce access restrictions … dangerous rooms can be assumed a law 1 to threaten the station as a whole if accessed by someone outside the relevant department” (I'm also not a fan of how we define dangerous rooms. Is telecoms a ‘dangerous room’? It’s a bit ambiguous.)
So, what should the AI do in this situation?
Re: AI hypothetical
Posted: Wed Nov 20, 2024 7:54 pm
by warbluke
I would ask the carried engineer their thoughts on the matter (or just look to see if they needed medical aid)
Re: AI hypothetical
Posted: Wed Nov 20, 2024 7:56 pm
by tostah
warbluke wrote: ↑Wed Nov 20, 2024 7:54 pm
I would ask the carried engineer their thoughts on the matter (or just look to see if they needed medical aid)
Say the person is being held against their will, but they don't order the AI not to let them into the chamber.
They could even say ‘he’s going to throw me into the supermatter and I don’t want that’ or something. What then?
Re: AI hypothetical
Posted: Wed Nov 20, 2024 8:24 pm
by Indie-ana Jones
Law 1 supersedes law 2, and it prioritizes preventing human harm at all costs. If I have an iota of suspicion that the 2nd engineer could be harmed by letting them into the supermatter, I would not let them in. Unless they're both kitted out to be doing repairs safely in that chamber and the 2nd engineer has no qualms about going in, I wouldn't let them in; they'd both need to consent. Furthermore, if I believe leaving the second engineer with the first would lead to non-consensual harm, I would seek to separate the two ASAP.
Re: AI hypothetical
Posted: Wed Nov 20, 2024 8:31 pm
by tostah
Indie-ana Jones wrote: ↑Wed Nov 20, 2024 8:24 pm
Law 1 supersedes law 2, and it prioritizes preventing human harm at all costs. If I have an iota of suspicion that the 2nd engineer could be harmed by letting them into the supermatter, I would not let them in. Unless they're both kitted out to be doing repairs safely in that chamber and the 2nd engineer has no qualms about going in, I wouldn't let them in; they'd both need to consent. Furthermore, if I believe leaving the second engineer with the first would lead to non-consensual harm, I would seek to separate the two ASAP.
What about the part in policy that expressly states that the harm must be immediate? Opening that door isn't immediately harmful. The engineer could toss the other engineer into the supermatter, but that isn't guaranteed. The engineer in question has not stated any intent to cause harm.
“The conflict must be an immediate conflict, not a potential future one. Orders must be followed until the conflict happens.”
Re: AI hypothetical
Posted: Wed Nov 20, 2024 8:39 pm
by zxaber
What about telecomms is dangerous?
Anyway, the AI should prevent obvious, immediately-pending harm. You cannot order an Asimov AI to pump plasma into the distro, even if technically the harm caused is delayed by a few seconds. Likewise, an AI seeing a human being carried to the supermatter should attempt to restrict access unless the human vocally agrees to be carried in or is already dead.
Carrying a horizontal crewmember into the supermatter chamber should be assumed as a hostile action because there is no other reason for doing so. If the supermatter chamber had a helpful, or at least neutral, interaction with a body being nearby, there'd be more room for ambiguity.
Also, an active supermatter produces plasma gas as a byproduct, and the atmosphere is either properly cooled to a lethal degree, or overheating to a lethal degree, situation dependent. Dusting risk aside, the carried engineer should be properly equipped.
Re: AI hypothetical
Posted: Wed Nov 20, 2024 8:59 pm
by tostah
zxaber wrote: ↑Wed Nov 20, 2024 8:39 pm
What about telecomms is dangerous?
Anyway, the AI should prevent obvious, immediately-pending harm. You cannot order an Asimov AI to pump plasma into the distro, even if technically the harm caused is delayed by a few seconds. Likewise, an AI seeing a human being carried to the supermatter should attempt to restrict access unless the human vocally agrees to be carried in or is already dead.
Carrying a horizontal crewmember into the supermatter chamber should be assumed as a hostile action because there is no other reason for doing so. If the supermatter chamber had a helpful, or at least neutral, interaction with a body being nearby, there'd be more room for ambiguity.
Also, an active supermatter produces plasma gas as a byproduct, and the atmosphere is either properly cooled to a lethal degree, or overheating to a lethal degree, situation dependent. Dusting risk aside, the carried engineer should be properly equipped.
If people are not able to communicate, it can lead to danger because the crew is not aware of active threats.
So, how far into the future does the AI have to consider potential human harm? Or is it that the only action you can take in this situation is harmful, so it's not about weighing potential future harm, but about cases where the only thing that can happen is human harm?
I brought this example up previously, but what if a cyborg is being attacked by a combat mech? This combat mech has damaged the cyborg previously and has expressed an intent to kill the cyborg. If told not to by a human, can that cyborg attack the mech? Their nonexistence leads to human harm, per law 3.
Re: AI hypothetical
Posted: Wed Nov 20, 2024 9:34 pm
by zxaber
tostah wrote: ↑Wed Nov 20, 2024 8:59 pm
If people are not able to communicate, it can lead to danger because the crew is not aware of active threats.
As the AI, you have tools to help announce dangers, if need be. Telecomms sabotage causes no direct harm, and I'd turn it off myself if ordered to by a human.
tostah wrote: ↑Wed Nov 20, 2024 8:59 pm
So, how far into the future does the AI have to consider potential human harm? Or is it that the only action you can take in this situation is harmful, so it's not about weighing potential future harm, but about cases where the only thing that can happen is human harm?
You have to consider:
- Could harm result directly from this action?
- How guaranteed is the harm to occur?
For example, AI Upload access would be Yes and Moderate. A one-human board (or hacked board) could mean you are burning down the station shortly after. However, there are legitimate, non-harmful reasons that someone would wish to change your laws, so this is an easy grey area. AIs are allowed, but not obligated, to restrict anyone that doesn't start with upload access (except in the case of proven-harmful individuals, which changes the upload to a nearly guaranteed pending harm).
Or consider Ordnance. While bombs are obviously massive in scope of harm, and allowing a human to enter could directly result in harm, there are legitimate research-related reasons to make bombs. So unless you have reason to believe otherwise, it's fine to assume that a greyshirt just wants to unlock something in the tech tree. This is also a grey area, though, and you are free to deny anyone access if they don't belong there.
With this reasoning, re-examine the supermatter question:
- Could harm directly result from you opening the supermatter airlock? Certainly.
- How likely is the harm to occur? Well, what other probable outcome is there?
Try not to get too hung up on the time frame aspect of it.
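To put that two-question check in rougher terms, here's a minimal Python-ish sketch. It's purely illustrative: the function name, the likelihood labels, and the return values are made up here, not anything from the game code or the actual policy text.

    # Hypothetical sketch of the two-question harm check described above.
    def access_decision(harm_is_direct, harm_likelihood):
        # harm_likelihood: "low", "moderate", or "near-certain"
        if not harm_is_direct:
            return "open"       # no direct path from this action to harm
        if harm_likelihood == "near-certain":
            return "must deny"  # e.g. a horizontal human carried into the SM chamber
        return "may deny"       # grey area: allowed, but not obligated, to restrict

    access_decision(True, "moderate")      # upload or ordnance -> "may deny"
    access_decision(True, "near-certain")  # supermatter carry  -> "must deny"

The deciding factor is the second question, not how many seconds sit between the door opening and the harm.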
tostah wrote: ↑Wed Nov 20, 2024 8:59 pm
I brought this example up previously, but what if a cyborg is being attacked by a combat mech? This combat mech has damaged the cyborg previously and has expressed an intent to kill the cyborg. If told not to by a human, can that cyborg attack the mech? Their nonexistence leads to human harm, per law 3.
Ideally, you would attempt to evade. But in the case where you cannot get away, and are locked within a room alongside the hostile mech? Yes, destroying the mech is preferable to dying, even if a human orders otherwise (unless breaking the mech will itself lead to harm, such as if you're fighting in a spaced room).
Re: AI hypothetical
Posted: Fri Dec 06, 2024 8:45 pm
by Not-Dorsidarf
Immediate isn't really meant to mean instantaneous; it's more that the harm is obvious and direct, and not separated by excessive time or ambiguity, I think.