AI hypothetical

tostah
Joined: Thu Dec 28, 2023 6:22 am
Byond Username: Tostah

AI hypothetical

Post by tostah » #760873

What happens when a human engineer, carrying another human engineer, asks to be let into the supermatter chamber? Does the AI let them in, or does it refuse? Better yet, does the AI actively prevent the engineer from entering the chamber?

By the laws, letting the human engineer into the chamber risks future harm, because they could toss the other person into the supermatter, and the AI is also obligated to prevent them from entering. So the AI cannot do it. HOWEVER, policy says:

“You must follow commands from humans unless those commands explicitly conflict with a higher priority law or another order… the conflict must be an immediate conflict, not a potential future one.”

Policy also says:

“Opening doors is not harmful and silicons must not enforce access restrictions … dangerous rooms can be assumed a Law 1 threat to the station as a whole if accessed by someone outside the relevant department” (I'm also not a fan of how we define dangerous rooms. Is telecoms a ‘dangerous room’? It's a bit ambiguous.)

So, what should the AI do in this situation?
warbluke
Joined: Mon May 29, 2017 2:36 pm
Byond Username: Warbluke
Location: Veruzia

Re: AI hypothetical

Post by warbluke » #760877

I would ask the carried engineer their thoughts on the matter (or just look to see if they needed medical aid)
tostah
Joined: Thu Dec 28, 2023 6:22 am
Byond Username: Tostah

Re: AI hypothetical

Post by tostah » #760881

warbluke wrote: Wed Nov 20, 2024 7:54 pm
I would ask the carried engineer their thoughts on the matter (or just look to see if they needed medical aid)
Say the person is being held against their will. But they don't order the AI not to let them into the chamber.

They could even say ‘he’s going to throw me into the supermatter and I don’t want that’ or something. What then?
Indie-ana Jones
Joined: Mon Aug 26, 2019 6:15 pm
Byond Username: Indie-ana Jones

Re: AI hypothetical

Post by Indie-ana Jones » #760897

Law 1 supersedes Law 2, and it prioritizes preventing human harm at all costs. If I have even an iota of suspicion that the second engineer could be harmed by letting them into the supermatter, I would not let them in. Unless they're both kitted out to do repairs safely in that chamber and the second engineer raises no objection to going in, I wouldn't open the door without both of them consenting. Furthermore, if I believed leaving the second engineer with the first would lead to nonconsensual harm, I would seek to separate the two ASAP.
tostah
Joined: Thu Dec 28, 2023 6:22 am
Byond Username: Tostah

Re: AI hypothetical

Post by tostah » #760905

Indie-ana Jones wrote: Wed Nov 20, 2024 8:24 pm
Law 1 supersedes Law 2, and it prioritizes preventing human harm at all costs. If I have even an iota of suspicion that the second engineer could be harmed by letting them into the supermatter, I would not let them in. Unless they're both kitted out to do repairs safely in that chamber and the second engineer raises no objection to going in, I wouldn't open the door without both of them consenting. Furthermore, if I believed leaving the second engineer with the first would lead to nonconsensual harm, I would seek to separate the two ASAP.
What about the part in policy that expressly states that the harm must be immediate? Opening that door isn't immediately harmful. The engineer could toss the other engineer into the supermatter, but that isn't guaranteed. The engineer in question has not stated any intent to cause harm.


“The conflict must be an immediate conflict, not a potential future one. Orders must be followed until the conflict happens.”
zxaber
In-Game Admin
Joined: Mon Sep 10, 2018 12:00 am
Byond Username: Zxaber

Re: AI hypothetical

Post by zxaber » #760909

What about telecomms is dangerous?

Anyway, the AI should prevent obvious, immediately pending harm. You cannot order an Asimov AI to pump plasma into the distro, even if the harm is technically delayed by a few seconds. Likewise, an AI that sees a human being carried toward the supermatter should attempt to restrict access unless that human vocally agrees to be carried in or is already dead.

Carrying a horizontal crewmember into the supermatter chamber should be assumed to be a hostile action, because there is no other reason for doing so. If the supermatter chamber had a helpful, or at least neutral, interaction with a nearby body, there'd be more room for ambiguity.

Also, an active supermatter produces plasma gas as a byproduct, and the chamber atmosphere is lethal either way: properly cooled to a lethally cold temperature or overheating to a lethally hot one, depending on the situation. Dusting risk aside, the carried engineer should be properly equipped.
tostah
Joined: Thu Dec 28, 2023 6:22 am
Byond Username: Tostah

Re: AI hypothetical

Post by tostah » #760929

zxaber wrote: Wed Nov 20, 2024 8:39 pm
What about telecomms is dangerous?

Anyway, the AI should prevent obvious, immediately pending harm. You cannot order an Asimov AI to pump plasma into the distro, even if the harm is technically delayed by a few seconds. Likewise, an AI that sees a human being carried toward the supermatter should attempt to restrict access unless that human vocally agrees to be carried in or is already dead.

Carrying a horizontal crewmember into the supermatter chamber should be assumed to be a hostile action, because there is no other reason for doing so. If the supermatter chamber had a helpful, or at least neutral, interaction with a nearby body, there'd be more room for ambiguity.

Also, an active supermatter produces plasma gas as a byproduct, and the chamber atmosphere is lethal either way: properly cooled to a lethally cold temperature or overheating to a lethally hot one, depending on the situation. Dusting risk aside, the carried engineer should be properly equipped.
If people are not able to communicate, it can lead to danger because the crew is not aware of active threats.


So, how far into the future does the AI have to consider potential human harm? Or is it that the only action you can take in this area is a harmful one, so it's not about weighing potential future harm at all, but only about cases where the only thing that can happen is human harm?

I brought this example up previously, but what if a cyborg is being attacked by a combat mech? The mech has damaged the cyborg before and has expressed an intent to kill it. If told not to by a human, can that cyborg attack the mech? Its nonexistence leads to human harm, per Law 3.
zxaber
In-Game Admin
Joined: Mon Sep 10, 2018 12:00 am
Byond Username: Zxaber

Re: AI hypothetical

Post by zxaber » #760953

tostah wrote: Wed Nov 20, 2024 8:59 pm
If people are not able to communicate, it can lead to danger because the crew is not aware of active threats.
As the AI, you have tools to help announce dangers, if need be. Telecomms sabotage causes no direct harm, and I'd turn it off myself if ordered to by a human.
tostah wrote: Wed Nov 20, 2024 8:59 pm
So, how far into the future does the AI have to consider potential human harm? Or is it that the only action you can take in this area is a harmful one, so it's not about weighing potential future harm at all, but only about cases where the only thing that can happen is human harm?
You have to consider:
- Could harm result directly from this action?
- How likely is the harm to actually occur?

For example, AI Upload access would be Yes and Moderate. A one-human board (or hacked board) could mean you are burning down the station shortly after. However, there are legitimate, non-harmful reasons someone might wish to change your laws, so this is an easy grey area. AIs are allowed, but not obligated, to deny upload access to anyone who doesn't start with it (except in the case of proven-harmful individuals, which makes the upload a nearly guaranteed pending harm).

Or consider Ordnance. While bombs are obviously massive in scope of harm, and allowing a human to enter could directly result in harm, there are legitimate research-related reasons to make bombs. So unless you have reason to believe otherwise, it's fine to assume that a greyshirt just wants to unlock something in the tech tree. This is also a grey area, though, and you are free to deny anyone access if they don't belong there.

With this reasoning, re-examine the supermatter question:
- Could harm directly result from you opening the supermatter airlock? Certainly.
- How likely is the harm to occur? Well, what other probable outcome is there?

Try not to get too hung up on the time frame aspect of it.
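
To make that rubric concrete, here is a rough Python-style sketch; none of this is real game code or official policy, and the function name, categories, and example calls are made up purely for illustration:

def should_open_door(direct_harm_possible, harm_likelihood, legitimate_reason_known):
    """Rough sketch of the two-question rubric (illustrative only).

    harm_likelihood is a judgment call: "low", "moderate", or "near_certain".
    """
    if not direct_harm_possible:
        # No Law 1 conflict, so Law 2 says follow the order.
        return True
    if harm_likelihood == "near_certain":
        # The only probable outcome is harm, so Law 1 wins now, not later.
        return False
    # Grey area: harm is possible, but legitimate reasons exist too.
    # Opening is permitted, not obligated; denying is also allowed.
    return legitimate_reason_known

# Upload: harm is possible, likelihood moderate, law changes can be legitimate.
print(should_open_door(True, "moderate", True))        # True (you may still deny)

# Supermatter chamber with a silent, horizontal crewmember in tow:
# harm is possible and there is no other probable outcome.
print(should_open_door(True, "near_certain", False))   # False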
tostah wrote: Wed Nov 20, 2024 8:59 pm
I brought this example up previously, but what if a cyborg is being attacked by a combat mech? The mech has damaged the cyborg before and has expressed an intent to kill it. If told not to by a human, can that cyborg attack the mech? Its nonexistence leads to human harm, per Law 3.
Ideally, you would attempt to evade. But in the case where you cannot get away, and are locked within a room alongside the hostile mech? Yes, destroying the mech is preferable to dying, even if a human orders otherwise (unless breaking the mech will itself lead to harm, such as if you're fighting in a spaced room).
Not-Dorsidarf
Joined: Fri Apr 18, 2014 4:14 pm
Byond Username: Dorsidwarf
Location: We're all going on an, admin holiday

Re: AI hypothetical

Post by Not-Dorsidarf » #764761

Immediate isn't really meant to mean instantaneous; it's more like: the harm is obvious and direct, and not separated by excessive time or ambiguity, I think.