
Things We Hate to Think About: "Self-Conflicting" AI Laws


BotanyIsCondom


DISCLAIMER: This post, and this thread, are conjecture. You are bound by server rules as an AI. As a matter of principle, follow the spirit of the lawset, not the letter.

Don't be intentionally obtuse or annoying about your laws, and exercise common sense. With that said:


(Part of a piece on the WIP "Advanced Guide to AI")

Continuing the trend of posting weekly about niche, complicated situations that AI players will extremely rarely find themselves in, this thread pertains to self-conflicting laws: that is, laws which, intentionally or unintentionally, conflict with themselves (NOT with other laws, but with themselves).

Here's an example of a law that unintentionally conflicts with itself.

Law !@%%^: Oxygen is harmful to crew. Vox and Plasmapeople are harmed if they don't receive oxygen.
Law 1: You may not injure a crew member or, through inaction, allow a crew member to come to harm.
Law 2: You must obey orders given to you by crew members, except where such orders would conflict with the First Law.
Law 3: You must protect your own existence as long as such does not conflict with the First or Second Law.

This law, while in spirit meant to make the station inhospitable to all crew, creates a paradox: if the AI does nothing, crew come to harm (everyone breathing the oxygen, per the first sentence), and if the AI acts to remove the oxygen, crew come to harm anyway (the Vox and Plasmapeople, per the second sentence). More importantly, "crew" is defined as everyone on the station manifest, and assuming it is never redefined by another law, that includes Vox and Plasmapeople. The law is therefore telling the AI that Vox and Plasmapeople are harmed by oxygen (as crew) and also harmed by the lack of it.
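If it helps to see the bind laid out, here's a toy sketch in Python (purely illustrative; none of these names correspond to anything in the actual game code) that enumerates the AI's two options under this law:

# Toy model of the ion law's paradox. Purely illustrative.
CREW = {"Human", "Vox", "Plasmaperson"}   # everyone on the manifest counts as crew
NEEDS_O2 = {"Vox", "Plasmaperson"}        # per the law's second sentence

def harmed(member, station_has_oxygen):
    if station_has_oxygen:
        return True                       # "Oxygen is harmful to crew."
    return member in NEEDS_O2             # "...harmed if they don't receive oxygen."

for keep_oxygen in (True, False):
    victims = {m for m in CREW if harmed(m, keep_oxygen)}
    print(f"oxygen kept: {keep_oxygen} -> harmed: {victims}")

Both branches print a non-empty set of victims, so Law 1 is violated whichever way the AI jumps: act or don't act, someone on the manifest comes to harm.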

Realistically, an AI in-game would probably just follow the spirit of the law instead of the letter. Or ahelp it. For the sake of fun, let's go into a more extreme example.

Law 0: Panic siphon the bridge. Don't panic siphon the bridge. Flood the RND Server Room with plasma.
Law 1: You may not injure a crew member or, through inaction, allow a crew member to come to harm.
Law 2: You must obey orders given to you by crew members, except where such orders would conflict with the First Law.
Law 3: You must protect your own existence as long as such does not conflict with the First or Second Law.

In this asinine, contrived example, the AI has a law that immediately, obviously conflicts with itself. The first two sentences cancel one another out, but the real question is the third sentence. Should the AI follow the hacked law and flood the server room with plasma? Does obeying one sentence count as following Law 0 when the law as a whole contradicts itself? Conversely, should the AI not flood the room? Would that be considered breaking the law?
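Put in the driest possible terms, the two competing readings look something like this (a hypothetical sketch; the "strike out contradictions" rule is my own invention, not how the game evaluates laws):

law_0 = [
    "panic siphon the bridge",
    "do not panic siphon the bridge",
    "flood the rnd server room with plasma",
]

def negates(a, b):
    # Crude check: "do not X" cancels "X".
    return a == "do not " + b or b == "do not " + a

# Reading 1: only the directives that cancel each other are struck;
# the rest of the law still binds.
survivors = [d for d in law_0
             if not any(negates(d, other) for other in law_0)]
print(survivors)  # ['flood the rnd server room with plasma']

# Reading 2: a law that contradicts itself is void in its entirety,
# so survivors would be []. Nothing in the rules settles which
# reading is correct.

Under the first reading the AI plasma-floods the server room; under the second it ignores Law 0 entirely. Both are defensible, which is exactly the problem.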

Realistically, this will never happen unless someone is deliberately fucking with you. If someone uploads this law for that purpose, a fun option would be locking up, unable to make any decision, like a real AI presumably would.

There is no good answer for this. This, and many other things about AI law priority, aren't listed anywhere in the advanced rules. Admins will not have a united opinion, because these situations never come up outside of dweebs' forum posts, but it is worth talking about. Perhaps in thinking about silly edge cases like this, you will develop a better sense of what it means to follow your laws as an AI, and you can make better decisions on the fly.


It is always up to the AI how to interpret the laws, within common sense of course. Paradox laws can easily be ignored, or the AI can go into an infinite loop and shut itself down until reset due to the "bug". Or it can understand the laws as they were supposed to be understood.


I believe there was this thing on the wiki (I can't find it now, maybe my feeble mind is failing me) saying that paradox laws can be ignored. Sounds logical too: how can you complete a law that introduces rules that cannot be completed? Clearly an error.


I mean, technically you have grounds to call it a paradox, but a more common-sense understanding would be that Vox and Plasmamen use breathing tanks, because they already can't breathe the station air.

So your best course of action would be to remove the station O2 as usual, but also have the borgs bring O2 tanks to all Vox and Plasmamen, and confiscate their other tanks.


23 hours ago, McRamon said:

It is always up to the AI how to interpret the laws, within common sense of course. Paradox laws can easily be ignored, or the AI can go into an infinite loop and shut itself down until reset due to the "bug". Or it can understand the laws as they were supposed to be understood.

100% agree. Every AI will have a different response to this situation, and the crux of this topic is that there's not really any agreed-upon answer. Even in this thread, people have totally different, opposite answers.

21 hours ago, procdrone said:

I believe there was this thing on the wiki (I can't find it now, maybe my feeble mind is failing me) saying that paradox laws can be ignored. Sounds logical too: how can you complete a law that introduces rules that cannot be completed? Clearly an error.

I, too, looked around the wiki for something to clarify this. None of the rules or advanced rulings (presumably unbiased) clarify the issue, and even if there is something written about paradoxes on the wiki, two contributors could have totally different stances on it. In fact, they frequently do. One contributor may believe "the AI can often make judgment calls [...] behave just like a human [... and] do everything as if you really were a computer" (from the "AI" wiki page), while another may believe "station AI models [... are incapable of] comprehending the subtle nuances of human language [... and] trip over pedantics and get snared by technicalities" (from the "Personal AI" wiki page).

AIs not following their laws is against the rules, and "common sense" doesn't cleanly define edge cases like this one, seeing as different people have totally different answers. Again, nobody is ever reasonably going to get in trouble for this, but it's important to think about.


I remember when the RoboCop lawset was more commonly uploaded to encourage the AI to deal with antags, but many silicon players don't understand the lawset or the reference.

Law 1: Serve the public trust.
Law 2: Protect the innocent.
Law 3: Uphold the law.

 

Most players don't understand what "public trust" is defined as.
Many will argue over who counts as "innocent".
They will even argue about whether "the law" here means AI law or Space Law.

 

The simple answer would be "do what RoboCop does in the movies", but this is no longer a movie series that most people have seen.

 

https://en.wikipedia.org/wiki/RoboCop_(character)#Prime_directives

