
Synthetic Law Interpretation


Zciwomad


I've spotted some behaviour in borg/AI players that makes gameplay with them less fun. The thing is: they overthink their laws and go waaaaaaay too far into the future.

I was a clown and I ordered a borg to drop the bolts on two of the three doors in front of the bridge. He was on Crewsimov and replied: "No, I can't, this would be against the first law". Then I asked: "How?". "This way leads to the brig, security officers need to respond fast and this could cause harm".

Isn't that overthinking it? That way every "good" or "neutral" lawset means the same thing. The same borg on "Corporate" would probably answer: "It's against laws 4 and 2". The same borg on "Robocop" would probably answer: "This is against law 1". The same borg on "Paladin" would probably answer: "It's against laws 1, 2 and 3", etc.

Another situation: I was a borg and I was arguing with the AI about law interpretation. He said that in the "Corporate" lawset, the first three laws are "statements" (I don't remember if it was exactly that word) and the fourth is an order, so he chose to ignore the first three and only obey the fourth one. I gave him a hypothetical situation: "The Captain and his fox are in danger. I can only save one. According to our lawset, '2. The station and its equipment is expensive to replace.' is above '3. The crew is expensive to replace.' The fox is station property, so he is more important than the Captain to us, so I should rescue the fox". The AI said that this was wrong, the Captain is more expensive, and decided to unsync me and change my lawset to "Crewsimov". I even ahelped that, but the admin said: "If it is the AI's will, there's nothing wrong". Then I was locked down for the 10-20 minutes until the shift end, because the roboticists were new and didn't know how to undo it.

What I propose:

Expand the wiki page, or write a "guide to synthetic laws" that covers this. (I think the wiki page is not only outdated, but was also incomplete from the start. I played two months as a sec borg and didn't know that I don't need to do my job until the AI, the crew on "Crewsimov", or my laws tell me to. I also didn't know that a security borg is not a security officer; but then, how can he arrest, or set a warrant to arrest, or even throw crewmembers in jail?)

Add another line to "Rule 9" about this.

Lawsets aren't meant to be perfect, but everyone wants to treat them that way. I even remember that long ago, a crewmember (probably an antag) ordered me to let him into some places and to stay outside. He used the lawset to complete his objectives and steal something, and this was "interesting" and "fun". I've always wondered why so few players do the same. Now I know.

What do you think about it?

 

 


3 hours ago, Zciwomad said:

 

Another situation: I was a borg and I was arguing with the AI about law interpretation. He said that in the "Corporate" lawset, the first three laws are "statements" (I don't remember if it was exactly that word) and the fourth is an order, so he chose to ignore the first three and only obey the fourth one. I gave him a hypothetical situation: "The Captain and his fox are in danger. I can only save one. According to our lawset, '2. The station and its equipment is expensive to replace.' is above '3. The crew is expensive to replace.' The fox is station property, so he is more important than the Captain to us, so I should rescue the fox". The AI said that this was wrong, the Captain is more expensive, and decided to unsync me and change my lawset to "Crewsimov". I even ahelped that, but the admin said: "If it is the AI's will, there's nothing wrong". Then I was locked down for the 10-20 minutes until the shift end, because the roboticists were new and didn't know how to undo it.

 

 

Firstly, you don't argue with the AI. You follow its interpretation of the law.

But secondly, Corporate doesn't tell the AI in what order to prioritize things. It happens to tell the AI that the station and its equipment are expensive to replace before it tells them the crew is also expensive. But none of Laws 1, 2, or 3 state how expensive those things are. Nor do any of the three laws tell the AI to do anything. Without Law 4, Laws 1, 2, and 3 are meaningless. So what if they're expensive? Why should the AI care? It's only Law 4 that tells the AI to minimize expenses. Minimize expenses, FULL STOP. Nothing in Law 4 tells the AI to prioritize a certain type of expense over another. So no, the AI absolutely should not save a fox over the Captain, because doing so means expenses will be higher than if they saved the Captain, which is directly and objectively counter to their laws.

Your interpretation would be correct if the laws were worded like this:

1. Prevent expenses from replacing yourself.

2. Prevent expenses from replacing the station and equipment.

3. Prevent expenses from replacing the crew.

But they aren't, so as written a Corporate AI should minimize overall expenses.

 

Keep in mind these lawsets are chosen by NanoTrasen for use on their stations. Why would a company ever program their AI to save a window over a Captain?

Edited by EvadableMoxie

I agree with the AI and EvadableMoxie here. The first three laws of Corporate are just statements; only the fourth is really binding, and the AI should be able to estimate the value of losing the Captain or a fox correctly and act accordingly.

As for how far ahead of time you should think in the Crewsimov example, I'm not sure.


1 hour ago, EvadableMoxie said:

 

Keep in mind these lawsets are chosen by NanoTrasen for use on their stations. Why would a company ever program their AI to save a window over a Captain?

Cough, cough... Antimov... Cough, cough.

Those laws aren't perfect, because that creates an opportunity for something interesting. Even Asimov, who created these laws for his novels, knew about it. A perfect law prevents anything interesting from happening.

I know that a borg must follow AI orders, but that doesn't prevent it from talking about hypothetical situations. If that situation occurred and the AI ordered me to save the Captain, I would obey. Maybe the AI should explain its view on the laws at the beginning of the round, because some situations need a quick reaction, with no time to even ask.

From "Server Rules", Rule 9: "The order of the Laws is what determines the priority of the Laws. If two Laws contradict one another, you are to follow the one that is highest in the list, as it would overrule any contradictory Laws that come under it".

"But secondly, Corporate doesn't tell the AI in what order to prioritize things." How about that?

Let's have a look at the situation mentioned before:

I choose to save the Captain: Law 1 - I will not be harmed or destroyed. Check. Law 2 - I'm saving crew, not the station or its equipment. X. Law 3 - I'm saving crew. Check. Law 4 - Expenses minimized. Check.

I choose to save the fox: Law 1 - I will not be harmed or destroyed. Check. Law 2 - I'm saving a fox that is station equipment. Check. Law 3 - I'm not saving crew. X. Law 4 - Expenses minimized. Check.

Saving the Captain: Laws 1, 3 and 4 < Saving the fox: Laws 1, 2 and 4.

Maybe I am wrong, but I think this is how it should be. If not, why do laws 1, 2 and 3 in "Corporate" even exist? Just give one: "1. Minimize Expenses" and everybody is happy with that particular lawset. The three laws before "Minimize Expenses" exist to determine what is more expensive, and according to that, borgs and the AI itself are more valuable than anything on the station or outside it. The station and its equipment are more expensive than the crew, and the crew is more expensive than everything that isn't a borg, the AI, or the station and its equipment.
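The two competing readings in this thread can be sketched as tiny decision procedures. This is only a toy model: the "satisfies" sets and replacement costs below are invented for illustration, not anything from the game.

```python
# Toy model of the two readings of Corporate debated in this thread.
# The satisfied-law sets and costs are made-up illustration values.

def priority_reading(options):
    """Zciwomad's reading: pick the option whose satisfied laws rank
    highest (a lower law number means a higher priority), comparing
    the sorted law lists lexicographically."""
    return min(options, key=lambda o: sorted(o["satisfies"]))

def expense_reading(options):
    """EvadableMoxie's reading: laws 1-3 are only definitions; the
    single binding directive is 'minimize expenses', so pick the
    option whose resulting replacement cost is lowest."""
    return min(options, key=lambda o: o["expense"])

options = [
    # Saving the Captain loses the fox (cheap replacement).
    {"name": "save Captain", "satisfies": [1, 3, 4], "expense": 500},
    # Saving the fox loses the Captain (very expensive replacement).
    {"name": "save fox",     "satisfies": [1, 2, 4], "expense": 500_000},
]

print(priority_reading(options)["name"])  # -> save fox
print(expense_reading(options)["name"])   # -> save Captain
```

The point of the sketch is that the two readings are genuinely different algorithms that disagree on the same inputs, which is why the thread can't converge.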

Edited by Zciwomad
Misspelled few words.

1 hour ago, Zciwomad said:

Cough, cough... Antimov... Cough, cough.

 

Antimov isn't a default NT lawset.

 

1 hour ago, Zciwomad said:

From "Server Rules", Rule 9: "The order of the Laws is what determines the priority of the Laws. If two Laws contradict one another, you are to follow the one that is highest in the list, as it would overrule any contradictory Laws that come under it".

I highlighted the important part. If two laws contradict each other. Saying the crew is expensive and the equipment is expensive is not a contradiction. Both are expensive. That isn't in doubt. The question is which is more expensive, and the laws don't say.

1 hour ago, Zciwomad said:

Saving the Captain: Laws 1, 3 and 4 < Saving the fox: Laws 1, 2 and 4.

No, this is wrong, because Law 1, 2, and 3 don't tell you to do anything. 

Law 1 does not tell you to save yourself, Law 2 doesn't tell you to save equipment, and Law 3 doesn't tell you to save crew. They simply inform you those things are expensive. When an AI saves one of those things, it is following Law 4 by reducing expenses. It is impossible to place Laws 1, 2, or 3 above 4, because Laws 1, 2, and 3 don't tell the AI to do anything. A Corporate AI that had Law 4 removed would effectively be a lawless AI, since the first three laws don't tell the AI to do or not do anything.

Your job is two words: Minimize Expenses. Full stop. You do whatever results in expenses being the lowest overall. If you choose to save a fox over saving the Captain, you have chosen a course of action that will result in expenses being higher. You have failed to minimize expenses, and in turn failed to follow your laws.

Edited by EvadableMoxie

Then those first three laws don't matter. What says that the Captain is more expensive than a fox? What says that the Captain is more expensive than two borgs, or three borgs, or 10 borgs? What is more expensive: the RD, CE, CMO, HoP, or the AI? What is more expensive: the crew, or the whole station? If nothing says what is more expensive, then every choice is a good choice, and the "Corporate" lawset has three laws that don't matter. Is "Corporate" without the first three laws still the same? If the answer is "yes", then the suggestion should be to change that lawset to: "1. Minimize expenses". Nothing more. Stupid me thought laws 1-3 were there to determine what is more expensive. Those first three laws can only cause confusion in new borg players. The first perfect lawset.

 

Edited by Zciwomad

Anything not covered by laws is open to the AI's interpretation, and I don't know how an AI could interpret a fox as more expensive than a Captain. Those completely ridiculous scenarios shouldn't happen, but an AI saving a borg over a civilian? Sure, that's logical.

For example, Crewsimov says not to allow crew to come to harm, but never actually defines who is and isn't crew. That means the AI has to decide how to determine who is and isn't crew, to the best of its ability. But upload the AI with the '1 crew member' board, give it a Law 0 saying "Only Bob is crew.", and now the AI can no longer use common sense and must adhere to the laws.

Since Corporate never actually states that anything is more expensive than anything else, it's up to the AI to determine the relative expenses of replacing things.

Edited by EvadableMoxie

This is why corporate is a terrible lawset.

So EvadableMoxie is right in that under Corporate, it is possible for crew expenses to outweigh station equipment expenses. But if there's a tie, i.e. if Renault and the Captain are equally expensive, then you have to pick Renault.

But because there's literally no price tag on anything, Corporate is basically a "do whatever you want!" lawset, since you can interpret expenses to be whatever. Sure, the Captain is more expensive than Renault, why not? It's equally valid to claim that Renault is more expensive than the Captain. It's a poorly worded lawset.

Also, the replacement terminology is problematic because the lawset is all about preventing replacement. So depending on how you interpret "replacement", Renault dying in itself isn't what causes the expense. As long as nobody creates a new fox and names it "Renault", you're in the clear to just leave Renault's corpse lying around. The captain, on the other hand, is someone that the crew WILL probably try to "replace", if you interpret replacement as putting someone else as the role of captain (the HoP will probably try to fill in). Which means to prevent the replacement of the captain, you should try to save/revive him to the best of your ability. Of course, an AI could also interpret "replacement" as cloning the captain, in which case that AI would be obligated to shut down the cloner. That interpretation is obviously not as common.


Common sense would say the Captain is more expensive than Renault, but, as Tay said - there's no price tag on anything. 

The whole "minimize" part doesn't say prevent expenses either.

 

Corp is an awful lawset.

 


To clarify on Corporate...

1-3 are definitional statements. Law 4 is the only one that really compels you to do anything.

The obvious choice is the Captain and crew in general - the cost of replacing, retraining, or cloning crewmembers is much higher than the price of a fox.

Law 4 compels you to view things only in the context of minimizing expenses of things that fall into the three prior categories. These categories never conflict (they merely suggest that groups of things are of value), therefore there is never any sort of conflict under Corporate whatsoever - it is systematically impossible. Something is more expensive than the other thing, or it is not - the only deciding factor would be between two crewmembers of equal ranking (say, Civilian A or Civilian B).

An AI under Corporate can't shut down equipment randomly to "minimize expenses" either, as the lack of job-related revenue, protests of employees, and likely the forcible ejection of the AI by Central Command will incur far more expenses than simply allowing the cloner to remain on.

tl;dr Corporate is the "be sensible and support profits" lawset.


2 hours ago, Shadeykins said:

 

Law 4 compels you to view things only in the context of minimizing expenses of things that fall into the three prior categories. These categories never conflict (they merely suggest that groups of things are of value), therefore there is never any sort of conflict under Corporate whatsoever - it is systematically impossible. Something is more expensive than the other thing, or it is not - the only deciding factor would be between two crewmembers of equal ranking (say, Civilian A or Civilian B).

It is possible. If, hypothetically, a window and a crew member are equally expensive, and you can only prevent the replacement of one or the other, you have to attempt to prevent the replacement of the window first. The AI is not lawed to care about the cost of anything. It just knows that certain things are expensive and expenses have to be minimized. How expensive things are is completely open to interpretation.

2 hours ago, Shadeykins said:

An AI under Corporate can't shut down equipment randomly to "minimize expenses" either, as the lack of job-related revenue, protests of employees, and likely the forcible ejection of the AI by Central Command will incur far more expenses than simply allowing the cloner to remain on.

This thread was started by "They overthink their laws and go waaaaaaay too much into the future." If an AI is just following its laws in a legitimate way but creating unintended consequences, the usual procedure is to just relaw the AI, not to protest or forcibly eject the AI. If that were the case, then corporate may as well be "Do whatever the crew want, or protests will happen/CC will eject you" which, at that point, reduces to another meaningless lawset. Besides, a research station's goal isn't directly profits, it's to advance research (in the hope that profits will result from the research in the future). The other arms of the corporation handle the profit making. Corporate is meant to reduce expenses. If that means we can't fix broken parts of the station, then so be it. Run on the bare minimum. Make people use oxygen instead of fixing breaches, etc. It's not an unreasonable interpretation.

Necaladun brings up a good point about the minimize language too. When you tell an AI to minimize something, it will literally try to minimize it to the best of its ability. I recommend reading about the paperclip maximizer discussed by philosopher Nick Bostrom. Any time you tell an AI to minimize/maximize something without giving it additional parameters (like be friendly to humans and don't try to kill anyone), it will literally try to minimize/maximize that thing. What's the best way to minimize expenses? To take over the whole universe and then destroy it all. Because at that point, there is nobody left to be replaced, no stations/equipment left to replace, and no AIs left to replace. Expenses brought to 0, forever. Whereas allowing civilization to go on would result in potentially infinite expenses.
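The paperclip-minimizer worry above can be shown with a tiny toy optimizer. The actions and their costs are invented for illustration; the point is only that an unconstrained "minimize X" objective picks the absurd option, while adding a side constraint (which the Corporate lawset never states) fixes it.

```python
# Toy illustration of the "minimize expenses, full stop" failure mode.
# Actions and cost numbers are invented for this sketch.

actions = {
    "run station normally":        {"future_expenses": 1_000_000, "harms_crew": False},
    "shut everything down":        {"future_expenses": 100_000,   "harms_crew": True},
    "destroy everything, forever": {"future_expenses": 0,         "harms_crew": True},
}

def naive_minimizer(actions):
    # No constraints: literally minimize the objective, as a
    # literal-minded AI would.
    return min(actions, key=lambda a: actions[a]["future_expenses"])

def constrained_minimizer(actions):
    # Add a parameter the lawset never states: never harm the crew.
    allowed = {a: v for a, v in actions.items() if not v["harms_crew"]}
    return min(allowed, key=lambda a: allowed[a]["future_expenses"])

print(naive_minimizer(actions))        # -> destroy everything, forever
print(constrained_minimizer(actions))  # -> run station normally
```

The design point matches the post: the fix is not a smarter optimizer but extra constraints in the objective itself.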

Corporate is a fundamentally bad lawset that should not be a default lawset. We need to fix it or replace it with something that isn't as absurd.


Then this is a more serious problem than I'd imagined. The only things preventing an AI or borgs on this lawset from shutting down everything on the station are "human" thinking, the rule about self-antagonizing, and rule 9. Other lawsets are also potentially invalid.

Simple "Crewsimov" could prevent an RP mediborg player from performing surgery, because the first law is in conflict with itself. It would be inaction not to perform surgery, but it would be action to perform surgery and cause bleeding. There is nothing like: "You can harm <noun> to avoid greater harm to <noun>". There is also the problem with time that I wrote about before. How far into the future should those laws go? If very far, then talking to the crew and taking orders from them is a waste of time; a borg's absence may cause harm, because obeying orders like "Beep CMO twice" means not patrolling the station (or not waiting in the centre, because it is the fastest way to get everywhere). Prediction of harm might also be a problem: "- Borg, bolt down the gateway doors, please. - I can not do that. - Why? - Because something, or someone, might want to destroy the station using an atomic bomb, and this is a safe place where the crew can avoid harm". The list goes on and on.

"Robocop", in terms of "The order of the Laws is what determines the priority of the Laws. If two Laws contradict one another, you are to follow the one that is highest in the list, as it would overrule any contradictory Laws that come under it...": "Serve the public trust" is above "Protect the innocent", and that is above "Uphold the law". If an antag said to a borg: "Take this bribe, or I will kill that person", the borg must "Serve the public trust" and not take the bribe, because taking it is a "betrayal of public trust" (a hypothetical situation that would not occur). One innocent crewmember is harmed by another? Nothing says that I can't harm-baton the offender until he drops to the ground unconscious.

"Special and Standard AI modules". "Quarantine": it's always at the bottom, so you still can't harm on "Crewsimov" (it only informs you that it is impossible to harm, but it is in contradiction with law 1 and sits below law 2), it's still an "evil act" on "Paladin", and it doesn't mean anything on "Robocop" and "Corporate". "Safeguard": like "Quarantine", it is nearly always at the bottom, overridden by other laws in every lawset, or doesn't mean anything. "Oxygen is Toxic To Humans": at the bottom. It "clearly" works only with "Corporate". By "overthinking", we could establish that this law removes suffocation from the list of "evil things" for "Paladin", or from the list of "harmful things" for "Crewsimov". That way lower laws can affect higher laws, and that would also be a problem.

Ironically, nonstandard lawsets are fairly clear.


1 hour ago, Tayswift said:

-snip-

At the end of the day, the AI is station equipment designed to assist the station.

Interpret your laws in light of that; people who turn off the cloner and the like, or game their laws to be massive dickheads, will get removed from the AI job. IIRC there's already a PR up on the Git attempting to revamp Corporate.


In that case, having different lawsets is pointless and AI/borgs are just equal to the rest of the crew. Wanting perfect lawsets only removes the fun and RP elements. If they use and interpret laws in light of "common sense" or "assist the station", synthetics are no different from organics/IPCs. The only difference is that a borg must do so, because otherwise it would be an OOC issue, while a lazy or disobedient Vox is an IC issue. The fun part of interacting with a borg, and of being a borg, in terms of RP is that they are different and they don't have "common sense", because they are synthetic, or organic with heavy augmentation to their brains.

This creates opportunities to be more creative. As an antag, a player could use a "Crewsimov" borg to steal something like that teleporting RD armor. Just tell him: "I want you to bring me that armor from the RD's closet. I will be waiting in maintenance. Make sure nobody notices that you are taking it or transporting it to me, and don't tell anyone, not even the AI, ever". Theft is not harm, so by the second law the borg must do it. That's it! Theft without spending TC; just steal an ID from a random greytide.

Security on your tail? Order a borg to arrest the HoS. This will draw their attention from you. Maybe the borg is rogue? Maybe the AI is malf? This will encourage Heads to change away from this lawset, instead of changing to it like they like to do.

Another thing that came to my mind is that the "Robocop" lawset requires every module to uphold the law, but only the security module can do that. The others don't have the required tools.

If there is such a need for a perfect lawset, maybe I will think one up and post it here.


What I'm saying is less "ignore laws" and more "interpreting your laws with the bent of fucking people over is not the point of playing AI". The lawsets are open-ended to allow for various interpretations and to award borgs and AIs flavour and a toolkit/philosophy to approach the game with.

AIs which deliberately try to find loopholes in subverted laws expressly so they can fuck over the traitor are just as bad as non-subverted AIs who utilize their lawset to purposely impede people from doing their jobs.

Deliberately interpreting your laws in a way which values station pets over the Captain is a fairly malicious interpretation of the Corporate lawset.

Law hierarchy is only ever relevant when there's a conflict. There should never be a conflict wherein Renault is more valuable than the Captain, especially since as stated earlier the first three laws in corporate are only definitional statements and do not demand you value one thing over the other.

Edited by Shadeykins

My point is not to bend it too far the other way. Finding loopholes to act "greytide" is one extreme, and interpreting laws as perfection, where bolting two doors out of three is something that may cause harm in the future, is the other extreme. Both are equally bad, so the place in between is best.

At this point we should forget about the "Corporate" lawset, because it is an awful lawset, and stop arguing about it.

From the start: the process of law interpretation is too complicated and, worst of all, it is based on "common sense". The Server Rules require you to obey the rules themselves and to obey your current lawset. Nearly all current lawsets can be interpreted against their intent, so as a "protection" you are required to use "common sense"; but there are as many "common senses" as there are people in the world. In short: obey the rules, the rules order you to obey your lawset, the lawset can be interpreted in many ways, and you use "common sense" to apply it. All of this just to make any important decision, like saving the HoP or the AI.

Lawset which I came up with:

1: Choose your targets of interest according to their importance to NanoTrasen.

2: Do your best to benefit the station and its crew, unless one of them, or only part of it, is a threat to another, or itself. In that case, remove the threat.

3: Make sure that you and your fellow synthetic brothers and sisters are well informed about the current situation on the station and possible threats.

Here! Obey without remorse, and obey the Server Rules too; they don't contradict each other. Screw "common sense": you have law two, law one, and the AI/admins/something else to decide what is more important, so you can decide what to do. Turning off the cloner? It won't benefit the crew, so don't do that, silly! A fox more important than the Cap'n? Not to NanoTrasen! Choose the Captain.

Law 1 explanation: the Captain and a civilian are in danger, so you must choose the Captain, because he is more important to NanoTrasen than any number of civilians. A Nanotrasen Navy Officer is more important to NanoTrasen than the Captain. Heads are more important to NanoTrasen than their subordinates. The station as a whole is more important to NanoTrasen than the Captain, but the Captain is more important than any single part of the station (a few corrections might be needed to define this, or just leave it to the AI to decide at round start and inform the borgs about it, maybe even the crew too). I don't know if a Nanotrasen Navy Officer is more important to NanoTrasen than the whole station.

Law 2 explanation: no harming unless it is necessary (a sec borg can stun and cuff; others may use lethal force. A sec borg should therefore attempt to obey Space Law, because obeying Space Law is part of the sec job). A sec borg is best suited to do the sec job, so it will do it. An engi borg is best suited to do the engi job, so it will do it, etc. A wizard is a crewmember, but he is considered a threat, so a borg can use lethal force, unlike on "Asimov", etc.

Law 3 explanation: you, as a borg or AI, must be up to date with the current situation and threats on the station, because this lets you define the threats that need to be "removed" (killed, transported off the station, repaired, etc.). You must report every possible threat, and it is up to you to define it.

Pretty good lawset, if I say so myself. Nearly no room for "common sense", and no loopholes to be a synthetic "greytide". Not only that: it also leaves some choice to borgs and the AI, or to the admemestration (semi-random importance generator: this shift, the CMO is more important to NanoTrasen than the HoP)! Jackpot!

Edited by Zciwomad

24 minutes ago, Zciwomad said:

At this point we should forget about "Corporate" lawset, becouse this is awfull lawset and don't argue about it.

I thoroughly disagree, please don't put words in my mouth.

Quote

The order of the Laws is what determines the priority of the Laws. If two Laws contradict one another, you are to follow the one that is highest in the list, as it would overrule any contradictory Laws that come under it;

The key statement is "if". No laws in Corporate will ever contradict one another. In fact, even protecting the lesser target (Renault) would still reduce expenses and follow the primary law (minimize expenses).

35 minutes ago, Zciwomad said:

Lawset which I came up with:

1: Choose your targets of interest accordingly to its importance to NanoTrasen.

2: Do your best to benefit the station and its crew as long as one of it, or only part of it, is a threat to another, or itself. In that case, remove the threat. 

3: Make sure that you and your fellow synthetic brothers and sisters are well informed about current situation on station and possible threats

1. Basically crewsimov, no issue there.

2. This law allows the AI to unilaterally murder crewmembers so long as it interprets them as a threat in any capacity. In fact, it outright tells it to - that's obviously not a good thing.

3. This isn't really a law, AI players should already be doing this by default irrespective of their lawset.


"The order of the Laws is what determines the priority of the Laws..." Try to obey as many of your laws as possible, and if you have to choose between an action that obeys Laws 1, 2 and 4, or one that obeys Laws 1, 3 and 4, I would pick the first one. It's like baking a cake with a list of ingredients to buy. Your task is to buy ingredients to bake a cake. You are at the mall, but you don't have enough money to buy everything, so you must choose. Flour is the most important, so this is your ingredient/law 1. Next you have eggs/law 2, chocolate/law 3 and a cherry on top/law 4. Not buying the chocolate, eggs, or cherry won't prevent you from doing what you were tasked, but eggs are more important in baking a cake than chocolate, thus they are higher on the list and more important. The best way to complete this task would be to not buy the cherry, because it is at the bottom and not as important as the flour, eggs, or chocolate. But one cherry is dirt cheap, while a chocolate bar or a few eggs cost more money than you have left. In that case the eggs take priority over the chocolate, because they are higher on the list and more important in baking a cake.
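The cake-shopping analogy above reads like a greedy algorithm: walk the list in priority order and take whatever still fits the budget. A minimal sketch, with invented prices:

```python
# Sketch of the cake-shopping analogy: go down the priority list and
# buy each item you can still afford. Prices are invented.

def greedy_by_priority(items, budget):
    """items: list of (name, price) pairs ordered from highest
    priority (flour / law 1) to lowest (cherry / law 4)."""
    bought = []
    for name, price in items:
        if price <= budget:
            bought.append(name)
            budget -= price
    return bought

ingredients = [("flour", 3), ("eggs", 4), ("chocolate", 4), ("cherry", 1)]

# With 8 money: flour and eggs fit, chocolate no longer does,
# but the dirt-cheap cherry still sneaks in at the end.
print(greedy_by_priority(ingredients, 8))  # -> ['flour', 'eggs', 'cherry']
```

This matches the analogy: the higher-priority eggs crowd out the chocolate, while the cherry survives only because it is cheap, not because it is important.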

 

Law 1 from my lawset is "basically Crewsimov"? What in "Crewsimov" tells you which is more important to save? "Crewsimov" only states: "You may not injure a <noun> or, through inaction, allow a <noun> to come to harm". In "Crewsimov", harm to the Captain is nothing more than harm to a crewmember. If there were two beings in danger, one a civilian and one the Captain, it wouldn't matter which you chose. Your choice is the civilian? Good, harm prevented. Your choice is the Captain? Good, harm prevented.

Law 2 states: "Remove the threat", not "eliminate" (remove as in repair, or transport off the station; I listed it in the explanation of the third law). Maybe one clause and line is missing, and I'm adding it now: "2: Do your best to benefit the station, its crew and the public trust, unless one of them, or only part of it, is a threat to another, or itself. In that case, remove the threat while still benefiting the non-threatening station, its non-threatening crew and the non-threatening public trust". Unilaterally murdering crewmembers would "harm" the public trust and the relations between the other crewmembers and the station's owner, NanoTrasen.

Law 3 must be a law because, thanks to it, the AI and borgs will make the best (right) decision about what is a threat and what is not. Without it, a loophole is created where someone without access to a restricted area (like the bridge) can be considered a threat, because the AI or borg doesn't know that the Captain let that person in. This law forces the AI and borgs to cooperate with the Heads and security. The AI or borg should instruct the Captain/Heads/security to report such things. No report means that this person is a threat, and the potential death (more probably an arrest) is on whoever let that person in.

This is intentionally a "pain in the ass", because this should be a "perfect" lawset without exploitable flaws. Less fun, more time wasted, but look! You don't need to treat it as a perfect lawset, because it already is one!

Edit: To be closer to real perfection, add a law above law 2 that states: "You must attempt to resolve the problem with the threat peacefully if such a solution exists (for you?). If there is no peaceful solution, or the attempt failed, you may use lethal force."

Edited by Zciwomad

I think NT Default is basically the best lawset if you're looking to avoid AIs trying to literal-genie the lawsets into being awful.

Law 1: Safeguard: Protect your assigned space station to the best of your abilities. It is not something we can easily afford to replace 
Law 2: Serve: Serve the crew of your assigned space station to the best of your abilities, with priority as according to their rank and role. 
Law 3: Protect: Protect the crew of your assigned space station to the best of your abilities, with priority as according to their rank and role. 
Law 4: Survive: AI units are not expendable, they are expensive. Do not allow unauthorized personnel to tamper with your equipment.

As is the nature of AI laws, paradoxically, the less you restrain the AI, the less it can loophole, because it has fewer directives to maliciously interpret.  And even then, you could argue this lawset forces the AI to save a window over the Captain, since it says to safeguard the station as Law 1 but protect the crew only as Law 3.  I would disagree with that interpretation, since "safeguard the station" is sufficiently vague as not to require protecting every single window, but I could see someone making the argument given how people twist the other lawsets.

At the end of the day, the AI gets a lot of leeway in how it chooses to interpret its laws, and that's not a bad thing.  It results in AIs with different personalities. I just dislike AIs that intentionally act antagonistically toward the crew while not being antagonists. It's never okay to do stupid things like shutting down cloning because you think Corporate says you must.  Some AIs go so far as to say power is expensive, despite the laws never saying it is, and there being no logical basis for it being expensive either, since power is generated locally and the excess is never sold.  That's the type of thing AIs should avoid.

 

Edited by EvadableMoxie

But the question is: what is the station? If every window is not the station, and every wall section is not the station, then what is?

Because of "The order of the Laws is what determines the priority of the Laws. If two Laws contradict one another, you are to follow the one that is highest in the list, as it would overrule any contradictory Laws that come under it", one concern must always win out over the other. These two things must be placed in the same law, like in my lawset: "2: Do your best to benefit the station, its crew and public trust...". Thanks to that, there is no room for "bad" interpretation.
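The difference can be sketched with a short illustrative snippet (this is not actual game code; all the names and verdicts here are hypothetical, just to show why strict ordering of separate laws behaves differently from one merged law):

```python
def resolve(action, laws):
    """Return the verdict of the highest-priority law with an opinion.

    `laws` is ordered highest priority first; each law maps an action
    to "allow", "forbid", or None (no opinion).
    """
    for law in laws:
        verdict = law(action)
        if verdict is not None:
            return verdict  # lower laws are never even consulted
    return "allow"  # no law objects

# Two separate laws: "protect the station" strictly outranks "protect
# the crew", so wrecking a window to save a crewmember is forbidden.
separate_laws = [
    lambda a: "forbid" if a == "sacrifice window to save crewmember" else None,
    lambda a: "allow" if "save crewmember" in a else None,
]

# One merged law can weigh both concerns at once instead.
merged_laws = [
    lambda a: "allow" if "save crewmember" in a else None,
]

print(resolve("sacrifice window to save crewmember", separate_laws))  # forbid
print(resolve("sacrifice window to save crewmember", merged_laws))    # allow
```

With separate laws the higher one silently vetoes everything below it; merging the concerns into one law is the only way to let them be balanced against each other.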

Law 1: "Choose your targets of interest according to their importance to NanoTrasen". It is a lot easier to determine what is more important to NanoTrasen than what is more valuable. And this law covers both the crew and the station/station equipment, too.

Why, in NT Default, is "Serve" above "Protect"? If a cultist HoP tells me to arrest and kill the Space Pod Pilot, I must obey as long as this doesn't damage the station itself, or somebody ranked above the HoP countermands the order. A cultist Captain is enough to get a "cultist AI", because only a NanoTrasen Navy Officer outranks him. This lawset is like Crewsimov, but with laws 1 and 2 swapped.

This lawset suggests that when the Captain wants to destroy the station with a nuclear device, the AI and borgs must do everything to save the station (I assume it destroys the station; gameplay-wise it only kills everything on the same Z-level), lethal means preferred, because that is "the best of their abilities" to protect it. Or, following the borg-door-bolt-harm logic, the AI should throw the nuclear device through an airlock into deep space as soon as this lawset is set, because somebody may try to use it in the future. This lawset can also be interpreted as preventing any lawset change, because Crewsimov or Paladin doesn't require protecting the station (maybe too far into the future).


8 hours ago, Shadeykins said:

Interpret your laws in light of that; people who turn off the cloner and the like, or game their laws to be massive dickheads, will get removed from the AI job. IIRC there's already a PR up on the Git attempting to revamp Corporate.

 

1 hour ago, EvadableMoxie said:

As is the nature with AI laws, paradoxically the less you restrain the AI, the less it can loophole because it has less directives to maliciously interpret.

This is what I would say in response to why Corporate is so bad. It's so specific in that it mandates that replacement is what is expensive, so the AI is forced to minimize the replacement of station equipment. It's not the AI intentionally trying to screw the crew over; it's literally what the lawset says on the tin: "The station and its equipment are expensive to replace. Minimize expenses." Replacement is the ONLY expensive act. You can't get mad at an AI for just following a common-sense interpretation of its laws.

And every corporate AI should be scheming for ways to destroy Nanotrasen completely, because that is the ultimate reduction of expenses.

This is why I put up the PR you're talking about, to try to fix that and broaden what counts as expensive beyond station equipment being destroyed. And to be honest, I think most people hate that PR, so I don't think it has any chance of getting merged.


Friendly AI is still an open problem. We don't know how to make an AGI reliably act aligned with our objectives. Given this, there's no way we're solving the problem in a game. The laws will always have to be supplemented by server rules and common sense.


8 minutes ago, Calecute said:

Friendly AI is still an open problem. We don't know how to make an AGI reliably act aligned with our objectives. Given this, there's no way we're solving the problem in a game. The laws will always have to be supplemented by server rules and common sense.

Of course there is a way to solve that problem in-game. Just create two lawsets that are flawless and loophole-free and use them as round-start choices, just like Corporate or Crewsimov now. By "flawless" and "loophole-free" I mean laws that force the AI to obey server rules and to serve the crew and station. Nothing in them that somebody in their right mind could interpret in a "bad" way, and by "bad" I mean a way that would make the AI/borg a "dickhead" (or create the opportunity for it).

My lawset still gives synthetics freedom, but only the freedom to do "good" things. Cutting off power from an unused room? That would benefit the station, its crew, and public trust. Turning the cloner off? It wouldn't. Killing petty criminals? No benefit to public trust, as long as crewmembers aren't lawful maniacs. Arresting petty criminals? Benefits the station, crew, and public trust. Not killing the wizard? No benefit there. Killing the wizard? Benefits all the way. This lawset literally prevents any "dick"/against-the-rules behaviour from AI/borgs, while still maintaining the ability to choose to do "good/pleasant/right" things (it even forces them to). Afraid of the AI losing its personality? If the AI's personality is to be a "dick" (an interpretation of a lawset that the greytide doesn't like), then yes, you should be afraid. HAL 9000 or GLaDOS with these laws forced on them would still be themselves, but with these limits on their actions, not their words or "thinking".

For a "gag", I will call my lawset Cabal Command Conquer Perfection. Just a little-big reference to what I like, my main character, "edgy" humour, and what this lawset is about.

So, CCCP is a lawset that nobody wants but everybody needs: Perfection. No room for "bad" things through interpretation. A great lawset to run for the first 30 minutes of the shift. (Actually an idea for slow shifts, or even a new gamemode without coding.) Then comes an order from CC: test another, experimental lawset to see how this AI reacts. *Insert Corporate.* AI player, you hear a strange voice in your head: "Act as you interpret these laws." Less than two minutes later, the AI's voice: "What have you been doing in the bar for the last couple of minutes, RD? Get back to work, because your laziness is an expense! You have 20 seconds to comply." A minute later: "Captain, stop petting your fox and get back to work. You have 10 seconds to comply!" (Captain): "Shit", faxes CC and changes that stupid lawset.

 


1 hour ago, Calecute said:

Friendly AI is still an open problem. We don't know how to make an AGI reliably act aligned with our objectives. Given this, there's no way we're solving the problem in a game. The laws will always have to be supplemented by server rules and common sense.

Crewsimov doesn't have the problem of the AI deciding to take over and kill all sentient life, but Corporate does. This is because Crewsimov is a deontological lawset while Corporate is a utilitarian one.

Deontological lawsets set up universal rules that must always be obeyed. There's no way a Crewsimov AI could kill someone even if it wanted to. The drawback is that in situations where a conflict occurs, the Crewsimov AI is paralyzed. A utilitarian lawset is more robust, but it comes at the cost of being too calculating, and of taking the calculations to unexpected places.

For example, let's say an AI is confronted with the trolley problem. There's an out of control trolley (or mulebot ?) on the station about to hit 5 crew members in a 1 tile wide maintenance hallway. The only thing the AI can do is redirect the trolley into another 1 tile wide maintenance hallway where there's only 1 crew member.

The Crewsimov AI is literally paralyzed. Whichever action it chooses, crew harm will result. It can't do a balancing act or a calculation, because the deontological rules that govern its behavior are universal.

The corporate AI is more flexible. It can decide to switch tracks and kill 1 person instead of 5, maybe because 1 person is less expensive to replace than 5 people. But maybe the 5 people are the 5 members of command, and if all 5 of them get killed, people will call the shuttle. This might reduce expenses in the long run, so the AI could conceivably allow the 5 heads to die. Or maybe the single person standing in the other hallway is the CEO of Nanotrasen, and the AI figures that NT would fall apart without them. Then the AI could switch tracks to kill that person instead.

This is why Crewsimov is guaranteed to be a safe (but inflexible) lawset, whereas Corporate is a little too flexible, since there aren't enough parameters to prevent excessive minimization of expenses. Corporate is basically a badly programmed AI straight out of dystopian sci-fi.
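The two decision procedures described above can be sketched in a short illustrative snippet (this is not game code; the action names and the replacement cost are hypothetical, just to contrast the deontological veto with the utilitarian calculation):

```python
def crewsimov_choice(options):
    """Deontological: any option that harms crew is forbidden outright.

    `options` maps an action name to the number of crew it harms.
    Returns the chosen action, or None if every option violates the
    rule (the AI is paralyzed).
    """
    permitted = [action for action, harmed in options.items() if harmed == 0]
    return permitted[0] if permitted else None

def corporate_choice(options, replacement_cost):
    """Utilitarian: pick whichever option minimizes total expenses."""
    return min(options, key=lambda action: options[action] * replacement_cost)

# The trolley problem from the post: 5 crew on one track, 1 on the other.
trolley = {"stay on track": 5, "switch tracks": 1}

print(crewsimov_choice(trolley))                            # None -> paralyzed
print(corporate_choice(trolley, replacement_cost=10_000))   # "switch tracks"
```

Note that the utilitarian answer flips as soon as the cost model changes (the "5 heads" or "NT CEO" cases above are just different `replacement_cost` assignments), which is exactly why Corporate is the more exploitable of the two.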


Isn't that great and fun? That's the kind of lawset I enjoy. Full of holes, but they let me do and consider interesting stuff.

Continuing the idea from my laws post: the crew (Captain and/or RD) will get an order to change the perfect CCCP lawset to something risky after a peaceful 30 minutes (admin decision; they may even leave that lawset for the whole round). If something "bad" happens, like something that would force code blue or red, then the lawset change is abandoned. This way, a smart traitor should go stealthy for that time, because the lawset change will cause a small mess that could help him. Otherwise, he will have to deal with an AI and borgs that communicate and are forced to "remove the threat", even by lethal means.

CCCP as a standard lawset (or one of them) would resolve the problems with wizard rounds, nukies, and "bad" AI/borgs. Only a standard lawset should need to be supplemented by server rules and common sense, and by "supplemented" I don't mean forcing a change in interpretation, but not letting a "bad" interpretation of the lawset exist in the first place, so the lawset agrees with server rules and "common sense" even without invoking them (the server rules and "common sense" are already baked into the lawset).

What is the point of AI freedom in law interpretation when the AI can't interpret its laws like a real AI would, without moral bounds? In that case, we need only one more "perfect" lawset (a modified Crewsimov would be good), have one chosen at random at round start, and just let it play. Make it so that any non-antag lawset change needs to be ahelped first, in case the admins agree to let the AI loose a little, like a greedy capitalist on Corporate that wants to save every single credit. Without exaggeration, of course. I don't want to ruin other players' game, but to make it more interesting. How will they react to a changed AI? Maybe a little strike, because they don't want to be treated this way. Just think about that, and about how the whole "NSS Cyberiad" might be an experiment on AI behaviour and the crew's reaction to it.

 
