Revamping the NT Default Lawset


Tayswift

A month ago, the NT Default lawset, along with several others, was removed from the pool of default AI lawsets for being too validhunty and making the AI a command/sec slave, leaving only Crewsimov and Corporate. I think Corporate is not the best lawset for a number of reasons, but at the same time, no current lawset is appealing enough to replace it. I'm sure a lot of people wouldn't like it if Crewsimov were the only roundstart lawset, so I've been trying to come up with a good, non-validhunt, non-command lawset to replace Corporate in the default rotation. The old NT Default would be renamed to StationGuard, and a new NT Default would be added to the default rotation.

I consulted the literature and found Microsoft CEO Satya Nadella's essay on AI principles. I thought this was a good starting point, since Microsoft, like Nanotrasen, is a big corporation but also has some interest in the well-being of people. The idea is to make a servile AI that is respectful of people but still allowed to make tough decisions. Here's what I've come up with so far in attempting to adapt Nadella's ideas into a new lawset:

- Respect and uphold the rights and dignity of the crew. (This law ensures the AI tries to counter command/shitcurity abuses, and also forbids the AI from announcing ass inspection memes.)
- Be transparent to the crew in your decision-making. (This law makes sure the AI is accountable at all times and isn't scheming behind its masters' backs. If the reason for an AI action is unclear, it must explain.)
- Serve the crew by assisting them to the best of your ability and acting as a safeguard. (This law makes sure the AI helps when needed, especially when crew are in danger. It also allows the AI to detain crew who pose a danger to others.)
- Maximize the productivity and efficiency of the station. (NT, at the end of the day, is a for-profit corporation.)

Thoughts?


This could make a good new lawset; however, there are a few issues with 1 and 2:

For 1 - Rights aren't really defined. NT isn't big on rights, and the crew have little if any. I'd look for something more along the lines of "protect the lives of". Dignity is also poorly defined - would the AI be obliged to allow my Unathi to duel those who insult him? I'd try to avoid words that aren't well defined.

For 2 - I'm not sure how exactly this would function, and it might bog down the AI in having to explain everything it does.

3 - Looks good. I like safeguard as a term, although maybe that should be law 1.

4 - Generally good. "Maximize" might be an issue, although laws 1-3 will stop it from borging everyone to "maximize efficiency".

 

On 6/13/2017 at 3:39 PM, Tayswift said:

(NT, at the end of the day, is a for-profit corporation)

I'd say at the start of the day actually :P


10 hours ago, necaladun said:

This could make a good new lawset; however, there are a few issues with 1 and 2:

For 1 - Rights aren't really defined. NT isn't big on rights, and the crew have little if any. I'd look for something more along the lines of "protect the lives of". Dignity is also poorly defined - would the AI be obliged to allow my Unathi to duel those who insult him? I'd try to avoid words that aren't well defined.

For 2 - I'm not sure how exactly this would function, and it might bog down the AI in having to explain everything it does.

3 - Looks good. I like safeguard as a term, although maybe that should be law 1.

4 - Generally good. "Maximize" might be an issue, although laws 1-3 will stop it from borging everyone to "maximize efficiency".

Hmm, good points. I'll leave the rights/laws stuff to Robocop. How about this for a second iteration?

  1. Safeguard the health of the crew.
  2. Be transparent in your decision-making.
  3. Serve the crew by assisting them to the best of your ability.
  4. Maximize the productivity and efficiency of the station.

I think transparency just means that if the AI does something potentially unclear, or if someone is confused about why the AI did something, then it has to explain. It avoids the whole "I'm afraid I can't do that, Dave" situation, but imo doesn't obligate the AI to explain everything it does.


That would be stretching it, I think. Excessive drinking is a health issue that the AI would want to address, but the bar existing isn't a threat to crew health in and of itself.


In one sense, I think it's great: a perfect balance of service (without being Crewsimov and getting bullied by the crew because of law 2) and freedom (without being Robocop with its extreme ambiguity).

 

In another sense, I think it can still be law-lawyered. As others have said: what are the rights and dignities? Where are they defined? If someone screams "Respect my right to murder people", is the AI required to?

 

It's definitely a great start, but law 1 may need some tweaking. I don't have any ideas for it at the moment, however.


14 hours ago, Tayswift said:

That would be stretching it, I think. Excessive drinking is a health issue that the AI would want to address, but the bar existing isn't a threat to crew health in and of itself.

From the point of view of someone who regularly works in the medical department, I have to disagree! In fact, it is both fortunate (for the alcoholics) and unfortunate (for the medical staff) that the bar is directly across the corridor from medical. However, the fact that I brought up this question at all is a problem in and of itself: how far will an AI player go in interpreting their lawset? For me, if I had that lawset, the most obvious thing would be to shut down power to the bar. That is problematic in practical terms, since it would make the bartender unemployed; when coming up with lawsets, we need to make sure they don't conflict with the jobs that are in the game.

I hate to stomp on your idea, but "health" can range from alcoholism to people not using EVA suits, or even willingly borgifying themselves. Sometimes it isn't in our best interest to be healthy. Also, safeguarding the health of around 50 people is a huge responsibility and burden for the AI that I don't think one player can keep up with.

Instead, it should be more akin to "Discourage or prevent the use of illegal force on a crew member aboard the NSS Cyberiad" if you really want the AI to prevent shitcurity, or "Safeguard the lives of the crew aboard the NSS Cyberiad to the best of your ability" if you just want the AI to protect the crew's lives. Personally, I think these two laws could even go together.

Another issue I have is "transparency", a word not a lot of people understand, especially in younger age groups. To make it more easily understood, it should be rephrased as "State your intentions before you act."

Also, "Serve the crew by assisting them to the best of your ability." - does that mean assisting malicious behavior, law-breakers, or traitors?

This is my suggestion instead:
1. Safeguard the lives of the crew to the best of your ability.
2. State your intentions before you act.
3. Assist the crew's requests as long as the requests don't break Space Law or Standard Operating Procedures.
4. Discourage the use of illegal force on a crew member.

This would make the AI slow, but it would also make it very transparent, very helpful, very lawful, and it might even prevent shitcurity. Of course, these are just my thoughts on the subject and should be taken with a grain of salt.

Edited by Tomar_Brindsbane

"Except where they are willing to accept such harm/risks themselves" is a good caveat to add to protection laws for the AI.

2. State your intentions before you act.

This I see as completely impractical. Stating intentions before every action is wayyy too much effort. (And also, isn't stating intentions an action itself, requiring it to be stated beforehand? AHH, INFINITE LOOP.)

 

 


2 hours ago, Tomar_Brindsbane said:

I hate to stomp on your idea but health can range from everything to alcoholism to people not using EVA suit's or even willingly Borgify themselves. Sometimes it isn't in our best interests to be healthy, also safeguarding the health of around 50 people is a huge responsibility and burden for the AI that I don't think one player can keep up with.

No worries, that's why I'm here - for the constructive criticism :) The reason I went with the health of the crew over individual lives is so that the AI can be okay with people dying, but I think Necaladun's caveat is more effective at mitigating stuff like suicide threats. I'm reluctant to bring Space Law in, but let's see if there's a good place for it. I also don't want to make SOP law, because SOP is sprawling, restrictive, and not intended to be adhered to in all situations.

3 hours ago, Tomar_Brindsbane said:

Also "Serve the crew by assisting them to the best of your ability.", does that also mean assisting malicious behavior, law breakers or traitors?

Yes, in isolation it does, but the idea is that the higher laws should prevent the AI from helping to murder someone. Law 2 should help with this too, by allowing the AI to inform people when it's doing something potentially weird.

1 hour ago, necaladun said:

"Except where they are willing to accept such harm/risks themselves" is a good caveat to add to protection laws for the AI.

Okay, how about this for iteration 3?

  1. Safeguard the lives of the crew, while respecting the autonomy of those who choose to face danger.
  2. Be transparent by explaining your actions if they are unclear.
  3. Serve the crew by assisting them to the best of your ability.
  4. Maximize the productivity and efficiency of the station.
