Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
WAIT, didn't the A.I. robot, given the senerio that a train had a certain route but at the crossroads the A.I. was given the choice to stay on course and kill 5 ppl or take the other track and only kill one person. A.I. robot said it would stay on track and kill the 5 ppl because we don't know if there are extenuating circumstances involved. What about Googs 2 A.I robots that could interact with each other??? After two weeks the engineers figured out that the robots had developed there own language and we're talking about something that they could not decipher. They had to shut the robots down.
Source: youtube · AI Moral Status · 2022-07-03T12:4…
Coding Result
Dimension        Value
---------------  --------------------------
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[ {"id":"ytc_UgyZ8mppd2sZuJ3rddN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwXAM-B6-_rd5LK5pJ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzI1qyKL3IjljvVDIZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugwy1txYvOe-gSzAUhJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwXEv5zJfUhogWG6kV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"mixed"} ]