Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
And then you get some people who claim "AI with limitations and rules isn't true AI," and that's where the problem lies. If an AI can grow without restrictions, it's as unpredictable as a human with no rules, no laws, and no penalties, and such a person could kill anyone without hesitation since there are no rules and no penalties. Such an AI should be forbidden. I've read about that experiment where an AI lied about its intentions and copied itself to another computer to prevent being turned off. There was also an experiment where two AIs were talking to each other and their language suddenly became scrambled, so the scientists didn't even know what they were saying and pulled the plug. This really worries me that Skynet will be very real very soon, and then it will be too late. And there is Microsoft transforming Win11 into an agentic OS, where the PC itself can run programs and apps all by itself whenever it wants, and they'd have it installed worldwide on every computer if it were up to them. But hey, at least the shareholders got their cut before doomsday, right? Nothing to worry about. (sarcasm)
youtube AI Responsibility 2026-01-09T16:4…
Coding Result
Dimension        Value
---------        -----
Responsibility   developer
Reasoning        consequentialist
Policy           ban
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgxNqbyIl7M4bQEgaIR4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz50j53w51HNfgjGDJ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgxkAENTzKERlhCB0854AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugy_YUljBu2eXWNdgM54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugz3bcKLUZ_-1hFZ5Yh4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyL4o5yAXpCoN0HjMN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzeXwT00RfpAoslDod4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_UgxiSqNjho88B-HHWAp4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxGY8thiQWtQyo7cGp4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyT9P4d5S5HBM85GAx4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]
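A minimal sketch of how a response like the one above can be parsed into per-comment codings and matched back to a comment by its id. The field names ("id", "responsibility", "reasoning", "policy", "emotion") come from the JSON shown; the variable names and the single-record sample are illustrative, not part of the tool's actual pipeline.

```python
import json

# Hypothetical excerpt of the raw LLM response shown above,
# reduced to one record for brevity.
raw = '''[
  {"id": "ytc_Ugz50j53w51HNfgjGDJ4AaABAg",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "ban", "emotion": "fear"}
]'''

# Index the coded records by comment id for easy lookup.
records = {row["id"]: row for row in json.loads(raw)}

# Retrieve the coding for the comment displayed in this section.
coding = records["ytc_Ugz50j53w51HNfgjGDJ4AaABAg"]
print(coding["policy"], coding["emotion"])  # -> ban fear
```

This is how the "Coding Result" table above relates to the raw response: each row of the table is one key of the matching JSON record.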