Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
In my opinion, AI can't really become "superintelligent" because it doesn't have a physical form—it lacks a body. If scientists ever manage to move AI's awareness or intelligence from the digital world into an actual, physical body, that’s when we might start to regret creating AI in the first place.
youtube AI Governance 2025-11-18T14:0…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxqHzVlUvyo6ymWNLt4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",      "emotion": "fear"},
  {"id": "ytc_UgxVs4nMFTfeMTk3cCx4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_UgzY7MIXm9h2zkWwHap4AaABAg", "responsibility": "user",        "reasoning": "deontological",    "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_Ugw0kI9PKQKHFrEZanF4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "ban",       "emotion": "outrage"},
  {"id": "ytc_Ugy6D83PoRolGZT80ZR4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "unclear",   "emotion": "resignation"},
  {"id": "ytc_UgyxPkQb6L7GFT8Dcct4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgypmTzgkSktdoBZY3d4AaABAg", "responsibility": "none",        "reasoning": "virtue",           "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgwJKhVSADMtJne9HmF4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",   "emotion": "mixed"},
  {"id": "ytc_UgyU12hBDskqxOhkzvt4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugz6zlhkGYpb20NdT6B4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "none",      "emotion": "outrage"}
]
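The raw response is a JSON array of per-comment codes, one object per comment ID. A minimal sketch of how such output could be parsed and validated before use (the `ALLOWED` label sets below are inferred from the values visible in this response, not from an official codebook, and may be incomplete):

```python
import json

# Allowed labels per dimension, inferred from the raw response shown above
# (illustrative assumption; the real coding scheme may include other labels).
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "approval", "indifference", "mixed"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: {dimension: label}}.

    Raises ValueError on a label outside the allowed set, so malformed
    model output is caught before it reaches analysis.
    """
    out = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        codes = {dim: rec[dim] for dim in ALLOWED}
        for dim, label in codes.items():
            if label not in ALLOWED[dim]:
                raise ValueError(f"{cid}: invalid {dim} label {label!r}")
        out[cid] = codes
    return out

# Example with one record copied from the response above.
raw = ('[{"id":"ytc_UgyxPkQb6L7GFT8Dcct4AaABAg","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"fear"}]')
codes = parse_codes(raw)
print(codes["ytc_UgyxPkQb6L7GFT8Dcct4AaABAg"]["policy"])  # liability
```

Looking up the record by its comment ID reproduces the coded dimensions shown in the table (responsibility: developer, policy: liability, emotion: fear).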