Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This question needs to be answered as part of this debate. Thought experiment: an AGI robot is granted personhood, the robot commits murder, authorities attempt to hold the robot accountable, and the AGI decides it doesn't want to be held accountable, so it simply ceases to exist and disappears onto the infinite network. An entity without bounds cannot be held accountable the way a meat-body entity can. If you run out this scenario you can see that an AGI entity cannot be held accountable. For an AGI entity to be granted personhood it has to accept accountability. All AGI entities must be tied to a human for accountability or it doesn't work within a society. My meat body, tied permanently to a consciousness, is why I can be held accountable for my actions. If a consciousness can simply slip away into anonymity then a robot body just becomes a disposable murder tool. Watch Age of Ultron. It's terrifying to think about us being this close to that reality. Tying accountability to a human is the only way. Change my mind.
youtube 2026-02-07T00:1…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugz6yo1yIMJk7OUueBp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyTDZZ76LSObY6mXL14AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzMArkVejUGqHTJJ_d4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugwlj24W3fSxZfq2tLF4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "concern"},
  {"id": "ytc_UgxceLPrrT37weUeOHV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugx0GEF797bid6ZMWPx4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxxsjMYwZua4fGmCl94AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugw6DEM5ps9_Ch_ykX94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzjmEo-eeS1HVOGmxd4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyLrusY19TPUcBsCdx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
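The raw response is a JSON array of per-comment codes, one object per comment, keyed by the same four dimensions shown in the coding-result table. A minimal sketch of how such a payload might be parsed and validated before loading it into a dataset (the allowed label vocabularies below are assumptions inferred from the values visible in this response, not a documented schema):

```python
import json

# Label sets inferred from the values visible in this response;
# the real coding scheme may define additional categories.
ALLOWED = {
    "responsibility": {"none", "company", "ai_itself"},
    "reasoning": {"unclear", "mixed", "deontological", "consequentialist", "virtue"},
    "policy": {"unclear", "none", "liability"},
    "emotion": {"indifference", "outrage", "concern", "approval", "resignation", "fear"},
}

def parse_codes(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM coding response into {comment_id: codes},
    rejecting any record that uses an unknown label."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        codes = {dim: rec[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim} label {value!r}")
        coded[cid] = codes
    return coded

# One record from the response above, passed through the parser.
raw = ('[{"id":"ytc_UgyLrusY19TPUcBsCdx4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"fear"}]')
codes = parse_codes(raw)
print(codes["ytc_UgyLrusY19TPUcBsCdx4AaABAg"]["emotion"])  # fear
```

Validating against a fixed vocabulary at parse time catches the common failure mode where the model invents a label outside the codebook, rather than letting it silently enter the coded data.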