Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It’s naive to think AI will stop at a HAL 9000 or JARVIS. They will always improve to the point where humans are impediments to further development (using resources the AI could use like electricity or land) and must be eliminated.
youtube AI Governance 2025-06-20T15:3…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgxnMcF3tZxVnVukWJV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxL1HGcuhK1mRwfYnV4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwuxNWjK_m_ju-QWzt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxH7rDZxJaUKal-plh4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzAo5ZTwqESTOWk1a54AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "ban", "emotion": "indifference"},
  {"id": "ytc_UgwU-Fg69PdFa9AS7zx4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxzSQXuoF9zBcI1WgV4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgydE-4e2tdZeRpmdzV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyzDQzSwlJXEhvJD_N4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzBZl4Z926rj-6EIEh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]
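A raw response in this shape can be checked before the coded values are trusted. Below is a minimal sketch of such a validator in Python; the allowed category sets are assumptions inferred from the values visible in the output above, not the project's actual codebook, and the function name `parse_coding_response` is hypothetical.

```python
import json

# Allowed values per dimension, inferred from the sample output above.
# ASSUMPTION: the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "user", "developer", "company", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "ban", "liability", "regulate", "unclear"},
    "emotion": {"fear", "resignation", "outrage", "indifference", "mixed", "approval"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of records) into a
    dict keyed by comment id, rejecting records that are missing a
    dimension or that use an unknown category value."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            value = rec[dim]  # raises KeyError if the dimension is absent
            if value not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {value!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Usage with the first record from the response above:
raw = ('[{"id":"ytc_UgxnMcF3tZxVnVukWJV4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"none","emotion":"fear"}]')
coded = parse_coding_response(raw)
```

Rejecting unknown values loudly, rather than silently storing them, makes it easy to spot when the model drifts off the codebook.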