Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A.I. has potential to increase it's intelligence at a rate and with intellect beyond our comprehension. Imagine, if we became peaceful and don't destroy ourselves, how smart the human race will be in 500,000 years. Now imagine a system able to rapidly advance its own intelligence to such a level within a year. That's how smart it will become without regulation. I believe within many of our lifetimes, singularity will occur and if the computing power exist and we really let pandora out the box AI could become that smart. The difference in knowledge between human and AI would be like comparing Einstein with an earthworm. In my opinion, AI may be the greatest threat to humanity. We need to take a step back and really consider the consequences of advancing such technologies.
youtube AI Responsibility 2023-07-10T10:0… ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxQs84exYvzFlYlRg94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgylUClyleDbs4yyFkd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwqgoGD0gzb46y1Cyl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyFiQfJDjULXp9XwYl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxKNsn10iJeSNUnEbV4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxZyWYU1q526gZOebN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxkX1Mkk33-WjFqPwR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwbnKg6KGPbJrYdl1Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwORZAT7jevdKeFM_Z4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugy-PgUL7EnKdLy78_p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
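A raw response like the one above can be parsed and sanity-checked before its codes are stored. The sketch below is a minimal validator, assuming the four dimensions shown here and allowed values inferred only from this sample batch (the full codebook may define additional categories):

```python
import json

# Allowed values per dimension, inferred from this sample batch only;
# the actual codebook may permit more categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "developer"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"unclear", "none", "regulate", "ban"},
    "emotion": {"mixed", "fear", "resignation", "indifference", "outrage"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every coded comment in it."""
    entries = json.loads(raw)
    for entry in entries:
        if "id" not in entry:
            raise ValueError(f"entry missing 'id': {entry}")
        for dim, allowed in ALLOWED.items():
            value = entry.get(dim)
            if value not in allowed:
                raise ValueError(f"{entry['id']}: unexpected {dim}={value!r}")
    return entries

# One entry from the batch above, passed through the validator.
raw = (
    '[{"id":"ytc_UgxZyWYU1q526gZOebN4AaABAg","responsibility":"none",'
    '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]'
)
batch = validate_batch(raw)
print(batch[0]["policy"])  # regulate
```

Rejecting off-codebook values at parse time keeps a single malformed LLM reply from silently polluting the coded dataset.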