Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- Are we really going to waste time and money on a malformed line of questioning l… (ytc_UgwhdnD1T…)
- I suppose we need now to build an AI "police force" to keep AI in check. You kno… (ytc_UgykfdZhh…)
- These tech chiefs talking about the future only seem to consider people who prob… (ytc_UgxVHJcUT…)
- yea Waymo and self driving is really cool tech ! Not a replacement for public tr… (ytc_UgzxrOjVt…)
- I work in IT and while talking to some co-workers I'd predicted about 2 years ag… (rdc_nt7hsa1)
- Imagen these things are walking and interacting with us ( real humans ) already… (ytc_Ugyp2HfbU…)
- We appreciate your feedback. If you're interested in engaging with AI models dir… (ytr_Ugy1ZKP70…)
- IS NOBODY ASKING WHAT THIS HORRIFIC THING THAT'S GOING TO HAPPEN WITH AI IS ??? … (ytc_Ugyo1IHeP…)
Comment

> The solution is a very robust version of an AI whose only goal is human survival that overses & is always adjacent to any other AI. This AI would have the final say to overrule any conflict of interest the other AI would have and this overseer AI would have to be purposely made in such a way as to have no conflict of interest. Additionally some sort of human induced emergency off switch needs to be implemented across the board.

Platform: youtube · Incident type: AI Harm Incident · Posted: 2025-08-01T07:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugzf432KKSQbBpV7xkB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwhUBQWqK8utjRkQ0Z4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxgmc8eo4rhlL536f54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyrjiPRiarJADEjejF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxXMRqfM1yGKS4NYz94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwGOmtEcIo4rWH3BSR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgydIv8MbPK_ME-VjV54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugzz3BfKjMzxEaHT47x4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxVH_HSoPWlemuDFi14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwmzO-hHpe2Dvi6vWB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"}
]
```
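A raw response like the one above is only usable downstream if every record carries a valid value on each coding dimension. The sketch below parses and validates such a response; it is a minimal illustration, not the tool's actual pipeline, and the allowed value sets are inferred from the coded examples shown here (the full codebook may include values not seen in this sample).

```python
import json

# Allowed values per dimension, inferred from the sample output above.
# Assumption: the real codebook may define additional values.
SCHEMA = {
    "responsibility": {"none", "ai_itself", "developer", "user", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"fear", "mixed", "approval", "outrage", "indifference"},
}

def parse_llm_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate each record."""
    records = json.loads(raw)  # raises json.JSONDecodeError on malformed output
    for rec in records:
        if not rec.get("id"):
            raise ValueError("record is missing a comment id")
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec.get(dim)!r}")
    return records

# Hypothetical single-record response for illustration.
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"approval"}]')
records = parse_llm_response(raw)
print(len(records))  # prints 1
```

Validating eagerly like this surfaces hallucinated or off-codebook labels at ingest time, before they are written into coding results such as the table above.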