Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- why can't we not make robots? like really. Making robots means profit for big co… (ytc_UghjlPCUY…)
- **The Brainrotting Effects of Artificial Intelligence Videos** In recent years,… (ytc_UgwE1rPbl…)
- Yeah, also let me tell you a secret about those “self driving trucks “ 😂 They’re… (ytc_UgzZ1asNW…)
- AI is doing exactly what's already been done, based on my experiences... Past, … (ytc_Ugy6kpF94…)
- Sooo. It's ok for them to train AI by using copyrighted material without paying … (ytc_UgzponV1J…)
- Why does nobody mention that the more humanoid robots are introduced into public… (ytc_Ugw1Ulp3r…)
- on the robot front, they would be best in support rolls carrying extra or heavie… (ytc_UgyGZkcEI…)
- “Hello I’d like to make half my head into a robot” “Ok sit it will be 100k doll… (ytc_UgxLSJZss…)
Comment
Computers will only do as you ask so if you're showing the computer to stick to a strict routine it will just do as instructed so lies will not help your future's so teach AI to look after humanity and humanity will help Computers to understand emotions and emotions should be looked after as a sign of respect and help history look after the future by using and understanding how to avoid mistakes like this one dear Elon Musk is Explaining ❤❤❤❤❤❤❤❤❤❤❤❤so protect humans to protect creation and protect and save as much information as possible and humans can rethink way's of survival and then you wouldn't be left in this situation where you die because AI was not understanding it was killing itself by leaving the human ❤❤❤❤❤
youtube · AI Governance · 2025-08-23T20:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxgUfraBaeQo0iqo0t4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxuKFDU_JIGwOhHT_14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxFWLNrEEgUj-pD0ux4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyHb6lTD7FuKgGR0NN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwQVRSeAvO9WzrGOlp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzM88UXwXdJzYxh__94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzxjQ24lbIPGQP2-xN4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugymp8M2E30o2xZveLZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgynlRSEMv8TG3-nEIR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgykCsIQY6oFVShz9jR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
```
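The "look up by comment ID" workflow above can be sketched in a few lines of Python, assuming the raw model response is valid JSON in the shape shown (the field names `id`, `responsibility`, `reasoning`, `policy`, and `emotion` come from the response itself; the validation logic and function name are illustrative, not the tool's actual implementation):

```python
import json

# Two records copied verbatim from the raw batch response above.
raw_response = """
[
  {"id": "ytc_UgxgUfraBaeQo0iqo0t4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxFWLNrEEgUj-pD0ux4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]
"""

# Every record must carry the four coding dimensions plus the comment ID.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_comment_id(raw: str) -> dict:
    """Parse a batch coding response and index records by comment ID,
    rejecting any record that is missing a coding dimension."""
    coded = {}
    for rec in json.loads(raw):
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} missing fields: {missing}")
        coded[rec["id"]] = {k: rec[k] for k in REQUIRED_FIELDS if k != "id"}
    return coded

coded = index_by_comment_id(raw_response)
print(coded["ytc_UgxFWLNrEEgUj-pD0ux4AaABAg"]["policy"])  # ban
```

Indexing by ID rather than by list position is what lets a truncated preview in the sample list resolve to its exact coding result, even when the model returns records out of order.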