Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- “Now consider all the self-driving cars are in a network. this quickly helps the …” (ytc_UgzlBBbuf…)
- “I know what you mean but I’m guessing a lot of people don’t… saying you’re an “A…” (ytr_Ugzf8rWE-…)
- “data minig locations huge noise huge electricity pits, driverless cars etc.... A…” (ytc_UgxRujLch…)
- “All of these endless podcasts of mental masturbation about a technology that is …” (ytc_Ugwm_HK6Y…)
- “Automation & robotics have taken millions of jobs over many decades and ai will …” (ytc_Ugzg_tHtR…)
- “Isn’t the AI just referencing text it has read when responding to questions abou…” (ytc_UgyifskYk…)
- “There’s a fallacy analogy from the Chinese internet, called “this egg tastes ter…” (ytc_UgxpJnvKc…)
- “ChatGPT is not the best source of honesty, in fact it comes across as being full…” (ytc_UgzEYfsEl…)
Comment
If ai advances even more and switches to robotic machines, we are going to be extinct in no time, the robotic ai will have its own mind and will quickly hack the world and it's electronics and computers and online websites and use other robots to make advanced robots and make millions of robotic army and will kill all humans and make humanity extinct
Source: youtube · AI Governance · 2024-07-26T12:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[{"id":"ytc_UgyKOguUYtBvqDzBGOp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz7Fq3egy7_QLEcRjp4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzVDMIoe-xlcUGctcl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzN5a4X0YFj1tL9-UJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyzTa6XQuuqLida4tJ4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwRBGZydDOMRTxE-Dd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugxztbha2NzpayqG8OR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyHFHSLBV2UgDqkbCt4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy0cIVZF4OBvJpWCg54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy0jQJ0AeHJpmnZ9IN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}]
```
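The raw response is a JSON array with one record per comment ID, holding the four coded dimensions. A minimal sketch of parsing and sanity-checking such a batch — the field names come from the response above, but the sets of allowed values per dimension are an assumption inferred from the examples shown, not an authoritative codebook:

```python
import json

# Allowed values per dimension (ASSUMPTION: inferred from the sample
# output above, not from an official coding scheme).
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "liability", "unclear"},
    "emotion": {"fear", "outrage", "mixed"},
}


def parse_batch(raw: str) -> dict[str, dict]:
    """Parse a raw LLM batch response into {comment_id: coding}."""
    out = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        coding = {dim: rec[dim] for dim in ALLOWED}
        # Reject values outside the expected vocabulary so malformed
        # model output is caught before it reaches the results table.
        for dim, value in coding.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        out[cid] = coding
    return out


# Usage with a single hypothetical record:
raw = ('[{"id":"ytc_x","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
codings = parse_batch(raw)
print(codings["ytc_x"]["emotion"])  # fear
```

Keying the result by comment ID makes the "look up by comment ID" view above a plain dictionary lookup.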