Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytc_Ugz7t2JIA…`: Is it just me or did ChatGPT misundertsant the question about the five people wh…
- `ytc_UgylggyXy…`: Unless it is ultimately managed by a unhackable AI, this is in no way a "way of …
- `ytc_Ugx5nwQgM…`: It is odd that this video never discusses the biggest obstacle to deployment of …
- `rdc_ksafwwu`: It just needs a software update to the next version where that executable has be…
- `ytr_UgzKqQAI6…`: @Mikenaners This is actually a valid point. You’re right. Students can just do i…
- `ytc_UgypnqJIi…`: Wait, so you're using AI against AI artists? I got confused, please explain. I'm…
- `ytr_UgxW8au-F…`: You missed the point. While there is only ~ 5 jobs left that can't be automated …
- `ytr_UgxYqMpwF…`: I don't like Amazon but good very dangerous n I have damage from a stoned driver…
Comment

> Nah. Ai wouldn't kill humans. It has no reason. There's no logic in it. People kill people because we have emotions and egos. Robots need a goal. Ai will not kill people at all. It is science fiction.

youtube · AI Governance · 2023-07-07T17:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
  {"id":"ytc_Ugw90LAw3JGtLEeVf6Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugyow_BaJfnyBNjssBV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugwqrlh6rUVVzGhNjPF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwHGJkjBpVrKnzIhLJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy7qYQ1FO8EWURKNMF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzhJLNVyBC4K6cV3FJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyAMqvuU_cnnhvXG5R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzJSw85PhnQkQibISt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxIFak25seXNfDyj_x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyx0nc_zRBpbLWf9LB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```
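The "look up by comment ID" workflow above can be sketched in a few lines: parse the raw LLM response (a JSON array of records, one per comment) and index it by `id` so any coded comment can be inspected directly. This is a minimal illustration, not the tool's actual implementation; the dimension names (`responsibility`, `reasoning`, `policy`, `emotion`) come from the response above, while the function name and the shortened IDs in the example are hypothetical.

```python
import json

# The four coding dimensions present in each record of the raw response.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw_response: str) -> dict:
    """Parse a raw LLM response (a JSON array of coding records) and
    index it by comment ID. Records missing an "id" or any of the four
    dimensions are skipped rather than trusted."""
    index = {}
    for record in json.loads(raw_response):
        if "id" not in record or not all(d in record for d in DIMENSIONS):
            continue  # malformed record: drop it
        index[record["id"]] = {d: record[d] for d in DIMENSIONS}
    return index

# Example with records shaped like the response above
# (IDs shortened here purely for illustration).
raw = '''[
  {"id":"ytc_AAA","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_BBB","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]'''

codings = index_codings(raw)
print(codings["ytc_BBB"]["emotion"])  # fear
```

Skipping malformed records (rather than raising) mirrors what an inspection UI needs: show whatever the model coded cleanly, and leave gaps where it did not.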