Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "@desertrat7634hey, you mentioned the VA I hope we can live with that. Anyhow I n…" (ytr_UgyxGJHIn…)
- "Something was really wrong with him mentally that caused him to believe he was i…" (ytc_UgxqkEJm9…)
- "A child could understand why replacing workers for robots is a bad idea. Why can…" (ytc_UgxTWzph-…)
- "Im going to send chatgpt a picture of my butthole and see if it can guide me to …" (ytc_UgyWphDeH…)
- "WERE SO DOOM AI IS TAKE OUR POSITION 😭😭😭😭😭well not me Im a AI developer in the f…" (ytc_UgzF8lSx3…)
- "The robot *owners* are going to have a lot more free time for sure. The rest of …" (rdc_j6grej9)
- "We cant create an AI that self teaches itself and NOT expect it to teach itself …" (ytc_UgzBaX8Oq…)
- "Human society only works because we are interdependent on each other. AI removes…" (ytc_UgyAKryGA…)
Comment
Geoffrey Hinton’s concern should wake up every tech leader and policymaker. When the godfather of AI himself says we’re heading into dangerous territory, it’s not fearmongering—it’s foresight. The question isn’t can AI save the world, it’s will we guide it responsibly enough to make sure it does. Massive respect for his honesty.
| Platform | Topic | Posted |
|---|---|---|
| youtube | AI Governance | 2025-04-19T17:1… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_UgwN-LgXBxORo0_hKpx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy2cD2Mx_4JFgBAbmh4AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyUzajb_NimB34QiWp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwtq4HXI0cmI9ZNRUd4AaABAg","responsibility":"media","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyHsPT9imoMiWSqfql4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwpPg7Hl77AHf4uGaF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugz0iwVDuXQZvm3uqc14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzSgezPQxvfn5zPdf54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwW_bEPSL-Y9ca4Ywx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyGQrOXKzNALsxRbAB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"}]
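The coding result shown in the table above corresponds to one record in a JSON array like this one. As a minimal sketch of how such a raw response might be parsed, validated, and indexed by comment ID, the field names come from the response itself, while the allowed label sets below are inferred only from values visible on this page and are assumptions, not the full codebook:

```python
import json

# Label sets inferred from values visible on this page (assumed, likely incomplete).
ALLOWED = {
    "responsibility": {"ai_itself", "government", "company", "developer",
                       "media", "user", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "approval", "outrage", "mixed"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM coding response and index records by comment id.

    Raises ValueError if a record is missing a dimension or uses a label
    outside the inferred ALLOWED sets.
    """
    records = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
        records[rec["id"]] = rec
    return records

# Usage with a hypothetical single-record response:
raw = ('[{"id":"ytc_example","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"approval"}]')
codes = parse_codes(raw)
print(codes["ytc_example"]["policy"])  # regulate
```

Validating against an explicit label set at parse time catches the most common failure mode of structured LLM output: a syntactically valid record carrying a label the codebook never defined.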