Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
I can see AI being practical for creating hypothetical scenarios for training la…
ytc_UgzX5nCxf…
I believe AI is dangerous, because a bunch of nerds with no social skills made i…
ytc_Ugypfytre…
Is this real?! Or is the ChatGPT voice just spoken by someone else?!?!
Is it ba…
ytc_Ugzo_ogtP…
We are safe until AI can control its own power supply and protect that power sup…
ytc_UgxjF5BPB…
I just hope that if we do soon see super artificial intelligence that they are m…
ytc_UgyNMaX1o…
Automation and AI will destroy the majority of human jobs. People best wake up …
ytc_UgzckZp4D…
I'll admit, AI art is fun to mess around with. But you can't take it further tha…
ytc_Ugz3kIVoy…
disarming them before/when brexit happens might be best? knowing now they side w…
rdc_enrf6l5
Comment
While I understand the idea and think it could be great... I've heard some people say that it's actually NOT a good idea to send more messages than what's really needed, because the use of these AI models releases a lot of carbon dioxide and making the "conversation" unnecessarily long inevitably makes it worse... 😕
So I'm a bit divided on this issue!
youtube
AI Moral Status
2025-05-26T06:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyjG8JFRgRRd7PlCoB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxsX3USzUmW2rDJY8t4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwS0MrSWwnU620mgA54AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzhsHMOSLm23X5ttM14AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzWRfwG21r07pdAUYh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyPVA3Rb7jN2VsdntF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyPx1jQjKwrDG203cF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx4ZEMaXX6r-BCQiVl4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyXfHGo4EhFMGhA06x4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugz5IcDhc-oVDevBy0x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
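The lookup-by-comment-ID step can be sketched in a few lines, assuming the coder returns a JSON array of per-comment labels keyed by `id` as in the raw response above (the `lookup` helper name is ours, not part of the pipeline):

```python
import json

# A two-row excerpt in the same shape as the raw LLM response above.
raw_response = """
[
  {"id": "ytc_UgxsX3USzUmW2rDJY8t4AaABAg",
   "responsibility": "distributed", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyjG8JFRgRRd7PlCoB4AaABAg",
   "responsibility": "unclear", "reasoning": "unclear",
   "policy": "unclear", "emotion": "mixed"}
]
"""

# Index the coded batch by comment ID so any single coding is O(1) to retrieve.
codings = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id):
    """Return the coded dimensions for one comment, or None if it is absent."""
    return codings.get(comment_id)

result = lookup("ytc_UgxsX3USzUmW2rDJY8t4AaABAg")
print(result["responsibility"], result["emotion"])  # distributed fear
```

A missing ID simply returns `None`, which is the case the inspector would surface as "no coding found" rather than raising an error.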