Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Gonna quote Dr Ian Malcolm from Jurassic Park here : "Your scientists were so preoccupied with whether they could, they didn't stop to think if they should." .... That said, I think Elon is right - not because it's dangerous for humans but because it will threaten the field itself. If we don't give society time to absorb AI and have it partake in its development, ultimately that would invite all sorts of oversight and regulatory lashback by the Government.
Platform: youtube
Topic: AI Governance
Posted: 2023-03-30T14:1…
Likes: 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzAYGAnLWGwJxjoogl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxdnOdniNHKeh8YfIF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxobUIUDXDThJzsu2F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzDHZCcGELK5cpAcV94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyLd35E5ey7A6cP6KJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxwZYQZD4AHJu08dgp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxacG58xJoGxLlXJV94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy3Um-vEEKjNsQtfyd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyvWtNOL_eTfuic8bp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzbi0Ks8n9hKgI7R614AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}
]