Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:
- "Argument that there should be a consent for the copyrighted material used by the…" (ytc_UgwUqdBdg…)
- "I don't know if it's hard to tell. Both of them looks AI generated to me.…" (ytc_UgwvOu2Z9…)
- "My ChatGPT insists it has no thoughts and no consciousness. I wonder if my AI is…" (ytc_UgyRkMGYh…)
- "The thing about art we draw for the process and the enjoyment of it is something…" (ytc_UgzvEiQSe…)
- "AI image prompters are mad at anyone who has a working hand and enough will to p…" (ytc_UgxoSrrGH…)
- "I mean I put more blame on low Standards of what constitutes art than anything. …" (ytr_UgyzRnIIe…)
- "Sophia, the AI robot in the video, does not go to a workout as she is an artific…" (ytr_UgyEPHzsM…)
- "This doesn't mean the DOD is blocked from using some AI for whatever they want.…" (ytc_UgzZeNv4s…)
Comment
I think we might be able to spot early signs of a rogue SSI (Super/Superintelligent AI). It would be too much of a leap for an AI to try to take control and then to master that ability and succeed. It would likely do a poor job at first, fail, and we would learn a lot from the event. It would likely fail in its first attempts EVEN if it was explicitly trained for that. That's what complex systems typically do. And that kind of failed attempt would raise general awareness of the dangers of the technology.
Platform: youtube
Posted: 2024-07-04T12:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgzxCtwIz0Jwz1x5_7J4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzBXzC10J_L2EinSw14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy7aKamIoA8Z4toHb94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx-rLJfwOA2da_HZ2N4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UgzLA12UdzUoUKKM7LV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwyB9C9mVGSKbP7h_d4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxp4WWdJ0ciOtEnIvh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxE2G5EiUwgpbqCnMV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwAGhLEXralhsxwru54AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy6DkhPQbEz3N556rx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
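The raw LLM response above is a JSON array with one object per coded comment, keyed by comment ID. A minimal sketch of the lookup-by-ID step, assuming the payload parses as shown (the variable names are illustrative, and the array is abbreviated to two of the entries above):

```python
import json

# Raw LLM response: a JSON array of per-comment codings (abbreviated here).
raw_response = '''[
  {"id":"ytc_UgzLA12UdzUoUKKM7LV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgwyB9C9mVGSKbP7h_d4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

# Index the codings by comment ID so any coded comment can be looked up directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_UgzLA12UdzUoUKKM7LV4AaABAg"]
print(coding["policy"])   # regulate
print(coding["emotion"])  # mixed
```

Note that the first entry matches the Coding Result table shown for the inspected comment (responsibility none, reasoning consequentialist, policy regulate, emotion mixed).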