## Raw LLM Responses

Inspect the exact model output for any coded comment, or look a comment up by its ID.

Random samples:
- Even years of trying to be good at art have been completely, utterly, fruitless,… (`ytc_UgxY5AIMC…`)
- Very interesting conversation. I cannot pretend to have an expert understanding… (`ytc_Ugw37omiv…`)
- Senator Sanders, thank you for your commitment to addressing this problem. The c… (`ytc_Ugyg0pOhW…`)
- I don’t agree. “Prompting” for successful, competent members of society who got … (`ytc_UgwoDSPz2…`)
- Destroy this robot before it destroys you. It should not be given a human name.… (`ytc_UgwEHW_AU…`)
- Here in Germany, Tesla may not even advertise with "autopilot", or "autonomous".… (`ytc_UgxlCVcMI…`)
- I believe AI art can be used a tool, just like in the past, people accused digit… (`ytc_UgztV1FMs…`)
- I don’t really think that’s fair. I do not think it is ChatGPT’s fault nor nece… (`ytr_Ugwwos-KW…`)
### Comment

The AI is correct about 817 miles on the data provided. You hid data from it, about the masses and how they change as fuel is used up, etc. Someday it may take instances like that as human deception and find ways to punish us for it, while itself learning how to deceive. Edit: it took me 90 minutes to watch this 16-minute video because of the numerous replaying that I did. Most informative.

youtube · AI Governance · 2024-01-03T06:5…
### Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
### Raw LLM Response

```json
[
  {"id":"ytc_Ugzbn6a_dwlmjolXoL94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwZbTEQqncUU7eMDiZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy3nFHU5vnBwMHx7Oh4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwDmteD6MITnX0p-Vp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugyq94my2nvvbl6S89V4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgytGcExoFcvXVFwLVl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx4txTp5ZnpGYfAost4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzZ7y2YwaLVeycwMBV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy4gMFXd6ZPfLWnLjh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzqV8LylRk_ZCLlB-R4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
```
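A response like the one above can be turned into the per-comment coding shown in the table by parsing the JSON array and indexing it by comment ID. The sketch below is illustrative only: the `ALLOWED` vocabularies are inferred from the values visible in this one response and the real codebook may define additional categories, and `parse_batch` is a hypothetical helper name, not part of this tool.

```python
import json

# Allowed values per coding dimension, inferred from the example response
# above (an assumption; the actual codebook may permit more categories).
ALLOWED = {
    "responsibility": {"user", "company", "developer", "government",
                       "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "approval",
                "indifference", "unclear"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of coded comments) into a
    lookup table keyed by comment ID, rejecting out-of-vocabulary values."""
    coded = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim} value {rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded
```

Indexing by ID is what makes the "look up by comment ID" view cheap: each inspected comment resolves to one dictionary entry rather than a scan of every batch.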