Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)
AI training what he describes, where the behavior is not what the company wanted…
ytc_Ugwhfc6qy…
Writing assignments need to end anyways as we know them. Either the kids are goo…
rdc_jvlwi23
The other day, I convinced ChatGPT that my cat is a math genius and that she can…
ytc_UgynbMMm4…
They look similar in the way that a chinese knock off MMO looks similar to it's …
ytc_Ugyphp8Sd…
I would argue that aerospace technologies do not have the same impact on civil u…
rdc_o88h9eb
Tesla's rely soley on cameras for their self driving vehicles, which is scary. I…
ytc_Ugz5HLgrs…
Still it chose the wrong lane turn left. An intelligent humann driver would have…
ytc_UgyFGHuy7…
Decades to come to think like humans, really? 1) Why would they want to think li…
ytc_UgyK8d5gS…
Comment
AI is already used for bad thinks, i.e. stealing famous peoples voices to sell rubbish to us, think Jorden Peterson in YouTube ads. This is just scratching the surface. Who will AI be beholden to? Imagine explosive drones run by AI that can make it's own emotionless decisions by.
youtube
AI Governance
2024-01-01T21:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzZURB72pN-H0rj1E14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx0rQ6F8W3yOfM5CGx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx9ucjPjRSa6i5psNN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyTtmnNWGcKuZlMNCN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx6TZlT67za1HQxP4Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugwy8vtt-SmOS66rlep4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwumRVU0VKnHdpT_uN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxbKfkDOzcD6cdwE0t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugwveza6vDs8F_e5-sh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy3DzDJADYaMeQwrg94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"fear"}
]
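A raw batch response like the one above can be turned into a lookup table keyed by comment ID. The following is a minimal sketch of such a parser; the field names come from the JSON above, while the allowed value sets are inferred from the visible samples and may be incomplete relative to the actual codebook:

```python
import json

# Allowed values per dimension, inferred from the visible samples;
# the real codebook may include additional categories.
ALLOWED = {
    "responsibility": {"company", "developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "approval"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: coding}.

    Rows with a missing ID or an out-of-codebook value are dropped,
    so malformed model output never reaches the coded dataset.
    """
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            continue
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded

# Hypothetical one-row response for illustration.
raw = ('[{"id":"ytc_x","responsibility":"company","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"outrage"}]')
print(parse_batch(raw)["ytc_x"]["policy"])  # regulate
```

Keying by comment ID is what makes the "Look up by comment ID" view above cheap: each coded comment resolves to its dimensions in a single dictionary access.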