Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "All people make mistakes. THE CREATOR of AI is human. What makes you believe tha…" (ytc_Ugz3bcKLU…)
- "Why why is everybody pushing fear? AI is a mirror of what we put into it but if …" (ytc_UgzOgrUdi…)
- "Real artist take another art as reference, but stop saying that AI does the same…" (ytc_UgygAxAl4…)
- "True, one time I asked an AI the strangest chat they’d have, one person apperant…" (ytc_Ugz6lSND-…)
- "If you make ‘truth’ the most important criteria for AI, a lot of these problems …" (ytc_UgzZyupWM…)
- "i dont want ai to write stories so that i can do the dishes / I want the ai to do…" (ytc_Ugys1JAAj…)
- "If AI is intelligent, AI won't get humanity exctint, because if it does, earth w…" (ytc_UgyEQI9nU…)
- "Actually the value has gone down because surprise surprise, people are buying in…" (ytr_UgxdOmAT3…)
Comment

The thing is, I’ve had a coworker do something similar. They asked for a report on data we don’t have access to, I tried to explain it wasn’t possible, they then turned around and asked ChatGPT to write the report and sent that to me with instructions to “just clean it up a bit” - I say we can’t use it. They say we can. I then spend hours digging into everything it said and looking for every instance that’s contradictory or references data we do have access to so I can compare. Send a full report on the report. Finally get shock & horror “I didn’t know it could lie!” and we can finally start the actual project, redefined within the bounds of what we can access. 🤦🏼♀️

youtube · AI Responsibility · 2023-06-11T01:0… · ♥ 7000
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyRoaCg7x3OZJR4O6x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyiLQLlrYp1XYwKxsd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugzze33_x1vOLOMvbtx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxrsiClrjgnVYUyrXB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzt_gypvVU0_8YnB3B4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz8zsBAsgMaTbCvlCh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"amusement"},
  {"id":"ytc_UgykTuYLzn5a3d20mT14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzBSQmIDIkf6q3sc2Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz1-IV-b34l33M-zBZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"amusement"},
  {"id":"ytc_UgzBRbP4UfPLeavcek54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
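A raw response like the one above is a JSON array of records, one per comment, each coded on four dimensions (responsibility, reasoning, policy, emotion). Below is a minimal sketch of how such a batch could be parsed and validated before the codes are trusted. The allowed value sets here are inferred only from the sample shown above (the full codebook may define more categories), and the `validate_batch` helper name is hypothetical, not part of any tool shown here.

```python
import json

# Allowed values per dimension, inferred from the visible sample only
# (assumption: the real codebook may contain additional categories).
SCHEMA = {
    "responsibility": {"none", "user", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none"},
    "emotion": {"indifference", "outrage", "approval",
                "resignation", "amusement", "fear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # IDs in the sample start with ytc_ (comments) or ytr_ (replies).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            continue
        # Every dimension must be present with an allowed value.
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_UgyRoaCg7x3OZJR4O6x4AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')
print(len(validate_batch(raw)))  # → 1
```

Dropping malformed records rather than raising keeps a long coding run alive when the model occasionally emits an out-of-schema label; rejected IDs could instead be queued for re-coding.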