Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
One of the giveaways of AI is the woman who mispronounced Ides in Ides of March.…
ytc_UgzWy942Y…
Prof. I can't wait to see our students facing your little robots. They're going to literal…
ytc_UgyRLslpQ…
#tl;dr
The project aimed at using GPT-4 to generate a complete novel from scrat…
rdc_jdigch5
@RipFussxl ":)" as if you are giving good advice
so tell me when you lose access…
ytr_UgxhfL-RU…
You're forgetting one thing, it's more about selling dream to those who doesn't …
ytc_UgxH-6HVZ…
Nathan Watt
I'm only offering a potential answer. A computer could be the great…
ytr_Ugj9cXReQ…
People have been saying if an AI can do things that humans can do they can be we…
ytc_UgwZO0s-R…
It’s not rare at all to see Musk go silent like that in the middle of an intervi…
ytc_Ugzvjbm3d…
Comment
People keep saying "oh the driver should have known" or a "better driver would have prevented that", but that's exactly the point. You can't just expect everyone to be able to react perfectly at all times - exactly why these incidents happen. That's part of human error and what AI is there to improve.
youtube
2021-12-28T13:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzqzqZCuaxc2Xi3Rhd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw7VkeZfk6iMyLHMdx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzYyo3j8GDhqvKlEVR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxexkQg6tZdTYpmKgd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxC0uBDLUrtMBMyFZx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw_CFZRHmgDJIkCXKR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzix9MI7_81v0ODBxh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzEuAJQ1XsUCCnexfp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxdZWdMNE5pYggbp3x4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxN98Cl05jycq5Zf1l4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"}
]
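The lookup-by-comment-ID flow above can be sketched in Python: parse the raw LLM response as a JSON array, validate each row against the coding dimensions, and index the rows by `id`. This is a minimal sketch, not the tool's actual implementation; the allowed-value sets below are assumed closed vocabularies inferred only from the values visible in this dump, and the real codebook may include more labels.

```python
import json

# Allowed values per dimension, inferred from the coding results shown
# above (assumed closed vocabularies; the real codebook may differ).
DIMENSIONS = {
    "responsibility": {"ai_itself", "user", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"approval", "fear", "indifference", "outrage", "mixed", "unclear"},
}

# A two-row excerpt of the raw model response, for illustration.
RAW_RESPONSE = """[
  {"id": "ytc_UgzqzqZCuaxc2Xi3Rhd4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugw_CFZRHmgDJIkCXKR4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]"""

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response and index codings by comment ID,
    dropping any row with a missing or unrecognized dimension value."""
    by_id = {}
    for row in json.loads(raw):
        if all(row.get(dim) in allowed for dim, allowed in DIMENSIONS.items()):
            by_id[row["id"]] = {k: v for k, v in row.items() if k != "id"}
    return by_id

codings = index_codings(RAW_RESPONSE)
print(codings["ytc_Ugw_CFZRHmgDJIkCXKR4AaABAg"]["responsibility"])  # ai_itself
```

Validating against the dimension sets before indexing means a malformed or hallucinated row is silently skipped rather than stored, which keeps downstream "look up by comment ID" queries from surfacing uncodable rows.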