Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> Tried asking GPT about timeline for James Cameron's submarine dives this morning. Machine kept giving me embellished, incomplete answers, vague details. Any specific dates, gear, people, machinery involved I asked about had to be clarified half a dozen times to get to coherent reply. The fact that machines elaborate past facts to make something "sound good", very alarming. Tossing around information that is 90% true, 10% made up is plain dangerous. Incomplete information given as truth = AI gossip at its worst.

Source: youtube · Category: AI Harm Incident · Posted: 2024-08-04T00:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugyqe9wOA6LjL_7Up_d4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxWLuBBPq6F09riOr54AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxVCdTMT6D4ydUCRN54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx9KMqtekHku2-2NgN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugz3ROlWx6cSjloUwUF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy21QXsLbFka1lX4wx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwqZfCY4m27A4AsOSZ4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugx4WaTkqBvPtHR6aCJ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgznEf1zIgRg3p5dtOZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyW4SaPFNZB_DCo1Vd4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"}
]
```
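The raw response above is a JSON array of per-comment codes, one object per comment ID, with the four coded dimensions (responsibility, reasoning, policy, emotion) as fields. A minimal sketch of how such a response could be parsed and indexed by comment ID, assuming this exact field layout (the function and variable names here are illustrative, not the dashboard's actual API):

```python
import json

# Two entries copied from the raw response above, as a sample payload.
raw_response = """
[
  {"id": "ytc_Ugyqe9wOA6LjL_7Up_d4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxWLuBBPq6F09riOr54AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "fear"}
]
"""

def index_codes(raw: str) -> dict:
    """Map comment ID -> coded dimensions, dropping the redundant id key."""
    rows = json.loads(raw)
    return {row["id"]: {k: v for k, v in row.items() if k != "id"} for row in rows}

codes = index_codes(raw_response)
print(codes["ytc_Ugyqe9wOA6LjL_7Up_d4AaABAg"]["policy"])  # → liability
```

With an index like this, the "Look up by comment ID" view is a single dictionary access, and the per-dimension values can be tallied across comments for summary counts.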