Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_UgziRw_F9… : “honestly the data sets is the biggest issue, from voice recidnition not understa…”
- ytr_Ugzk22g8G… : “We dont really know what consciousness is. But because ours is inextricably link…”
- ytc_Ugw9xDlMn… : “No AI predicted he would go from 2025 to 1939 in 8 seconds riding his swasticar…”
- ytc_Ugx8j5NjE… : “Maybe I’m evil for thinking this but what if in response to the deepfakes, the m…”
- ytc_UgxyHkY1Z… : “It’s crazy how ai bros make fun of artists about “you lost your job”. And then …”
- ytc_UgwltrV1Z… : “AI is still theft. It’s good for fields like medicine or science (things governe…”
- ytc_Ugy2Eul6h… : “Oh id get death sentances if my c. Ai gets revealed (totally not husk's fault)…”
- ytr_UgwnA_iMR… : “The thing is, she already kind of elaborates on this, AI is quick but is harmful…”
Comment
> @noone-ld7pt I've been shouting from the rooftops about the problems with self driving cars for many years. I'm not scared of change. I embrace change. But the methods used by AI today will never provide reliable guidance. AI today has zero ability to reason. I'm not worried about sentience or AGI (I loath that vague term because definitions vary wildly). I'm worried about humans putting too much faith in AI. If there's something that will end humanity, it will be putting blind faith into AI and conceding control.
>
> There are infinite situations in the world. A human can see something they've never seen before and make decisions. AI cannot.
youtube · AI Harm Incident · 2024-06-02T23:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
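A coding like the one above can be sanity-checked against an allowed-value set per dimension. A minimal sketch in Python; the value sets below are assumptions inferred only from the codings visible on this page, not the project's full codebook:

```python
# Allowed values per coded dimension. NOTE: these sets are inferred from the
# codings shown on this page and are likely incomplete; the real codebook
# may define additional categories.
SCHEMA = {
    "responsibility": {"none", "company", "developer", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"approval", "fear", "indifference", "resignation", "unclear"},
}

def validate(coding: dict) -> list:
    """Return the dimensions whose value falls outside the schema."""
    return [dim for dim, allowed in SCHEMA.items()
            if coding.get(dim) not in allowed]

# The coding result shown above ("unclear" on every dimension) passes:
print(validate({"responsibility": "unclear", "reasoning": "unclear",
                "policy": "unclear", "emotion": "unclear"}))  # []
```

Running the validator over each parsed record before storing it catches malformed model output early, which matters because the raw responses are free-form LLM text that merely happens to be valid JSON most of the time.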
Raw LLM Response
[
{"id":"ytr_UgyKib66b8s5YkeJGnR4AaABAg.A4Cnn1DJ6g4A4cWbquRXTr","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_UgyTBsoIjnKqYD_a-sx4AaABAg.A4CggS7ktZiA5uoYd_7TjJ","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgyTBsoIjnKqYD_a-sx4AaABAg.A4CggS7ktZiA6ZHFOwhYwr","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugwn1haP5cFdZs_AVdN4AaABAg.A4AnJcNnh1cA4CDMBnl1P-","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_Ugwn1haP5cFdZs_AVdN4AaABAg.A4AnJcNnh1c4CIFXhmBh3","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytr_UgyrXvIHrsU1GKX3MLp4AaABAg.A48XmVFEoKtA7Mx2e42YYt","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytr_UgyXpXXhU6y1SMq-8pt4AaABAg.A48VmxuDnpJA48WdkgKczf","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_Ugydt8PlJDYYAwWn-Ch4AaABAg.A48HSbRsNp1A48UXZ0LsU9","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_Ugydt8PlJDYYAwWn-Ch4AaABAg.A48HSbRsNp1A4CFrEdOApF","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_Ugy778tA6QSF-ISWArl4AaABAg.A482dS1MC8UA48Y2pekjVg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
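Because the raw response is a JSON array of records keyed by comment ID, the "look up by comment ID" view above reduces to parsing the array and indexing it by `id`. A minimal sketch, using two records excerpted from the response above:

```python
import json

# Excerpt of the raw LLM response shown above (first two records).
raw = '''[
{"id":"ytr_UgyKib66b8s5YkeJGnR4AaABAg.A4Cnn1DJ6g4A4cWbquRXTr","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_UgyTBsoIjnKqYD_a-sx4AaABAg.A4CggS7ktZiA5uoYd_7TjJ","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]'''

# Index codings by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw)}

record = codings["ytr_UgyTBsoIjnKqYD_a-sx4AaABAg.A4CggS7ktZiA5uoYd_7TjJ"]
print(record["responsibility"], record["emotion"])  # company fear
```

The same index supports simple aggregation, e.g. counting how often each `emotion` value appears across a batch of codings.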